magento2.4 – CPU issues after upgrading from Magento 2.2.4 to Magento 2.4

We recently upgraded our site from Magento 2.2.4 to Magento 2.4.

We did the upgrade on a copy of our site on a test server and everything was fine.

When we upgraded our live server, page load times increased dramatically whenever 4+ people were on the site at the same time. This also crashed Elasticsearch, so we moved it to its own VPS and ES works fine now. Before the upgrade, 15-20+ users online at one time wasn’t uncommon and the server handled it fine.

With 2.2.4, we had a VPS with 6GB RAM and 4 CPUs. Our hosting provider suggested we increase this when we ran into issues after the upgrade. We’re now at 8 CPUs and 12GB RAM, and although that improved performance, load times from the server were still very long.

We now have Varnish running on a separate VPS, and while that has sped up load times, it’s still not good enough. Varnish has been running for the last 24 hours and we’ve been getting 503 and 504 errors; our Magento developer tells me these occur because Varnish waits too long for our Magento server to respond.
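
If that’s right, the relevant knobs would be Varnish’s backend timeouts: when Magento doesn’t start responding within first_byte_timeout, Varnish gives up and serves an error. A minimal sketch of that part of the VCL, with a placeholder host/port and purely illustrative values (not our actual config):

backend default {
    .host = "192.0.2.10";          # placeholder for the Magento VPS
    .port = "8080";
    .connect_timeout = 5s;
    .first_byte_timeout = 300s;    # how long Varnish waits for Magento to start responding
    .between_bytes_timeout = 60s;
}

Raising these would only mask the slowness, of course; the real question is why Magento takes so long to respond.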

Our hosting company is now telling us we need to get a dedicated server. Is this necessary? Our Magento developer says our VPS should be fine if there are no issues with the server itself, and our hosting company insists the server is fine.

We’re unsure what to do, as we don’t have much confidence in our hosting company: they have just been telling us to upgrade our package without really investigating why we’re having these issues.

network – Macbook 2007 (A1181) Wi-Fi issues in Windows

I have:

  • MacBook A1181
  • Windows 8.1 in Boot Camp with the latest patches
  • All Boot Camp drivers installed
  • Keenetic Viva router with 2.4/5 GHz dual-band Wi-Fi, located ~1m from the MacBook

The issue is that download/upload speed over Wi-Fi is very low in every app (~60 kbps) and the connection frequently drops. The Ethernet connection, on the other hand, works great.

The same MacBook works great over Wi-Fi when I boot macOS, but unfortunately I need Windows.

Other devices, like an M1 MacBook Pro (late 2020), iPhones, and lots of Android phones (even 2.4 GHz-only ones) work fine too, so it seems to be a driver issue.
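
Driver details, as Windows sees them, can be dumped with the standard netsh commands below; I can post the output if it helps:

netsh wlan show drivers
netsh wlan show interfaces

(The second command also reports the negotiated radio type, signal, and receive/transmit rates, which should show whether Windows is stuck at a very low rate.)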

Any ideas on how to fix it?

long exposure – Issues with dark frame subtraction: Dark frames adding “noise” and changing image color/tint

While editing some landscape shots with stars, I tried to use dark frames to reduce the noise.
More precisely, my approach was to take a series of shots, then: first, subtract dark frames from each shot; second, use the mean of the series for the foreground to further reduce noise; and third, use an astro stacking tool (Sequator) to stack the sky.
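
In symbols (my own notation), what I am trying to compute per pixel is

$$I_k^{\mathrm{cal}} = I_k - \bar{D}, \qquad \bar{D} = \tfrac{1}{N}\textstyle\sum_{j=1}^{N} D_j, \qquad F = \tfrac{1}{K}\textstyle\sum_{k=1}^{K} I_k^{\mathrm{cal}},$$

where the $I_k$ are the light frames, the $D_j$ are the dark frames, and $F$ is the averaged foreground; Sequator then aligns and stacks the sky regions of the $I_k^{\mathrm{cal}}$.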

Instead of reducing noise, the dark frame subtraction:

  1. increased the noise – or rather, added some dark/monochrome noise, and
  2. changed the white balance / tinted the image (see the crops below).

I do not understand why this is happening or what I am doing wrong.

Procedure / troubleshooting employed:

  • All photos were shot in succession with the same settings (15 s, ISO 6400, in-camera dark-frame reduction disabled).

  • All photos were shot with the same white balance.

  • While shooting the dark frames, both the lens cap and the viewfinder cover were applied.

  • Photos were imported from my Pentax K-1 II, converted to DNG in LR, and exported to PS without any editing/import presets applied.

  • In PS, I placed the dark frame layer(s) above my picture and used the Subtract blending mode.

  • I followed basic instructions found here and in various videos on dark frame subtraction in Photoshop. Note that basically all of those cover subtraction with a single frame (or use tools other than Photoshop). I have tried using both one and three frames; the results are similar, albeit more pronounced with three.

  • I also tried the free tool Sequator to subtract dark frames instead (and to align stars). Adding the dark frames there made absolutely no difference.

  • (This is an edit/composite done with the frames I tried to subtract dark frames from.) [image]

  • A crop of the first picture, with 3 dark frames subtracted: [image]

  • A crop of the second picture, without dark frames subtracted: [image]

networking – How to troubleshoot DNS not resolved issues?

I can’t open Hotstar/Disney+ on my Android phone or Android TV. Upon investigation, I found that hotstar.com doesn’t open at all on the phone/TV and throws an “ERR_NAME_NOT_RESOLVED” error. I can open hotstar.com on my laptop on the same network. Since this problem started abruptly, I don’t even know where to start looking. Thanks for any suggestions you may provide.
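
For reference, on the laptop (which is on the same network) the standard nslookup tool can at least separate “the name is unresolvable” from “this device’s resolver is broken”:

nslookup hotstar.com
nslookup hotstar.com 8.8.8.8

The first query uses the DNS server handed out by the router; the second bypasses it and asks Google’s public resolver directly.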

nginx udp load balancer issues

I am trying to get nginx to load balance UDP traffic. The upstream servers are configured for DSR and do not pass traffic back through nginx. I have a group of ports that I need to forward to the upstream servers while preserving the destination port. So the traffic needs to reach nginx; nginx then looks at the list of upstream server IPs and picks one using “hash $remote_addr consistent”, and sends the traffic to the chosen upstream’s IP with the incoming client’s IP as the source address and the original destination port it arrived on. The upstream server then receives the traffic as if it had never gone through nginx. Any thoughts?

I have tried using a range with listen 9000-9999; but it doesn’t work and gives the error “host not found in "9000-9999" of the "listen" directive”, so I have a listen line for each port, which is a real pain (see the note after the config below).

stream {

    upstream stream_backend {
        hash $remote_addr consistent;
        server 10.10.10.14:8999;
    }
    
    
    server { # upstream load balancing
        listen 8999 udp;
        proxy_pass stream_backend;
        proxy_bind $remote_addr:$remote_port transparent;
        proxy_responses 0; # DSR: replies never come back through nginx
    }
    
    server { # test: going directly to one server's IP
        listen 9000 udp;
        listen 9001 udp;
        listen 9002 udp;
        listen 9003 udp;
        # listen lines continue for the whole port range
        proxy_pass 10.10.10.30:$server_port; # used this to go directly to a server for testing
        proxy_bind $remote_addr:$remote_port transparent;
        proxy_responses 0;
    }
    
}
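
One thing I came across while writing this up: nginx 1.15.10+ documents port-range support in the stream module’s listen directive, but seemingly only with an explicit address. If that’s right, something like the sketch below (untested on my end) could replace the long list of single-port listen lines:

server {
    listen 0.0.0.0:9000-9999 udp;
    proxy_pass 10.10.10.30:$server_port;
    proxy_bind $remote_addr:$remote_port transparent;
    proxy_responses 0;
}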

networking – Debugging Linux/Java -> Redis performance issues

I have an application that currently uses an in-memory cache, and of course the performance is blazing fast. For reasons that are out of scope here, I want to switch to Redis, but performance has dropped drastically.

I have a couple dozen instances, each running nginx and a Java app. /proc/sys/net/ipv4/ip_local_port_range has been bumped to 10240 65535 so that roughly 55,000 ports are available for nginx to talk to the app. With the in-memory cache, the environment can easily support some 10,000 RPS with the app using 45-50% of the CPU. With Redis, I’m only getting 2,500 RPS and CPU utilization doesn’t even cross 10%. The Redis cluster is already configured to support tens of thousands of clients, but my app isn’t actually making more than a couple hundred connections in total: approximately 8 connections per instance. I am using the Jedis driver with the maxTotal property set to 200, so I was expecting to see around 200 × ~25 instances ≈ 5,000 connections.
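
For reference, this is roughly how I’m counting connections and checking latency from one of the app hosts (assuming the default Redis port 6379; <redis-host> is a placeholder):

# count established connections from this instance to Redis
ss -tn state established '( dport = :6379 )' | wc -l

# round-trip latency from this host to a cluster node
redis-cli -h <redis-host> --latency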

How do I go about debugging this? Is there an OS-level knob I should turn so that it makes more connections to Redis? I would appreciate any help.

C++: Is a pointer to a vector going to cause memory issues?

I started to write a function that took a pointer to a vector as a parameter, so that it could modify the vector to output results (the actual return value was an error code), when I started to think about the memory behind that.

For example, if I have the following code:

#include <cstdint>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> *vect = new std::vector<int>();

    for (uint32_t i = 0; i < 10; i++)
    {
        std::cout << "Ptr: " << vect << " Size " << vect->size() << " Max Size " << vect->capacity();
        vect->push_back(i);
        std::cout << " elements 0: " << (*vect)[0] << ", " << i << " :" << (*vect)[i] << std::endl;
    }
}

And when I run it, I get the following output:

Ptr: 0x557c393f9e70 Size 0 Max Size 0 elements 0: 0, 0 :0
Ptr: 0x557c393f9e70 Size 1 Max Size 1 elements 0: 0, 1 :1
Ptr: 0x557c393f9e70 Size 2 Max Size 2 elements 0: 0, 2 :2
Ptr: 0x557c393f9e70 Size 3 Max Size 4 elements 0: 0, 3 :3
Ptr: 0x557c393f9e70 Size 4 Max Size 4 elements 0: 0, 4 :4
Ptr: 0x557c393f9e70 Size 5 Max Size 8 elements 0: 0, 5 :5
Ptr: 0x557c393f9e70 Size 6 Max Size 8 elements 0: 0, 6 :6
Ptr: 0x557c393f9e70 Size 7 Max Size 8 elements 0: 0, 7 :7
Ptr: 0x557c393f9e70 Size 8 Max Size 8 elements 0: 0, 8 :8
Ptr: 0x557c393f9e70 Size 9 Max Size 16 elements 0: 0, 9 :9

It seems as though this could cause major memory issues, because if the vector needs to expand, it could be writing into space that is already being utilized, since that pointer does not change. Even running this over a much larger loop, the pointer looks constant.

I’m still a (relatively) new programmer, and I’m not sure I have the grasp on memory allocation that I would like. Is my understanding correct – will this cause buffer errors and overwrite adjacent memory? Or is there some protection in std::vector that I am not considering?
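
For what it’s worth, this is the follow-up experiment I have in mind (my own sketch, separate from the code above): print the address of the vector object next to std::vector::data(), the address of the element storage, to see whether the storage itself moves as the vector grows:

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v;
    for (int i = 0; i < 10; i++)
    {
        v.push_back(i);
        // &v is the vector object itself; v.data() points at the heap
        // buffer holding the elements, which may be reallocated on growth.
        std::cout << "Object: " << &v << " Buffer: " << v.data()
                  << " Capacity: " << v.capacity() << std::endl;
    }
}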

rules – Issues with Domain Access module

Using Drupal 8.9.13. I cannot seem to make the site I am building work with the Domain suite of modules installed. Symptoms:
After installing and enabling the Domain module, the site generates a general error when I log in, and this appears in the dblog:

TypeError: Argument 1 passed to Drupal\Core\Plugin\Context\ContextHandler::Drupal\Core\Plugin\Context\{closure}() must implement interface Drupal\Core\Plugin\Context\ContextInterface, null given in Drupal\Core\Plugin\Context\ContextHandler->Drupal\Core\Plugin\Context\{closure}() (line 76 of C:\Users\eugene\Sites\acquia\newfaces\web\core\lib\Drupal\Core\Plugin\Context\ContextHandler.php)

I narrowed it down to some kind of conflict between the Layout Builder module and the Domain module. If either one is uninstalled, the error is gone. With Layout Builder uninstalled, I can create domain records and then re-enable Layout Builder; at that point there are no errors.

When I continue enabling the other modules in the Domain suite, they seem to install fine until I install the Domain Source module, which appears to conflict with the Rules module. I cannot view any user profile, including my own: the error indicates that the 512M memory limit is exhausted (nothing in the dblog). Uninstalling either module removes the error condition.
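
For reference, I have been toggling the modules during this bisection with standard Drush commands (Drush 9/10 syntax) and then checking the log:

drush pm:uninstall domain_source
drush pm:enable domain_source
drush watchdog:show --count=20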

Please help. I don’t see any similar issues here.

linux – Issues in communicating via IPSec (StrongSwan) between an Android client and its gateway (IKEv2)

I’ve been attempting to create an IPsec VPN into my home network that I can tunnel into from outside, e.g. from my phone or through my laptop when I’m abroad. Client authentication is done via pubkey authentication with X.509 certificates. All of that is working; the only issue I have is with the Android client (using the official strongSwan VPN app), which fails to connect.

(IKE) authentication of 'arch' with RSA_EMSA_PKCS1_SHA2_256 successful
(IKE) IKE_SA android(3) established between (redacted)(C=IT, O=(redacted),
CN=(redacted) (havoc))...(redacted)(arch)
(IKE) scheduling rekeying in 35733s
(IKE) maximum IKE_SA lifetime 37533s
(IKE) installing DNS server 192.168.1.254
(IKE) installing new virtual IP 192.168.1.74
(IKE) received NO_PROPOSAL_CHOSEN notify, no CHILD_SA built
(IKE) closing IKE_SA due CHILD_SA setup failure

From what I’ve found (and been told), the “received NO_PROPOSAL_CHOSEN notify, no CHILD_SA built” is caused either by a mismatch between cipher suites or by an invalid traffic selector (TS) configuration. Both should be correct, considering that the official strongSwan wiki has a configuration that should support most, if not all, of the up-to-date client cipher suites. The TS is likely correct because the Android client, as can be seen above, does actually get a virtual IP (from the DHCP pool) and does actually install it.
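
For completeness, this is how I have been double-checking what charon actually loaded, and watching the negotiation live (standard swanctl commands):

swanctl --list-conns
swanctl --log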

Configuration:

root@arch ~ # cat /etc/swanctl/swanctl.conf 
connections { 
        rw { 
                local_addrs = 192.168.1.143, (redacted) 
                pools = dhcp 
                local { 
                        auth = pubkey 
                        certs = serverCert.pem 
                        id = arch 
                } 
                remote { 
                        auth = pubkey 
                } 
                children { 
                        net { 
                                local_ts = 192.168.1.0/24 
                                updown = /usr/local/libexec/ipsec/_updown iptables 
                                esp_proposals = aes192gcm16-aes128gcm16-prfsha256-ecp256-ecp521,aes192-sha256-modp3072,default 
                        } 
                } 
                version = 2 
                proposals = aes192gcm16-aes128gcm16-prfsha256-ecp256-ecp521,aes192-sha256-modp3072,default 
        } 
} 
include conf.d/*.conf 

Does anyone have any insight into this?