Is there a good shared host that's better than most and priced slightly below a VPS?

So I've been with a host for over 10 years, but our small WordPress + WooCommerce website is just crawling / freezing on it, and we've done all we can on the site side. We have a reasonable number of plugins installed, and there is barely any traffic on the site, but as soon as 1 or 2 users are on it, it crawls and sometimes even times out!

So I got a reseller package from InMotion Hosting and am trying them out. This site performs slightly better, but the moment there are 2 or more users it crawls, the CPU spikes to 100%, etc.

Method of testing:

2 Tabs open for WP ADMIN

-> refreshing the Orders panel (simulating 2 admins processing orders)

2 Tabs open for Product display (com/?s=baby+&post_type=product)

-> with basic search “baby” (simulating 2 customers browsing products on the front end of the site)

NO OTHER USERS on the site, and the damn thing is barely responsive, occasionally timing out with ERROR 500.


So with just 2 admins processing orders and 2 customers browsing, the site is barely usable!

I'm honestly disappointed with the new host; I thought IMH was high-performing (and I'm paying nearly double for their reseller rates).

I refuse to believe that I need a strong VPS for this site (jeez, just 4 users and a few plugins?!), but I am willing to pay a bit more for a truly well-performing shared host with better hardware or better customer management (because perhaps IMH is overselling, or I'm on a machine where an abusive user is hogging resources!?)

So I'm at the point of being willing to pay for stronger, better shared hosting (a VPS is a waste of $$ at this point and does not make sense at all, again, with just 4 busy users on the site!)

mysql – How do I join rows from the same table with slightly different info?

We have a new sector of our company that is being added to our employee record feed. However, this sector is listed as contractors, since parts of the sector still need access to systems that require domain access… so…

I have:



and for Tom Smith
blah blah,
blah blah,
blah blah…

and also for Tom Smith

Not everyone on the feed is like this, only about 10%. Most are in case number 2.

What I am looking for is the easiest way to match the two rows associated with Tom and fill in as much info on Tom as I can, because some fields come from one account and other fields come from the other. The only thing that can accurately match the two rows is the beginning of the email address – this is standardized. So there should be a few thousand people (rows of duplicates) in my feed that I can combine if it can be done right.

I would also like to capture both EmpIDs and both email addresses somehow in the same row.
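Since no schema is shown, here is a hedged sketch (SQLite via Python; the table and all column names are hypothetical) of one common shape for this merge: a self-join on the part of the email before the `@`, with `COALESCE` filling each field from whichever row has it, and both EmpIDs and both email addresses concatenated into the combined row:

```python
import sqlite3

# Toy feed: two rows for the same person, matched only by email prefix.
# All names and columns here are made up for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE feed (EmpID TEXT, Name TEXT, Email TEXT, Dept TEXT, Phone TEXT);
INSERT INTO feed VALUES
  ('E100', 'Tom Smith', 'tsmith@corp.com',       'Sales', NULL),
  ('C200', 'Tom Smith', 'tsmith@contractor.com', NULL,    '555-1234');
""")

rows = con.execute("""
SELECT a.Name,
       COALESCE(a.Dept,  b.Dept)  AS Dept,      -- take whichever row has it
       COALESCE(a.Phone, b.Phone) AS Phone,
       a.EmpID || '/' || b.EmpID  AS EmpIDs,    -- keep both IDs
       a.Email || ';' || b.Email  AS Emails     -- keep both addresses
FROM feed a
JOIN feed b
  ON substr(a.Email, 1, instr(a.Email, '@') - 1) =
     substr(b.Email, 1, instr(b.Email, '@') - 1)
 AND a.EmpID < b.EmpID          -- emit each matched pair exactly once
""").fetchall()

assert rows == [('Tom Smith', 'Sales', '555-1234', 'C200/E100',
                 'tsmith@contractor.com;tsmith@corp.com')]
```

In MySQL the same idea would use `SUBSTRING_INDEX(Email, '@', 1)` for the prefix and `CONCAT()` instead of `||`; the `a.EmpID < b.EmpID` condition is what keeps the ~90% of single-row people out of the result and avoids double-counting pairs.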

complexity theory – Are there CIRCUIT-SAT algorithms that depend only slightly on gate count?

Yes, such an algorithm exists, but it is the same algorithm used for SAT solving in general.

While the Tseytin extension variables associated with gates do appear in the CNF transformation of the original circuit, a minimally competent SAT solver will rarely branch on them. This is because one of the earliest heuristics discovered for speeding up searches was choosing decision variables based on how often they appear in clauses. The one-per-subformula Tseytin extension variables appear in only a few clauses, while the original input variables will be involved in many clauses as the leaves of the tree of Tseytin extensions. So the Tseytin variables will all have low scores and won’t be used often as decision variables. The solver will therefore iterate through assignments of the input variables much like a brute-force search would, except that it will also take advantage of unit propagation, clause learning and all the other features of a modern SAT solver to get through the search space faster.

I say “minimally competent SAT solver” above because a really competent solver would take advantage of well-known CNF preprocessing techniques that can remove most of these gate variables before the actual solver search even starts. So CIRCUIT-SAT solvers are generally just SAT solvers. You can gain speed in some circumstances by having access to the original circuit in addition to the CNF result, but the gate variables don’t much factor into that.
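To make the clause-count argument concrete, here is a toy sketch (not from the answer) of the standard Tseytin encoding of an AND gate, showing that each gate contributes only a fixed handful of clauses containing its extension variable:

```python
# Tseytin encoding of an AND gate: literals are ints, a negative int
# means negation. The equivalence g <-> (a AND b) becomes three clauses:
#   (-g, a), (-g, b), (g, -a, -b)
def tseytin_and(g, a, b):
    return [(-g, a), (-g, b), (g, -a, -b)]

# Circuit: out = AND(AND(x1, x2), x3); inputs are 1..3, gate vars 4..5.
clauses = tseytin_and(4, 1, 2) + tseytin_and(5, 4, 3) + [(5,)]  # assert output

# Count how many clauses mention a given variable (positively or negatively).
def occurrences(var):
    return sum(1 for c in clauses for lit in c if abs(lit) == var)

assert len(clauses) == 7      # 3 clauses per gate + the output unit clause
assert occurrences(5) == 4    # root gate var: its own 3 clauses + the unit
```

Each extension variable shows up in only the few clauses of its own gate (plus one per reuse), which is why frequency-based branching heuristics give it a low score, as described above.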

encryption – ECDH for P-521 (Web Crypto Api) / secp521r1 (NodeJS Crypto) generate a slightly different shared secret

I have generated a public and private key pair with ECDH in NodeJS:

const crypto = require("crypto");

function _genPrivateKey(curveName = "secp384r1", encoding = "hex") {
    const private_0 = crypto.createECDH(curveName);
    private_0.generateKeys(); // keys must be generated before they can be read
    return private_0.getPrivateKey().toString(encoding);
}





(jwk) {
  key_ops: [ 'deriveKey' ],
  ext: true,
  kty: 'EC',
  crv: 'P-521'
}
and Alice's keys from a web page with the Web Crypto API:

const generateAlicesKeyPair = window.crypto.subtle.generateKey({
        name: "ECDH",
        namedCurve: "P-521"
    }, true, ["deriveKey", "deriveBits"]); // extractable flag and usages assumed
When I try to derive the shared secret, a strange thing happens: the keys have different bits at the end.


function _getSharedSecret(privateKey, publicKey, curveName = "secp521r1", encoding = "hex") {
    const private_0 = crypto.createECDH(curveName);
    private_0.setPrivateKey(privateKey, encoding);
    const _sharedSecret = private_0.computeSecret(publicKey, encoding);
    return _sharedSecret;
}
Web Crypto API

const sharedSecret = await window.crypto.subtle.deriveBits({
        name: "ECDH",
        namedCurve: "P-521",
        public: publicKey
    }, privateKey, 528); // number of bits to derive (assumed)
This happens only with the curve P-521/secp521r1, not with P-256/secp256r1.
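Not an answer from the thread, but a plausible thing to rule out: P-521 secrets are 521 bits, which doesn't fill a whole number of bytes, so different APIs can disagree on leading zero bytes or on how many bits a derive call returns, and the mismatch only appears on this curve. A small sketch (Python, names hypothetical) that left-pads both hex secrets to the fixed 66-byte field size before comparing:

```python
# Hypothetical helper: normalize an ECDH shared secret to the fixed
# field size before comparing outputs from two different crypto APIs.
# For P-521/secp521r1 the field is 521 bits, i.e. ceil(521/8) = 66 bytes,
# so a secret whose top byte is zero may be serialized as 65 bytes by
# one API and 66 bytes by another.
def normalize_secret(secret_hex, field_bits=521):
    field_bytes = (field_bits + 7) // 8          # 66 for P-521
    value = int(secret_hex, 16)
    return value.to_bytes(field_bytes, "big").hex()

# Two encodings of the same secret, one missing its leading zero byte:
a = "00" + "ab" * 65      # 66 bytes, explicit leading zero
b = "ab" * 65             # 65 bytes, leading zero stripped
assert normalize_secret(a) == normalize_secret(b)
```

If the normalized values still differ, the problem is in the key material itself rather than the encoding.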

encryption – Is it possible to attack a one-time pad encrypting only slightly more than its length?

This question might be uncannily similar to this one posted over 6 years ago, but the structure is completely different.

It's written nearly everywhere that OTPs are incapable of securely carrying more than their own length of data – but I have yet to find a reason beyond "reusing them makes cryptanalysis possible" in several different ways.

Suppose Alice and Bob wanted to maintain an OTP-based connection, but are unable to physically exchange entropy beyond an initial file. Luckily, Alice has her own entropy source. What Alice and Bob could do is use the initial OTP to send two bits of data at the cost of one bit of entropy. The order of entropy and the starting offset could be negotiated beforehand using the typical OTP, but let's assume for simplicity's sake that you're sending 2 payload bits in series, XORing both with a single entropy "key" bit, and that the offset Alice picks is also taken from her entropy source.

A meddling Eve, if she knows how this protocol works, could establish only one of two metadata facts:

  1. that any two linked data bits are the same (ambiguous between 00 and 11)
  2. that any two linked data bits are different (01 and 10)

However, Eve should be unable to determine which of the 2 ambiguous combinations is the correct combination… right? After all, Eve could pick up on patterns of the metadata, but not before Alice and Bob notice.
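A tiny sketch (my reading of the scheme described above, with hypothetical names) showing exactly what Eve can and cannot recover:

```python
# One key bit k encrypts a pair of payload bits (b1, b2) as (b1^k, b2^k).
def encrypt_pair(b1, b2, k):
    return (b1 ^ k, b2 ^ k)

# Eve sees only the ciphertext pair (c1, c2). XORing them cancels k:
#   c1 ^ c2 == (b1 ^ k) ^ (b2 ^ k) == b1 ^ b2
# so she learns whether the two payload bits are equal (the metadata
# leak discussed above) but not which of the two candidates they are.
for b1 in (0, 1):
    for b2 in (0, 1):
        for k in (0, 1):
            c1, c2 = encrypt_pair(b1, b2, k)
            assert c1 ^ c2 == b1 ^ b2          # leaked to Eve
            # Both values of k are equally consistent with (c1, c2),
            # so (b1, b2) and its complement are indistinguishable.
```

This confirms the two-possibility ambiguity, but note it also shows the scheme is no longer information-theoretically perfect: leaking b1^b2 is already more than a true OTP would reveal.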

The data transferred this way (which could be even more entropy – Eve doesn’t know which part of the data is which!) would be protected by the initial challenge, and by the entropy of the initial OTP. Knowing all this, we can finally rephrase the question:

Is it possible to perform a cryptanalysis on a hybrid OTP described above, and if it is, what would be an efficient attack vector?

Would it instead be preferable to perform an RSA/Diffie-Hellman key exchange over the initial OTP?

c# – Why is my CameraCrop script blurring the screen slightly in Unity?

I've been trying to enforce a 4:3 aspect ratio in my Unity game, in order to give it a nostalgic feeling. But whenever I add this script to my camera and compile my game, it blurs slightly. I am using a 4:3 aspect ratio in the Game view, have Fullscreen Window chosen in the windowing options, and in Supported Resolutions I have only 4:3 ticked. Also, I am using a Pixel Perfect Camera with Pixel Snapping enabled.

Here's what my game looks like without the CameraCrop script added:

(screenshot)

And here is what it looks like when the script is added:

(screenshot)

Here’s the script I add to my camera:

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

// Requires that the camera component is on the GameObject
[RequireComponent(typeof(Camera))]
public class CameraCrop : MonoBehaviour
{
    // Sets the aspect ratio to whatever you want
    public Vector2 targetAspect = new Vector2(4, 3);
    Camera _camera;

    void Start()
    {
        _camera = GetComponent<Camera>();
        UpdateCrop();
    }

    // Call this method if your window size or target aspect change.
    public void UpdateCrop()
    {
        // Determine ratios of screen/window & target, respectively.
        float screenRatio = Screen.width / (float)Screen.height;
        float targetRatio = targetAspect.x / targetAspect.y;

        if (Mathf.Approximately(screenRatio, targetRatio))
        {
            // Screen or window is the target aspect ratio: use the whole area.
            _camera.rect = new Rect(0, 0, 1, 1);
        }
        else if (screenRatio > targetRatio)
        {
            // Screen or window is wider than the target: pillarbox.
            float normalizedWidth = targetRatio / screenRatio;
            float barThickness = (1f - normalizedWidth) / 2f;
            _camera.rect = new Rect(barThickness, 0, normalizedWidth, 1);
        }
        else
        {
            // Screen or window is narrower than the target: letterbox.
            float normalizedHeight = screenRatio / targetRatio;
            float barThickness = (1f - normalizedHeight) / 2f;
            _camera.rect = new Rect(0, barThickness, 1, normalizedHeight);
        }
    }
}
I am using Unity 2019.4.17f LTS on Windows 10.

mariadb – Galera cluster slightly slower than single database?

I recently set up a MariaDB Galera cluster for our production environment. I used sysbench to benchmark the cluster against the old database, which runs on a single server.

On my PRD Galera Cluster I got the following results:

SQL statistics:
    queries performed:
        read:                            3914980
        write:                           0
        other:                           782996
        total:                           4697976
    transactions:                        391498 (1304.77 per sec.)
    queries:                             4697976 (15657.22 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          300.0492s
    total number of events:              391498

Latency (ms):
         min:                                    5.37
         avg:                                   12.26
         max:                                   66.20
         95th percentile:                       15.83
         sum:                              4798745.23

Threads fairness:
    events (avg/stddev):           24468.6250/414.77
    execution time (avg/stddev):   299.9216/0.01

Meanwhile, our old single-database production server got these results:

SQL statistics:
    queries performed:
        read:                            5306060
        write:                           0
        other:                           1061212
        total:                           6367272
    transactions:                        530606 (1768.51 per sec.)
    queries:                             6367272 (21222.18 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          300.0266s
    total number of events:              530606

Latency (ms):
         min:                                    3.87
         avg:                                    9.04
         max:                                   59.99
         95th percentile:                       12.08
         sum:                              4798278.00

Threads fairness:
    events (avg/stddev):           33162.8750/440.14
    execution time (avg/stddev):   299.8924/0.01

Now I'm wondering: why does the cluster operate a bit slower than the single database? They have the same specs: quad-core CPU, 32GB RAM and vm.swappiness=1. Here's my cluster configuration (the same across all 3 servers), with HAProxy load-balancing between the 3 servers:

max_connections = 3000

innodb_flush_method = O_DIRECT_NO_FSYNC


thread_handling = pool-of-threads
thread_stack = 192K
thread_cache_size = 4
thread_pool_size = 8
thread_pool_oversubscribe = 3

wsrep_provider_options="gcache.size=10G; gcache.page_size=10G"

I ran sysbench from a spare server – does the latency between the servers also affect the results? I would appreciate any input, thank you.
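Not an explanation of the cause, just a sanity check on the numbers quoted above: the gap works out to a roughly constant extra ~3 ms per transaction, which is the kind of overhead an extra network hop (client → HAProxy → node) could plausibly add to a read-only workload, since read-only transactions don't pay Galera certification costs.

```python
# Back-of-envelope comparison of the two sysbench runs quoted above.
cluster = {"tps": 1304.77, "avg_ms": 12.26, "p95_ms": 15.83}
single  = {"tps": 1768.51, "avg_ms": 9.04,  "p95_ms": 12.08}

# Extra latency per transaction on the cluster:
extra_ms = cluster["avg_ms"] - single["avg_ms"]

# Throughput ratio of the single node over the cluster:
ratio = single["tps"] / cluster["tps"]

print(f"extra latency per transaction: {extra_ms:.2f} ms")
print(f"single-node throughput is {ratio:.2f}x the cluster's")
```

A quick way to test the hypothesis would be to rerun the same sysbench workload pointed directly at one cluster node, bypassing HAProxy: if the numbers then match the single server, the proxy hop (and network latency from the spare server) is the difference, not Galera itself.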

bash – How to remove duplicate images that are slightly modified?

I have thousands of photos that I have taken on my phone and DSLR over the last couple of years, and I never gave photo management a thought until recently; it is a mess now. I used
fdupes -r . > picLog &
to retrieve information about duplicate images, and then used fdupes again to delete them.

However, there are still several hundred (if not thousands of) duplicate images left. Hence, I used ‘identify -verbose image.jpg’ to compare some images that fdupes and fslint-gui fail to differentiate. It looks like those have slightly different modification times. Is there any way I can compare those and delete only the modified copies?

File names and sizes are the same.
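One hedged approach, assuming ImageMagick is available (identify is already in use): ‘identify -format "%#\n" image.jpg’ prints a signature computed from the decoded pixel data, so files that differ only in metadata (EXIF, timestamps) get identical signatures. The grouping step could then look like this sketch (file names and signatures below are hypothetical):

```python
from collections import defaultdict

# Group (filename, signature) pairs by pixel signature; every group with
# more than one file is a set of pixel-identical duplicates. Signatures
# would come from ImageMagick, e.g.:
#   identify -format "%f %#\n" *.jpg
def group_by_signature(pairs):
    groups = defaultdict(list)
    for name, sig in pairs:
        groups[sig].append(name)
    return {sig: names for sig, names in groups.items() if len(names) > 1}

# Hypothetical example data:
pairs = [
    ("IMG_001.jpg", "aaa111"),
    ("IMG_001-copy.jpg", "aaa111"),   # same pixels, different metadata
    ("IMG_002.jpg", "bbb222"),
]
dupes = group_by_signature(pairs)
assert dupes == {"aaa111": ["IMG_001.jpg", "IMG_001-copy.jpg"]}
```

Within each group you could then keep the file with the earliest modification time and delete the rest; run it in a report-only mode first before deleting anything.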

Google Assistant Voice Search is Slightly Modifying My Search Queries

Has anyone else noticed that when voice searching with Google Assistant, it'll sometimes change your query slightly – but to a question you didn't ask?

This is not a simple case of mis-hearing spoken words; Google is trying to be smarter than me by changing my search query to idiot-proof it.

One example of many I’ve experienced:

  • Asked google: “How to disable ‘Application X’ from starting with MacOS Boot”
  • Search field populated with and results showing for: “‘Application X’ will not start.”

Not what I asked, and this happens frequently.

Is this a preference? Is there any way to turn this off?

Any thoughts appreciated.