networking – Broadband network protocol overhead

I have an assignment for my university course that requires me to review the 2020 Measuring Broadband Australia report, which covers the current state of connection quality within the Australian NBN broadband network.

In particular, one of the sections covers why most NBN users can usually attain only between 90% and 95% of their advertised download speed, and the authors say:

This reiterates the point raised in previous reports that NBN tier speeds are not provisioned so that maximum plan speeds are attainable after accounting for protocol overhead.

They go on to elaborate further in a footnote:

Protocol overhead include packet headers, which are added to network communications to ensure that they arrive at the right network address. Packet headers take up space, which means that the connection has less room for whatever data is being sent.

This gap of 5-10% of the advertised speed appears to be independent of the speed tier: a person on a 50/20 plan loses 2.5-5 Mbps, while a person on a 100/40 plan loses 5-10 Mbps. This seems like an extremely high overhead.
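As a sanity check, here is the back-of-the-envelope calculation I tried for plain TCP/IPv4 over Ethernet. The header sizes are generic textbook values, not NBN-specific figures, and the NBN access technologies add their own encapsulation on top, so treat the result as illustrative only:

#include <cstdio>

int main() {
    // Generic per-packet costs for a full-size TCP/IPv4 packet over Ethernet.
    // PPPoE, VLAN tags or other encapsulation would add a few more bytes.
    const double mtu          = 1500; // Ethernet MTU in bytes
    const double ipv4_header  = 20;   // IPv4 header without options
    const double tcp_header   = 32;   // TCP header incl. common timestamp option
    const double eth_overhead = 38;   // MAC header + FCS + preamble + inter-frame gap

    const double payload = mtu - ipv4_header - tcp_header; // user data per packet
    const double on_wire = mtu + eth_overhead;             // bytes the link is busy for

    std::printf("goodput fraction: %.1f%%\n", 100.0 * payload / on_wire);
    // Prints roughly 94%, i.e. about 6% lost to headers alone, before TCP ACK
    // traffic in the reverse direction, retransmissions, or extra encapsulation.
    return 0;
}

Even this simple accounting lands inside the 5-10% range the report describes, so the claim seems plausible, but I would still like a proper source.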

Unfortunately, I couldn’t find any reliable sources on protocol overhead for broadband connections and was wondering if anyone here knows any sources or has any information.

For information, the Australian NBN is a combination of fibre-to-the-curb (FTTC), fibre-to-the-node (FTTN), fibre-to-the-premises (FTTP) and hybrid fibre coaxial (HFC).

performance – Overhead when using a partial application in C++

I am trying to find a way to pass a partial application as an argument to another function with no overhead. I think I have found a way to do it (which might be dirty). The templated structure “partial_add” is a kind of partial application of add2, and I pass the type “partial_add” as a template argument to another function.

void add2(int *a, int *b){
    *a += *b;
}

template <int **a>
struct partial_add{
    static void method(int *b){
        add2(*a,b);
    }
};

// Apply a function to every element of b
template <typename FUNCTION>
void apply_array(int *b){
    for (int i = 0; i < 100; ++i){
        FUNCTION::method(&b[i]);
    }
    
} 

int main(){
    // a required to be static so that &a can be passed as a template argument
    static int *a = new int();
    int *b = new int[100]();  // array of 100 ints, matching the loop in apply_array


    apply_array<partial_add<&a>>(b);

}

What I like about this solution is that if you look at it on godbolt, you will see that at optimization level 1 (in real life I use -O3), the call “apply_array<partial_add<&a>>(b)” becomes:

.L4:
        add     ecx, DWORD PTR [rdx]
        add     rdx, 4
        cmp     rdx, rax
        jne     .L4

So the compiler understood what I was trying to do.
What I don’t like about this solution is that I have to declare “a” static; it bothers me a bit. In theory it’s not a problem for my program, but in practice it’s kind of bad.

I tried to use partial application with a lambda expression, but in the end there was indeed an overhead (a call, for example).
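For reference, the lambda-based version I have in mind looks roughly like the following (reusing add2 from above; the extra length parameter and the changed apply_array signature are just how I sketched it here):

// The callable is taken by value as a deduced template parameter, so its type
// (and its operator()) is known statically; whether the optimizer actually
// inlines the call is the part I would check on godbolt.
template <typename FUNCTION>
void apply_array(int *b, int n, FUNCTION f){
    for (int i = 0; i < n; ++i){
        f(&b[i]);
    }
}

int main(){
    int *a = new int();      // no longer needs to be static
    int *b = new int[100]();

    apply_array(b, 100, [a](int *x){ add2(a, x); });
}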

Maybe you have an idea of what I could do about that.

Thanks a lot in advance.

Cheers

magento2.3 – How can I reduce the Largest Contentful Paint overhead of Magento 2 cookie restriction mode?

Our Google PageSpeed Insights score is suffering due to Largest Contentful Paint when using the Magento 2.3.0 default cookie restriction mode popup. With cookie restriction mode enabled, the LCP shows at over 7 seconds for mobile, while with cookie restriction disabled it is close to 3 seconds (some further minor changes will get LCP below 3s, to the ‘good’ PageSpeed level).

Can anyone suggest how I can reduce the system overhead of the default cookie popup please? For example, can it be preloaded or loaded earlier in the page? Are there any 3rd party extensions which are known to perform much better than the default cookie popup? Would a Magento version upgrade (=> 2.4) help?

Hoping for good news. Thanks…

Why does querying a PostgreSQL database have an overhead of a few dozen milliseconds?

Everything is in the same place, on the same local computer.

Whenever my Go application or pgAdmin 4 queries the PostgreSQL database, it takes at least a few dozen milliseconds, however short the actual execution time is. Where does this overhead come from? What causes the delay? It’s the same without EXPLAIN ANALYSE.

[Screenshots: pgAdmin 4 query timing showing the overhead]
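My application is written in Go, but to check whether the delay is specific to the Go driver or to pgAdmin, I was thinking of timing a raw client along these lines (a rough libpq sketch; the connection string is a placeholder for my local setup):

#include <libpq-fe.h>
#include <chrono>
#include <cstdio>

int main() {
    using clock = std::chrono::steady_clock;

    // Time connection setup separately from the per-query round trip.
    auto t0 = clock::now();
    PGconn* conn = PQconnectdb("host=localhost dbname=postgres");
    if (PQstatus(conn) != CONNECTION_OK) { std::fprintf(stderr, "connect failed\n"); return 1; }
    auto t1 = clock::now();
    std::printf("connect: %lld ms\n",
        (long long)std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count());

    // Time a trivial query several times over the same open connection.
    for (int i = 0; i < 5; ++i) {
        auto q0 = clock::now();
        PGresult* res = PQexec(conn, "SELECT 1");
        auto q1 = clock::now();
        if (PQresultStatus(res) != PGRES_TUPLES_OK) std::fprintf(stderr, "query failed\n");
        PQclear(res);
        std::printf("query %d: %lld us\n", i,
            (long long)std::chrono::duration_cast<std::chrono::microseconds>(q1 - q0).count());
    }

    PQfinish(conn);
    return 0;
}

If only the first round trip were slow, I would put it down to connection setup or caching; if every round trip costs tens of milliseconds, something between the client and the server seems more likely.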

c# – Measuring async/await overhead

A while ago I read an article stating that the overhead of an async/await call was around 50 ms. More recently I read an article saying it was around 5 ms. I was having a discussion about whether we should standardize on async operations for all DB access and decided to take a crack at measuring it myself, so I added the following methods to a controller:

private int profileIterations = 1000;
[HttpGet]
public long NonAsyncLoop()
{
    var timer = new System.Diagnostics.Stopwatch();
    timer.Start();
    for (int i = 0; i < profileIterations; i++)
    {
        Thread.Sleep(5);
    }
    timer.Stop();
    return timer.ElapsedMilliseconds;
}

[HttpGet]
public async Task<long> AsyncLoop()
{
    var timer = new System.Diagnostics.Stopwatch();
    timer.Start();
    for (int i = 0; i < profileIterations; i++)
    {
        await Task.Delay(5);
    }
    timer.Stop();
    return timer.ElapsedMilliseconds;
}

This test returns surprisingly consistent results, indicating that the overhead of calling await Task.Delay() vs Thread.Sleep() is about a third of a millisecond. Does anyone have another easy test that could indicate the overhead? Below 10 ms of overhead, it becomes a no-brainer to standardize on async operations for all DB access.

wi fi – Wi-Fi Multimedia (WMM) causing a 100ms latency overhead on Android device

So the issue is that I have a Samsung M01 (Android 10) which has a roughly 100 ms latency overhead.

Basically, pinging the router gives me an average latency of 110 ms.

After tweaking many settings, I finally isolated the problem to the WMM setting.
The lag only happens when WMM is enabled, so disabling it fixes the problem.
The problem with disabling WMM is that my Wi-Fi then only runs at an extremely slow 14 Mbps (see Disabling WMM causes Wi-Fi speeds to drop to 14 Mbps: https://superuser.com/questions/1625684/disabling-wmm-causes-wi-fi-speeds-to-drop-to-14mbps).

Another thing I noticed is that this latency overhead only happens when I am connected to a mobile tower (tested by enabling airplane mode).

The latency was monitored while playing a game, downloading some files and making a Wi-Fi call, and it doesn’t go below 100 ms. But there are some random times when it starts behaving properly and gives me a latency of 2 to 5 ms.

Note: the ping monitoring was done by pinging within the local network.
This problem only occurs on my Samsung Android devices (it also occurs on another Samsung device, a Samsung On7 Pro on Android 5, but not on the Lenovo device I own). It doesn’t occur on any PCs or laptops I have access to.

So does anyone have any suggestion on how to fix this issue?
Thanks in advance.

backup – Sync directories without too much overhead

I want to have a cloud-backup of my Documents directory; however, I’m not too fond of the idea of uploading the Documents in an unencrypted fashion. That’s why I looked into the Cryptomator app. I do really like the idea of having a cloud-backup while still being sure that my documents are not analyzed by any storage provider.

The general approach would be to move the documents folder to the mountable Cryptomator drive and to save everything directly to this mounted drive. As I cannot be totally sure that Cryptomator will never end their service, and I also cannot be sure that there will never be any decryption issues, I don’t like the idea of only having encrypted versions of my documents.

Hence, my optimal solution would be to keep my local Documents directory and sync it to the mounted Cryptomator drive, which will then upload the encrypted documents to my cloud storage. If a problem arises, I will still have access to the local copy; if not, great, I’ll now be able to access my documents wherever I am, without fearing that any third-party can read my documents.

How should I go about syncing the directory? Can I do this with the tools built into macOS, or do I have to install additional software? Do you think my plan is well thought out?
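For example, I was wondering whether a one-way sync with the rsync that ships with macOS would be enough. The vault mount point below is just a placeholder for wherever Cryptomator mounts the drive:

# Mirror the local Documents folder into the mounted Cryptomator vault.
# --delete makes the copy an exact mirror, so files removed locally are also
# removed from the encrypted copy; drop it for a purely additive backup.
rsync -a --delete ~/Documents/ "/Volumes/Cryptomator Vault/Documents/"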

database design – The overhead of OCC validation phase

The validation phase of optimistic concurrency control has two directions: one is backward validation, which checks for conflicts with previously validated transactions; the other is forward validation, which checks for conflicts with transactions that have not yet committed.

Validated transactions install their modifications into the “global” database, which means the main work backward validation needs to do is check for conflicts against that “global” state. Forward validation, however, needs to check for conflicts with each running transaction. That introduces expensive communication between threads if the database supports multithreading, and also extensive memory reads when the transaction concurrency level is high.
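To make sure I understand the two directions correctly, this is roughly how I picture them: a toy sketch with read and write sets modelled as sets of keys, following the textbook description of OCC validation rather than any particular engine:

#include <algorithm>
#include <set>
#include <string>
#include <vector>

struct Txn {
    int start_tn = 0;                 // start timestamp; used (outside this sketch)
                                      // to decide which committed txns overlap
    std::set<std::string> read_set;
    std::set<std::string> write_set;
};

static bool intersects(const std::set<std::string>& a, const std::set<std::string>& b) {
    return std::any_of(a.begin(), a.end(),
                       [&](const std::string& k) { return b.count(k) != 0; });
}

// Backward validation: compare the validating txn's READ set against the
// WRITE sets of transactions that committed while it was running. These are
// immutable, already-installed modifications ("the past").
bool backward_validate(const Txn& v, const std::vector<Txn>& committed_overlapping) {
    for (const Txn& c : committed_overlapping)
        if (intersects(v.read_set, c.write_set)) return false;  // abort v
    return true;
}

// Forward validation: compare the validating txn's WRITE set against the READ
// sets of transactions that are still active ("the future"); this is what
// requires looking at the state of every running transaction.
bool forward_validate(const Txn& v, const std::vector<Txn>& active) {
    for (const Txn& a : active)
        if (intersects(v.write_set, a.read_set)) return false;  // abort v or the readers
    return true;
}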

As far as I know, forward validation is more widely adopted than backward validation. Why? Which cases are suited to forward validation and backward validation respectively?