Is a MacBook Pro 13” M1 supposed to be laggy?

So I’ve been reading about how amazing the M1 is. Every YouTuber and their mother says it’s incredible what the new Mac mini can do and how their 13” M1 MacBook is faster than their Mac Pro even when rendering videos, that it’s a “groundbreaking” technology and so on.

A couple of months ago I changed jobs and got a shiny new MacBook Pro 13” M1 with 8 GB of RAM. It’s sluggish. Like a pain to work with. Almost every action I take is executed with some delay, ranging from “I can feel that something is not working properly” to “I pressed Cmd+F and I have to wait 2 to 3 seconds before the search window appears”.

I’ve read that Google Chrome is not optimised for the M1 and that this causes known issues, so I uninstalled it and switched to Safari. It’s a bit better, but still far from the “astonishing revolution” I keep reading about. I use it with an external monitor attached via HDMI, an external Magic Mouse and a Magic Keyboard. I use it for programming, and my IDE typically takes 3 or 4 GB, but isn’t the new RAM supposed to be magic or something?

Is there any action I can perform to check that everything is in order?
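
In case it helps narrow things down, this is roughly what I’ve been running in a terminal while the machine lags, just to see whether the 8 GB of RAM are exhausted and macOS is swapping heavily (a rough sketch; it assumes the third-party psutil package is installed, and Activity Monitor’s Memory tab shows the same information):

# Rough check of memory pressure; assumes the third-party psutil
# package is installed (pip install psutil).
import psutil

vm = psutil.virtual_memory()
swap = psutil.swap_memory()
print(f"RAM used:  {vm.percent}% of {vm.total / 2**30:.1f} GiB")
print(f"Swap used: {swap.used / 2**30:.1f} GiB ({swap.percent}%)")

# Top five processes by resident memory, to see what is eating the 8 GB.
procs = sorted(
    psutil.process_iter(["name", "memory_info"]),
    key=lambda p: p.info["memory_info"].rss if p.info["memory_info"] else 0,
    reverse=True,
)
for p in procs[:5]:
    rss = p.info["memory_info"].rss if p.info["memory_info"] else 0
    print(f"{p.info['name'] or '?':<30} {rss / 2**20:.0f} MiB")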

code organization – Where are all the small functions supposed to go in XP?

I like the concept of Extreme Programming (as I understand it), that there should be many smaller functions with descriptive names, instead of fewer, longer ones, even if those functions are only called from one piece of code.

I do this all the time, but it does tend to litter the class definition quite a lot. I already mark those “only internal” functions by prepending a “_” to them, but they still take up a lot of editor real estate.

Is there a canonical way to deal with this? For example, should I write them all in alphabetical order below the “public” functions, or directly under the function they are called from?

In case it matters I mostly write Python code.
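
For illustration, this is roughly the shape my classes end up with (a minimal Python sketch with made-up names), with each single-use helper placed directly under the public method that calls it:

class ReportBuilder:
    """Made-up example: one public method and several single-use helpers."""

    def build(self, records):
        cleaned = self._drop_empty_records(records)
        grouped = self._group_by_customer(cleaned)
        return self._format_as_table(grouped)

    # Only ever called from build(), so they sit right under it.
    def _drop_empty_records(self, records):
        return [record for record in records if record]

    def _group_by_customer(self, records):
        groups = {}
        for record in records:
            groups.setdefault(record["customer"], []).append(record)
        return groups

    def _format_as_table(self, groups):
        return "\n".join(f"{name}: {len(rows)} rows" for name, rows in groups.items())

The alternative I keep considering is pushing all of the underscore-prefixed helpers to the bottom of the class in alphabetical order.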

dungeon world – How am I supposed to handle disruption in a Perilous Journey?

Undertake a Perilous Journey is perhaps my least favourite basic move.

The intention seems to be to abstract the specifics of a perilous journey, which is useful. My issue comes with the fact that a perilous journey seems to always require lowering the level of abstraction to deal with peril, and the move sort of breaks down leaving me unsure of what to do.

My agenda and the very text of the move itself encourage me to add some adventure into the mix: goblins attack, you find a cool cave, etc. So I end up with stoppages to deal with all the cool stuff. This is fine, but once the encounter is handled the party usually wants to resume their journey. I am never sure whether I should just complete the journey based on the stale roll from earlier or whether they trigger the move again. I’ve played it both ways, but neither really seems to “work” for me.

This is particularly an issue because my party loves to start journeys with little or, more often, no food. So running out of food and foraging every day is not uncommon for our journeys. This creates lots of stoppages and exacerbates the issues with the move. The stoppages are fun; collecting food and exploring places isn’t a chore, especially in the perilous wilds. It’s the weirdness of the journey move that gets in our way: either you are rolling for each leg of the journey, or you end up with a really outdated roll hanging over the party for a long time.

How should this move be run? How is it supposed to interact with stoppages?

reverse proxy – Why does the “/” nginx location rule fail to catch some URLs? Isn’t it supposed to match everything?

I have an nginx server that acts as a reverse proxy, redirecting requests to an Angular app or to a Node.js backend app depending on the request URL. There is also a rule, location ~ /s/(cas)/(.*), that serves static content (although I’m seeing now that if “/” caught this route too, that rule would not be necessary, as the static content is also kept at backend:4000).

My concern is with the most general rule, “/”, which is supposed to catch all requests that did not fall into any other location. It is not being applied to some URLs, causing nginx to serve its 50x.html error page. In other words, the one rule in charge of redirecting the traffic that should land on the Angular app does not seem to catch all of the traffic that didn’t fit a previous rule.

If I’m correct, this should fall under the “/” rule:

https://SUBDOMAIN.DOMAIN.es/user/trip/13925/instant?sharedToken=(REDACTED)

And these should also be redirected correctly by the “/” rule, but instead they show the nginx error page after a long timeout:

https://SUBDOMAIN.DOMAIN.es/user/trip/foo/instant?sharedToken=(REDACTED) # changed the id to "foo"
https://SUBDOMAIN.DOMAIN.es/user/trip/instant?sharedToken=(REDACTED) # removed the id segment of the URL
https://SUBDOMAIN.DOMAIN.es/user/instant?sharedToken=(REDACTED) # also removed the "trip" segment of the URL

Any other variation of the URL works fine and is redirected to https://backend:4000.

So, why aren’t these URLs caught by the location “/”?

This is the nginx config file. Domain and subdomain have been omitted on purpose:

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    expires $expires;
    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains" always;
    server_name (SUBDOMAIN).(DOMAIN_NAME).es;
    ssl_certificate /etc/nginx/ssl/CERT.crt;
    ssl_certificate_key /etc/nginx/ssl/CERT.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_session_cache shared:SSL:5m;
    ssl_session_timeout 1h;
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;

    location ~ /api(?<url>/.*)  {
        resolver 127.0.0.11;
        set $target http://backend:5000/api${url}$is_args$args;
        proxy_set_header X-Forwarded-Host $host;     # Relay whatever hostname was received
        proxy_set_header X-Forwarded-Proto $scheme;  # Relay either http or https
        proxy_set_header X-Forwarded-Server $host;   # Relay whatever hostname was received
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Prefix /api/;
        proxy_set_header Host "SUBDOMAIN.DOMAIN.es";

        add_header Access-Control-Allow-Origin *;
        add_header Access-Control-Max-Age 3600;
        add_header Access-Control-Expose-Headers Content-Length;
        add_header Access-Control-Allow-Headers Range;

    ## Websockets support 2/2
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    ## END Websockets support 2/2

        proxy_pass $target;
        client_max_body_size 10M;
    }

    location ^~ /_assets/ {
        alias /usr/share/nginx/html/assets/;
    }

    location ^~ /.well-known/acme-challenge/ {
        alias /usr/share/nginx/html/.well-known/acme-challenge/;
    }

    location ~ /s/(cas)/(.*) {
        add_header Pragma "no-cache";
        add_header Cache-Control "no-store, no-cache, must-revalidate, post-check=0, pre-check=0";
        proxy_pass http://backend:4000;
    }

    location / {
        #root /usr/share/nginx/html;
        proxy_pass http://backend:4000;
        expires -1;
        proxy_set_header X-Forwarded-Host "SUBDOMAIN.DOMAIN.es";
        proxy_set_header X-Forwarded-Server "SUBDOMAIN.DOMAIN.es";
        proxy_set_header Host "SUBDOMAIN.DOMAIN.es";

        add_header Pragma "no-cache";
        add_header Cache-Control "no-store, no-cache, must-revalidate, post-check=0, pre-check=0";

        add_header Access-Control-Allow-Origin *;
        add_header Access-Control-Max-Age 3600;
        add_header Access-Control-Expose-Headers Content-Length;
        add_header Access-Control-Allow-Headers Range;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

}
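
For what it’s worth, this is roughly how I’ve been reproducing the behaviour from outside nginx, to see which variants return immediately and which ones hang until the error page appears (a plain-Python sketch; the URLs are placeholders for the redacted ones above):

# Probe the URL variants listed above (placeholders for the redacted values).
import time
import urllib.error
import urllib.request

URLS = [
    "https://SUBDOMAIN.DOMAIN.es/user/trip/13925/instant?sharedToken=REDACTED",
    "https://SUBDOMAIN.DOMAIN.es/user/trip/foo/instant?sharedToken=REDACTED",
    "https://SUBDOMAIN.DOMAIN.es/user/trip/instant?sharedToken=REDACTED",
    "https://SUBDOMAIN.DOMAIN.es/user/instant?sharedToken=REDACTED",
]

for url in URLS:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=60) as resp:
            status = resp.status
    except urllib.error.HTTPError as exc:
        status = exc.code              # the nginx 50x page lands here
    except Exception as exc:           # timeouts, DNS or TLS errors, etc.
        status = type(exc).__name__
    print(f"{status}  {time.monotonic() - start:6.1f}s  {url}")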

My program is supposed to take input n times, but it only takes it n-1 times. Please help.

#include <iostream>
#include <string>
#include <vector>

using namespace std;

int main()
{
    int n;
    cin >> n;
    vector<string> AllWords(n);

    // read the n words, one per line
    for (int i = 0; i < n; i++) {
        getline(cin, AllWords[i]);
    }

    // for every word longer than 10 characters, print its first letter,
    // the digits of its length, and its last letter
    for (int x = 0; x < n; x++) {
        if (AllWords[x].length() > 10) {
            cout << AllWords[x].at(0);
            cout << AllWords[x].length() / 10;
            cout << AllWords[x].length() % 10;
            cout << AllWords[x].at(AllWords[x].length() - 1) << endl;
        }
    }
}

c++ – How am I “supposed” to hold a pointer to an object I don’t own?

For instance, let’s do this in the context of a dynamic programming solver where each partial solution has a link back to the problem it solves. I might do that like so:

class Problem { /* ... */ };

class PartialSolution
{
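  // Non-owning back-reference; the Problem is owned elsewhere and outlives this object.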
  Problem * problem;

  // ...
};

This is fine and produces working code, including a default copy constructor and copy-assignment operator that behave correctly. But when I turn on various kinds of warnings, the tools complain that I’m holding a raw pointer: they assume that I must own the thing I’m pointing at and that I’m not cleaning it up.

So I take it I’m supposed to use some kind of smart pointer? But what are my options here? std::unique_ptr is definitely incorrect, since I don’t own the object. std::shared_ptr would at least work correctly, but it forces unnecessary reference counting on me. There’s also std::weak_ptr, but that’s even worse in terms of unnecessary cost, because I have to convert it to a temporary std::shared_ptr every time I want to use it.

There’s also std::reference_wrapper in <functional>. This seems like it may actually do the right thing here, although I don’t know enough about it to be confident, and given that it lives in <functional> rather than <memory>, I feel like this may not be its intended use.

Or should I just hold a raw pointer and do something to disable the warnings? (And, if so, what?)

Where is the admin bar supposed to appear?

Where in the page’s HTML is the admin bar supposed to appear? On my blog it is added to the footer section, with the result being that there is a gap above it when viewing the blog on mobile phones. (The admin bar has position:absolute; at small resolutions, and so does the footer it’s contained within.)

difficulty – Isn’t Bitcoin’s hash target supposed to be a power of 2?

From Bitcoin’s whitepaper I’ve gathered that the hash of a block must start with a certain number of zero bits, and that this number of zero bits is adjusted every 2 weeks. Consequently, the hash target would be a power of two.

Requiring one extra zero bit divides the hash target by 2.

That being said, in another question on this exchange (How is difficulty calculated?) it appears that the difficulty can be multiplied by fractions (e.g. 40%).

Does that mean that the hash targets aren’t necessarily powers of 2? Or is the target rounded to the nearest power of 2?
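
For concreteness, this is how I currently picture the target being stored (a small Python sketch; 0x1d00ffff is the compact “bits” value widely quoted for the genesis block). Decoded this way, the target doesn’t come out as a power of two, which is part of my confusion:

# Sketch of the compact "bits" target encoding as I understand it (sign-bit
# handling ignored): target = coefficient * 256^(exponent - 3).
def bits_to_target(bits: int) -> int:
    exponent = bits >> 24            # high byte: size of the target in bytes
    coefficient = bits & 0x00FFFFFF  # low three bytes
    return coefficient * 256 ** (exponent - 3)

genesis_bits = 0x1D00FFFF  # widely quoted value for the genesis block
target = bits_to_target(genesis_bits)
print(hex(target))
# A power of two has exactly one bit set; this target has sixteen.
print("power of two?", target & (target - 1) == 0)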

differential equations – How are Derivatives supposed to be used with DSolve?

I’m trying to solve a simple general PDE, $-y_{tt}+\beta y=0$, with Neumann BCs given for $y_t(0)$ and $y_t(1)$.

The documentation wasn’t very helpful

https://reference.wolfram.com/language/ref/DSolve.html

Boundary conditions for PDEs can be given …

But it doesn’t seem to specify how to input those BCs.

best practices – Are SQL unit tests supposed to be so long?

I am writing stored procedures with some non-trivial business logic. I am trying to unit test them, but the actual tests end up being quite long (the shorter ones start at 40-50 LoC and commonly use 4 different tables), which doesn’t seem very “unit”. (Admittedly, I format my code in a way that takes up a lot of space.)

In the context of “normal” programming languages I’ve heard the advice to refactor the complex procedure into smaller chunks. But I don’t want to do that here because:

  1. I don’t want to pollute “global namespace” by small routines called from one place only.
  2. Passing around tables from and to stored procedures is cumbersome.
  3. Custom functions can have negative effects on performance.

Am I wrong about this reasoning?
I am new to unit testing, so perhaps I am just writing my tests wrong?
Is SQL a long-winded language, and are its unit tests therefore longer as well?


(I am using SQL Server with tSQLt framework, but I believe the question is system-agnostic.)