## charts – How can I avoid performance issues in user-customizable dashboards by limiting the amount of information displayed?

I work on a product that allows users to create custom dashboards. They can create 1-25 custom charts, and the available chart types are indicator, column, bar, area, pie, and line.

The problem we are currently facing is that some users create dashboards with dimensions that not only pull in enormous amounts of data, causing performance issues, but are also very hard to read and analyze.

The two options we are considering are:

• Reduce the data that we render in the charts.
• Limit users so they can’t create nonsensical reports.
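The first option, reducing the rendered data, can be sketched as server-side bucket aggregation before the payload reaches the chart. This is only an illustration; the `downsample` helper, the bucket strategy, and the series shape below are assumptions, not part of our product:

```python
def downsample(points, max_points=500):
    """Reduce a series to at most max_points by averaging fixed-size buckets.

    points: list of (x, y) pairs, assumed sorted by x.
    """
    if len(points) <= max_points:
        return points
    bucket_size = -(-len(points) // max_points)  # ceiling division
    result = []
    for i in range(0, len(points), bucket_size):
        bucket = points[i:i + bucket_size]
        avg_x = sum(p[0] for p in bucket) / len(bucket)
        avg_y = sum(p[1] for p in bucket) / len(bucket)
        result.append((avg_x, avg_y))
    return result

# A 10,000-point series is reduced to 500 points before rendering.
series = [(float(i), float(i % 7)) for i in range(10_000)]
reduced = downsample(series, max_points=500)
print(len(reduced))  # 500
```

Averaging is just one choice; min/max-preserving schemes such as LTTB keep spikes visible, which matters for line and area charts.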

## Limiting the number of images resized during upload

I’m troubleshooting issues with image uploading and noticed that each image uploaded generates ten resized versions of varying proportions. I don’t understand why. Why are there so many resized versions, and how can I limit the number of files created?

BTW I have no plugins installed and this is a nearly stock install on Ubuntu with WP 5.8. Thanks.

```
total 24592
-rw-rw-r-- 1 www-data www-data 20899309 Aug 10 16:50 20210807_114924.jpg
-rw-rw-r-- 1 www-data www-data  1558883 Aug 10 16:51 20210807_114924-scaled.jpg
-rw-rw-r-- 1 www-data www-data    82553 Aug 10 16:51 20210807_114924-225x300.jpg
-rw-rw-r-- 1 www-data www-data    66595 Aug 10 16:51 20210807_114924-150x150.jpg
-rw-rw-r-- 1 www-data www-data   345543 Aug 10 16:51 20210807_114924-768x1024.jpg
-rw-rw-r-- 1 www-data www-data   679444 Aug 10 16:52 20210807_114924-1152x1536.jpg
-rw-rw-r-- 1 www-data www-data  1090133 Aug 10 16:52 20210807_114924-1536x2048.jpg
-rw-rw-r-- 1 www-data www-data   100992 Aug 10 16:52 20210807_114924-300x400.jpg
-rw-rw-r-- 1 www-data www-data    90766 Aug 10 16:52 20210807_114924-350x260.jpg
-rw-rw-r-- 1 www-data www-data   182116 Aug 10 16:52 20210807_114924-680x500.jpg
-rw-rw-r-- 1 www-data www-data    60817 Aug 10 16:52 20210807_114924-86x60.jpg
```

I’m using the default dimensions defined under Settings → Media.

## Have you ever had serious problems with Paypal? (freezing, limiting) | Proxies-free

I’ve seen some alarming stories online lately from people claiming Paypal is withholding their account balances for various reasons. I’m not talking about the “pending” state that Paypal enforces. Now, in some of the cases I read about, the users were violating the Terms of Service by misrepresenting themselves as adults when they were minors, providing a false identity, etc.
In one conversation on reddit a person claimed that Paypal will “use any excuse they find to keep your money”, but I’m not sure how biased that view is. I guess that makes sense in a way, since it’s a profit-geared service not unlike a bank, and some banks do this all the time.

Other things can trigger a freeze or a cap on how large an amount you can process per transaction, such as making sudden purchases or transfers involving large sums of money. In some cases, people who launch their enterprises may be faced with this problem.

Series of chargebacks, bad reviews by other transacting parties, strange IP logins, suspicion of earnings from providing illicit content, products or services, or a bad credit score can be other reasons that attract Paypal’s attention to your account.

There are instances of people receiving a six-month lockdown on balances worth thousands of dollars. While Paypal rarely does this for longer than six months, even that timeframe can be detrimental to people who pay their bills through Paypal, or who have their entire earnings and savings handled by Paypal.

Have you ever had these kinds of problems with Paypal?
I’ve never had problems with them, personally.

## pr.probability – Limiting behavior of $$k^{th}$$ order statistics of $$n$$ non-i.i.d. chi-square random variables

This is related to one of my previous questions here.

Let $$(Z_1, Z_2, \ldots, Z_n)\sim N(0, \Omega)$$, where $$\Omega = (1-\mu) I_{n\times n} + \mu \boldsymbol{1}_n\boldsymbol{1}_n^\top$$. Here $$\boldsymbol{1}_n$$ denotes the vector of 1’s of length $$n$$. Let us define $$X_i = Z_i^2$$; I am trying to investigate the asymptotics of the $$k^{th}$$ order statistic $$X_{(k:n)}$$, where $$k/n \to 1$$ as $$n \to \infty$$.

As a special case, let $$k=n$$. When $$\mu =0$$, it is known that $$T_n:=X_{(n:n)}/\log n \overset{p}{\to} 2$$.

What can we say about $$T_n$$ when $$0\leq \mu<1$$? I have some conjectures about $$T_n$$. If I am not wrong, one can use Slepian’s lemma to conclude that $$P(X_{(n:n)}>t)\leq P(Y_{(n:n)}>t)$$ for $$t\in \mathbb{R}$$, where $$\{Y_i\}_{i=1}^n$$ are i.i.d. $$\chi^2_1$$, though I am not completely sure of this fact. Here is the idea I am considering. Let $$(W_1, \cdots, W_n)\sim N(0, I_{n\times n})$$. By Slepian’s lemma we know $$P(Z_{(n:n)}> t)\leq P(W_{(n:n)}>t)$$. Now I would like to square $$Z_{(n:n)}$$ and $$W_{(n:n)}$$, but squaring is not monotone, and I am stuck. I believe, however, that in the asymptotic regime both are positive, and hence my claim should work.
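A possible way around the non-monotonicity, if I am not mistaken, is symmetry plus a union bound: for $$t\geq 0$$,

$$P(X_{(n:n)}>t)=P\Big(\max_i |Z_i|>\sqrt{t}\Big)\leq P\Big(\max_i Z_i>\sqrt{t}\Big)+P\Big(\max_i (-Z_i)>\sqrt{t}\Big),$$

and since $$(-Z_1,\ldots,-Z_n)$$ has the same law as $$(Z_1,\ldots,Z_n)$$, Slepian’s lemma bounds each term by $$P(W_{(n:n)}>\sqrt{t})\leq P(Y_{(n:n)}>t)$$, so that $$P(X_{(n:n)}>t)\leq 2P(Y_{(n:n)}>t)$$. The factor 2 should be harmless on the $$\log n$$ scale, but I would appreciate a check of this step.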

If this is true, then I believe that $$P(T_n>2)\to 0$$, i.e., heuristically speaking, $$T_n \leq 2$$ in a limiting sense.

Any help will be appreciated. Also, is any result known for general $$\Omega$$? Thanks.

## dnd 5e – How can I limit crafting and material-searching without limiting player agency?

1. Apply time pressure

One way to handle this is to give them quests with explicit timelines. If they go resource harvesting instead, they fail the quest.

This is similar to what you described with the demon attack, but somewhat different. As I understand your description, you used a living world and the players suffered consequences for not being around. That is fine and good, but it doesn’t directly apply time pressure: it wasn’t a situation where the players knew explicitly that choosing to go harvest resources would have direct and clear consequences.

If you give them a specific quest with a clear timeline, then they have to make the choice of which to do.

2. Make most resource gathering a downtime activity

I’m reading between the lines a lot, but it sounds like your players are spending a lot of in-game time harvesting these resources.

I would tell the players that now that their characters are familiar with the process of harvesting those specific resources that they can just do it as a downtime activity. They can, whenever they choose to invest the time, spend X days to acquire Y amount of the crafting resources.

Of course, the ratio of X to Y should be much worse than adventuring time would be. While the occasional prospector gets rich by literally striking gold, the laborers doing the mining tend not to be well paid compared to the fabulous wealth an adventurer can pursue.

In fact, if the players already have reasonable wealth, you can point out that it might be a better use of their time to hire people to handle the tedious process of gathering the resources now that they have staked a claim in an area and they can just sit back and receive the resources on a regular basis without it impacting their adventuring.

3. Open the conversation by telling the players that this part is getting tedious for you and impacting your fun.

I think in this particular case you can avoid a direct discussion if you want to. You can nudge things by applying time pressure, making most resource gathering a downtime activity, and having a lot of common crafting components available on the market.

But if that doesn’t work, it is fine to directly tell the players that this particular part of the game is getting tedious for you. Everyone should be having fun at least most of the time, and if something is getting too tedious for someone, then it’s time to discuss revisions.

You can then directly discuss options for how you want to change it.

## google cloud platform – Possible to create policy limiting firewall rules in GCP?

Does anyone know if it’s possible to create an organization policy in GCP that would prevent firewall rules from having their source set to ‘any’ for specific ports?

For example, I want to prevent users from creating firewall rules that use ‘any’ as a source for ports such as SSH, RDP, SQL, and so on.

Thanks.
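For what it’s worth, GCP’s custom organization-policy constraints can target Compute Engine firewall resources, which seems like the closest fit. The sketch below is only an assumption of how such a constraint might look; the organization ID is a placeholder, and the constraint name and CEL condition should be verified against the current GCP custom-constraints documentation:

```yaml
# Hypothetical custom constraint denying firewall rules whose source
# includes 0.0.0.0/0. All names here are placeholders to verify.
name: organizations/ORG_ID/customConstraints/custom.denyOpenFirewallSource
resourceTypes:
- compute.googleapis.com/Firewall
methodTypes:
- CREATE
- UPDATE
condition: "resource.sourceRanges.exists(r, r == '0.0.0.0/0')"
actionType: DENY
displayName: Deny firewall rules open to any source
```

Restricting this to specific ports would require extending the CEL expression over the rule’s allowed-ports fields, which I would also double-check against the docs.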

## Limiting attributes returned for catalog_product.info on v1 SOAP API

The SOAP documentation seemingly always shows v1 and v2 as being the same, with the exception of how you call a method, and there are a ton of examples of v2 calls. Despite that, I cannot seem to get this to work in v1 of the SOAP API.

```
$attributes = new stdClass();
$attributes->attributes = array('price');

$result = $mageConn2->catalogProductInfo($session2, $sku, null, $attributes, "sku"); // works as expected

$result = $mageConn1->call($session1, 'catalog_product.info', array($sku, null, $attributes, "sku")); // returns all attributes
```

I feel I’ve tried every variation, and have had some luck in limiting the return to basic values, just not to the attributes I’ve requested.

How should I be calling `catalog_product.info` on the v1 SOAP API to limit the returned attributes?

NB: I’m using v1 because of the multiCall method as this is ultimately used for a product sync.

## fa.functional analysis – Surjectivity of the limiting operator

Consider the operator
$$\begin{eqnarray*} K_{n} &:& L^{2}(0,1)\longrightarrow L^{2}(0,1)^{n}, \\ u(x) &\mapsto & A_{n}U_{n}(x)=A_{n}\Big(u\big(\frac{x}{n}\big),u\big(\frac{x+1}{n}\big),\ldots,u\big(\frac{x+n-1}{n}\big)\Big)^{t} \end{eqnarray*}$$

where $$A_{n}$$ is an $$n\times n$$ matrix with $$\det(A_n)\neq 0$$ for all $$n\geq 1$$.

It is clear that $$K_{n}$$ is onto, since for any $$Y\in L^{2}(0,1)^{n}$$ the equation $$A_nU_n(x)=Y(x)$$ admits a unique solution in $$L^{2}(0,1)^{n}$$, which makes $$u$$ uniquely determined in this case.

I’m wondering about the surjectivity property as $$n\to \infty$$. If we assume that $$\lim_{n\to \infty}\det(A_n)\neq 0$$, do we obtain that the limit operator $$K_{\infty}$$ is surjective?

I was thinking of studying the adjoint $$K_{n}^{\ast}$$ and proving that $$K_{n}^{\ast}v_{n}=0$$ for some $$v_n$$ with $$\lVert v_{n}\rVert_{L^{2}(0,1)^{n}}=1$$, but I did not succeed. Any ideas? Thank you in advance.

## security – Is there a pattern for limiting a user’s ability to create objects?

Let’s say that – as a mental exercise – I’m building a simple CRUD system from first principles.

We’ll assume that my server process has got a way to authenticate the user associated with each request.

But then, what is to stop an authenticated user from turning adversarial and bombarding my system with requests to instantiate a new object of some type, until either the memory or the storage is exhausted?

My first thought is that each user should have some kind of object creation quota.

I’ve been looking through books on security and application design, and I can’t find any advice on implementing such quotas.

There is this bit of advice on the Common Weakness Enumeration:
https://cwe.mitre.org/data/definitions/400.html

But it just says that the solution is to recognize the attack and disallow further requests from the user. Then it acknowledges that this can be quite challenging.

It also touches on universal throttling, which is a possibility. But I still can’t find much advice on how to impose a logical limit on the number of requests per user per (minute? second?) that my server will handle.
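On the throttling point, a per-user limit is commonly implemented as a token bucket: each user accrues request tokens at a fixed rate up to a cap, and a request is refused when that user’s bucket is empty. A minimal in-memory sketch follows; the rate and capacity values are arbitrary assumptions, and a real deployment would need eviction of stale users and shared storage across server processes:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-user rate limiter: `rate` tokens per second, up to `capacity` stored."""

    def __init__(self, rate=5.0, capacity=10.0):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: capacity)   # user -> remaining tokens
        self.last_seen = {}                           # user -> time of last request

    def allow(self, user, now=None):
        now = time.monotonic() if now is None else now
        last = self.last_seen.get(user, now)
        self.last_seen[user] = now
        # Refill tokens for the time elapsed since this user's last request.
        self.tokens[user] = min(self.capacity,
                                self.tokens[user] + (now - last) * self.rate)
        if self.tokens[user] >= 1.0:
            self.tokens[user] -= 1.0
            return True
        return False

bucket = TokenBucket(rate=5.0, capacity=10.0)
# 10 requests at the same instant drain the bucket; the 11th is refused.
results = [bucket.allow("alice", now=100.0) for _ in range(11)]
print(results.count(True))               # 10
print(bucket.allow("alice", now=100.2))  # one token refilled after 0.2 s: True
```

The same structure caps object creation rather than requests if tokens are only deducted on create operations; an absolute per-user object quota is then a separate counter checked alongside the bucket.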

Are there any published sources that describe a solution to this design issue, or any generally accepted advice?

## Systemd restart rate limiting – Server Fault

I’d like to set up a systemd service so that it restarts quickly at first, but slows down after a short while. The reason is that a service might fail only temporarily, and it should come back up quickly for highest availability. But if the issue persists longer, the frequent restarts generate lots of log entries. In that case the restart interval should be longer, to reduce the load while still bringing the service back if the issue resolves at some point.

This is my current service file:

```
[Unit]
StartLimitIntervalSec=120
StartLimitBurst=4

[Service]
ExecStart=/root/test.sh
Restart=always
RestartSec=10
```

The test script loops as long as /root/test.run exists, then exits with an error code. I can cause the service to fail by deleting that file.

This is what I observed:

• Starting the service without the run file, it immediately fails. Then it retries 4 times and then fails forever. It will never try to start again.
• Starting the service with the run file, it runs at first. After deleting the file, the service fails. Then it restarts as above: 4 times and then never again.

This clearly isn’t rate limiting; it’s just a limit.

What do I have to change to get a rate limiting? After 4 restarts within 120 seconds have failed, the service should still try to start at a lower rate.

Running on latest Raspberry Pi OS, but this should apply to any Linux OS with systemd.
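For comparison, newer systemd releases (v254 and later, if I’m not mistaken) support restart back-off natively via `RestartSteps=` and `RestartMaxDelaySec=`, which describes exactly the behaviour I want. On such a version the unit might look like the sketch below; check `systemctl --version` before relying on these options:

```
[Unit]
# Disable the start limit so restart attempts are never abandoned.
StartLimitIntervalSec=0

[Service]
ExecStart=/root/test.sh
Restart=always
# First retry after 10 s, backing off over 5 steps
# to at most 5 min between attempts.
RestartSec=10
RestartSteps=5
RestartMaxDelaySec=300
```

On older systemd versions the usual workaround is `StartLimitIntervalSec=0` with a single, larger `RestartSec`, trading fast initial recovery for indefinite retries.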