microservices – Where to place an in-memory cache to handle repetitive bursts of database queries from several downstream sources, all within a span of a few milliseconds

I’m working on a Java service that runs on Google Cloud Platform and uses a MySQL database via Cloud SQL. The database stores simple relationships between users, the accounts they belong to, and groupings of accounts. Being an “accounts” service, it naturally has many downstreams. Downstream service A may, for example, hit several other upstream services B, C, and D, which in turn might call services E and F. Because so much is tied to accounts (checking permissions, getting user preferences, sending emails), every service from A to F ends up hitting my service with identical, repetitive calls. In other words, a single call to some endpoint might result in 10 queries to get a user’s accounts, even though that information obviously doesn’t change over a few milliseconds.

So where is it appropriate to place a cache?

  1. Should downstream service owners be responsible for implementing a cache? I don’t think so: why should they have to know about my service’s data, such as what can be cached and for how long?

  2. Should I put an in-memory cache in my service, like Guava’s CacheLoader (from Google’s common-cache library), in front of my DAO? But does this really provide anything over MySQL’s own caching? (Admittedly I don’t know much about how databases cache, but I’m sure that they do.)

  3. Should I put an in-memory cache in the Java client? We use gRPC, so we have generated clients that all those services A, B, C, D, E, and F use already. Putting a cache in the client means they can skip the outgoing call, but only if the service has made the same call before and the data has a long enough TTL to be useful, e.g., an account’s group is permanent. So that doesn’t help at all with the “bursts,” not to mention that the caches would live in instances in different zones. (I haven’t customized a generated gRPC client yet, but I assume there’s a way.)

I’m leaning toward #2, but my understanding of databases is weak, and I don’t know how to collect the data I’d need to justify the effort. What I feel I need to know is: how often do “bursts” of identical queries occur, how does MySQL process these bursts (especially given its caching), and what’s the bottom-line effect on downstream performance, if any at all?
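If #2 wins out, a minimal JDK-only sketch of a short-TTL cache in front of the DAO might look like the following. The class and method names (`TtlCache`, `loader`) are illustrative, not from the post; in practice Guava’s `LoadingCache` built via `CacheBuilder.expireAfterWrite` does the same job with less code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal TTL cache sitting in front of a DAO call. Even a TTL of a few
// hundred milliseconds absorbs the repeated identical queries in a burst
// while keeping staleness negligible.
public class TtlCache<K, V> {
    private record CacheEntry<T>(T value, long expiresAt) {}

    private final Map<K, CacheEntry<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;
    private final Function<K, V> loader; // the real DAO/database call

    public TtlCache(long ttlMillis, Function<K, V> loader) {
        this.ttlMillis = ttlMillis;
        this.loader = loader;
    }

    public V get(K key) {
        long now = System.currentTimeMillis();
        CacheEntry<V> e = map.get(key);
        if (e != null && e.expiresAt() > now) {
            return e.value(); // a burst of identical queries is served here
        }
        V v = loader.apply(key); // single database round trip
        map.put(key, new CacheEntry<>(v, now + ttlMillis));
        return v;
    }
}
```

Unlike MySQL’s internal caching, this also avoids the network round trip to Cloud SQL, which is usually the dominant cost for tiny lookup queries.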

I feel experience may answer this question better than finding those metrics myself.

Asking myself, “Why do I want to do this, given no evidence of any bottleneck?” Well: (1) it just seems wrong that there are so many duplicate queries, (2) it adds a lot of noise to our logs, and (3) I don’t want to wait until we scale to find out that it’s a deep issue.

networking – REST API call taking 50 seconds on the client’s server, but on ours it takes milliseconds

We have a LAMP stack. In our office, the code is deployed on CentOS 6; we get a response after 1612 ms and everything is good.

We deploy the code to our client, who uses CentOS 7. The Postman response time is between 22 s and 50 s; according to Postman, "transfer start" takes the longest. I’ve enabled gzip; nothing changed. The response has only two parameters, "auth" and "msg", and we call the API over HTTPS.

The client’s server is located behind a FortiClient VPN and they use an Nginx proxy. They control those two; our server is in their datacenter.

FortiClient doesn’t affect the speed, because I tried calling the API with curl after SSHing into the server itself, hitting both example.com/api.php and localhost/api.php; both had the same slow response time of 41 s.

What could the issue be? Could Nginx slow it down that much? SELinux is disabled.
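One way to see where the 41 s actually goes, assuming curl is available on the slow CentOS 7 box, is curl’s built-in timing write-out; the URL below stands in for the API endpoint from the question. If `dns` dominates it points at the resolver, if `ttfb` dominates it points at the backend or the proxy.

```shell
# Break the request down into phases: name lookup, TCP connect,
# TLS handshake, time to first byte, and total.
curl -o /dev/null -s -w \
  'dns:%{time_namelookup}s connect:%{time_connect}s tls:%{time_appconnect}s ttfb:%{time_starttransfer}s total:%{time_total}s\n' \
  https://example.com/api.php
```

Running it once against `localhost` and once against the public hostname, from the server itself, separates Nginx/PHP time from VPN and network time.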

javascript – Add a milliseconds value stored as a string to a number of milliseconds

When I store Date.now() in the database as a bigint (PostgreSQL), reading the value back gives me a string instead of a number (because of JavaScript’s inability to safely handle large integers).

Is there a way I can use the JavaScript Date library to compare the milliseconds string with the Date.now() milliseconds?

Something like this…

const oldDate = "1590367617261";
const timeout = 5 * 1000; // ms

console.log(oldDate + timeout); // "+" here concatenates a string and a number

Expected Output:

1590367622261

Actual Output:

15903676172615000
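A sketch of one fix: convert the stored string back to a number before doing arithmetic. Date.now() values are far below `Number.MAX_SAFE_INTEGER` (2 ** 53 − 1), so a plain `Number()` conversion is safe here; the variable names mirror the snippet above.

```javascript
// Convert the bigint-column string to a number before arithmetic.
const oldDate = "1590367617261"; // bigint column read back as a string
const timeout = 5 * 1000; // ms

const deadline = Number(oldDate) + timeout; // 1590367622261 (a number, not a string)
const stillValid = Date.now() <= deadline;  // compare against "now"

// If the column could ever exceed 2 ** 53 - 1, BigInt works too:
const deadlineBig = BigInt(oldDate) + BigInt(timeout); // 1590367622261n
```

No Date object is needed for the comparison itself, since both sides are already epoch milliseconds.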

ssh – (error code 28) Resolving timed out after 5000 milliseconds on an Ubuntu server – DNS?

Intent:

I am trying to load images programmatically with cURL into PrestaShop on a XAMPP server, via SSH.

What I tried:

I have looked around, and the issue may be in the setup of the server.

I tried changing the values of max_execution_time, memory_limit, max_input_vars, and max_input_time in the php.ini file, but it did not work.

I tried to ping the website I am collecting the images from:

ping brandsdistribution.com
PING brandsdistribution.com (109.233.123.248) 56(84) bytes of data.

and it keeps running until I interrupt it, at which point it returns:

--- brandsdistribution.com ping statistics ---
135 packets transmitted, 0 received, 100% packet loss, time 137194ms

while if I ping google.com:

ping google.com
PING google.com (172.217.168.206) 56(84) bytes of data.
64 bytes from ams16s32-in-f14.1e100.net (172.217.168.206): icmp_seq=1 ttl=53 time=56.4 ms
64 bytes from ams16s32-in-f14.1e100.net (172.217.168.206): icmp_seq=2 ttl=53 time=47.6 ms
64 bytes from ams16s32-in-f14.1e100.net (172.217.168.206): icmp_seq=3 ttl=53 time=43.7 ms
64 bytes from ams16s32-in-f14.1e100.net (172.217.168.206): icmp_seq=4 ttl=53 time=80.0 ms
--- google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 43.747/56.940/80.002/14.079 ms

Question

Is this a DNS issue? How can I fix it?

Error message

(1/1) Exception
file_get_contents_curl failed to download https://www.brandsdistribution.com/prod/stock_product_image_107510_1178872247.jpg : (error code 28) Resolving timed out after 5000 milliseconds

in Tools.php line 2162
at ToolsCore::file_get_contents_curl('https://www.brandsdistribution.com/prod/stock_product_image_107510_1178872247.jpg', 5, null)

in Tools.php line 2235
at ToolsCore::file_get_contents('https://www.brandsdistribution.com/prod/stock_product_image_107510_1178872247.jpg', false, resource)

in Tools.php line 2294
at ToolsCore::copy('https://www.brandsdistribution.com/prod/stock_product_image_107510_1178872247.jpg', '/var/www/html/img/tmp/ps_importTmA1vB')

in productCreate.php line 107
at copyImg('68', '157', 'https://www.brandsdistribution.com/prod/stock_product_image_107510_1178872247.jpg', 'products', false)

in productCreate.php line 66
at addImage(object(Product), 'https://www.brandsdistribution.com/prod/stock_product_image_107510_1178872247.jpg', true)

in productCreate.php line 44
at createProduct(array('PRODUCT', '107510', 'Nike', 'W-ZoomGravity', 'BQ3203-006_W-ZoomGravity', '18', '101.00', '81.00', '57.00', 'Genere:Donna - Tipologia:Sneakers - Tomaia:materiale sintetico, materiale tessile - Interno:materiale sintetico, materiale tessile - Suola:gomma', '2.00', 'https://www.brandsdistribution.com/prod/stock_product_image_107510_1178872247.jpg', 'https://www.brandsdistribution.com/prod/stock_product_image_107510_2136019726.jpg', 'https://www.brandsdistribution.com/prod/stock_product_image_107510_1040197763.jpg', 'Vietnam', 'Nike', '', '', '', 'Scarpe', '', 'Sneakers', '', '', 'Continuativi', 'Rosa', '', 'pink,dimgray', 'Donna', '', '', '', '', ''))
in productCreate.php line 22

postgresql – Postgres ORDER BY timestamp does not work properly with milliseconds

I use the following query, which sorts rows by timestamp in ascending order before deleting them:

DELETE FROM @tableName
WHERE id = ANY (
    SELECT id
    FROM @tableName
    WHERE source = :p1 AND target = :p2 @readCondition
    ORDER BY created_date
    LIMIT @limit
    FOR UPDATE SKIP LOCKED
)
RETURNING *;

But I get results like these:

MessageType: AssignmentChange.v1, CreatedDate: 2019-12-05T10:55:22.230886
MessageType: AssignmentChange.v1, CreatedDate: 2019-12-05T10:55:22.279604
MessageType: AssignmentChange.v1, CreatedDate: 2019-12-05T10:55:22.276191
MessageType: AssignmentChange.v1, CreatedDate: 2019-12-05T10:55:22.202338

As you can see, the returned rows are not sorted by their millisecond fractions.
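One thing worth checking: PostgreSQL does not guarantee any particular order for the rows produced by `DELETE ... RETURNING`, even when the inner `SELECT` is ordered, so the sort itself may be fine while the returned rows come back shuffled. A sketch of a workaround (table and column names are assumptions standing in for the placeholders above) is to re-sort the `RETURNING` output explicitly:

```sql
-- Wrap the DELETE in a CTE and order its RETURNING output.
WITH deleted AS (
    DELETE FROM my_table
    WHERE id = ANY (
        SELECT id
        FROM my_table
        WHERE source = :p1 AND target = :p2
        ORDER BY created_date
        LIMIT 100
        FOR UPDATE SKIP LOCKED
    )
    RETURNING *
)
SELECT *
FROM deleted
ORDER BY created_date;
```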

java – Consume a REST API sequentially, with at least 100 milliseconds between calls, in a multithreaded environment (with RestTemplate)

I have a highly concurrent environment that consumes a REST web service. The REST service’s documentation states that calls to the API must be made at least 100 milliseconds apart (at most 10 calls per second). The REST API also does not support concurrent calls: requests must be made one at a time, waiting for each response before sending the next. The approach I came up with is:

private static RestTemplate restTemplate; // Rest template configured and working.

public ResponseEntity consume() {
    ...
    try {
        synchronized (restTemplate) { // Locking on static RestTemplate.
            response = restTemplate.exchange(endpointUrl, httpMethod, request, classType, uriData);

            try {
                restTemplate.wait(100); // Is this OK?
            } catch (final InterruptedException e) {
                e.printStackTrace();
            }
        }

        return response;
    } catch (final HttpStatusCodeException e) {
        throw new RuntimeException(e.getResponseBodyAsString(), e);
    }
}

Is this right? Thanks a lot!
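A cautionary sketch rather than a definitive answer: `Object.wait(100)` releases the monitor while waiting (and can also wake spuriously), so another thread may enter the synchronized block and fire a request before the 100 ms have passed. One alternative keeps the lock held for the whole spacing interval with `Thread.sleep`; the names here (`RateLimitedClient`, `MIN_INTERVAL_MS`) are illustrative.

```java
import java.util.function.Supplier;

// Serializes all outgoing calls and enforces >= 100 ms between them.
// Holding the lock while sleeping is intentional here: the API forbids
// concurrent calls, so all callers must queue anyway.
public class RateLimitedClient {
    private static final long MIN_INTERVAL_MS = 100;
    private static final Object LOCK = new Object();
    private static long lastCallAt = 0; // epoch ms of the previous request

    public static <T> T call(Supplier<T> request) throws InterruptedException {
        synchronized (LOCK) {
            long waitMs = lastCallAt + MIN_INTERVAL_MS - System.currentTimeMillis();
            if (waitMs > 0) {
                Thread.sleep(waitMs); // sleep() keeps the monitor; wait() would not
            }
            try {
                return request.get(); // e.g. () -> restTemplate.exchange(...)
            } finally {
                lastCallAt = System.currentTimeMillis();
            }
        }
    }
}
```

This also skips the sleep entirely when calls are already more than 100 ms apart, instead of delaying every single request.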

algorithms – Calculating milliseconds and avoiding floating-point numbers

I have a variable that counts up or down from 0 to 255, stepping once every (X) milliseconds, so:

(X) × 255 = time in milliseconds; / 60000 = minutes

(X) milliseconds is a value that only ever increases, for example after each key press:

(X) + (X) × 255 = time in milliseconds; / 60000 = minutes
(X) + (X) + (X) × 255 = time in milliseconds; / 60000 = minutes
etc.

What number should we use for (X) so that we get a whole number each time it increases (+1 minute at a time)? I want an integer, not a float.

The language is C++, if that helps in any way…
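One reading of the question, stated as an assumption: pick (X) so that a full 0..255 sweep, which lasts (X) × 255 ms, is an exact whole number of minutes, keeping everything in integer arithmetic. That holds exactly when (X) is a multiple of 60000 / gcd(255, 60000) = 60000 / 15 = 4000, so the smallest such step is 4000 ms (17 minutes per sweep). Function names below are illustrative.

```cpp
#include <cstdint>
#include <numeric>  // std::gcd (C++17)

constexpr std::int64_t kSteps = 255;        // counter range 0..255
constexpr std::int64_t kMsPerMinute = 60000;

// X * 255 is divisible by 60000 exactly when X is a multiple of
// 60000 / gcd(255, 60000) = 60000 / 15 = 4000 ms.
constexpr std::int64_t smallest_step_ms() {
    return kMsPerMinute / std::gcd(kSteps, kMsPerMinute);
}

// Whole minutes for one full 0..255 sweep at step x_ms
// (exact whenever x_ms is a multiple of smallest_step_ms()).
constexpr std::int64_t sweep_minutes(std::int64_t x_ms) {
    return x_ms * kSteps / kMsPerMinute;
}
```

Any multiple of 4000 ms also works, scaling the whole-minute count by the same factor.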

javascript – Pure JS countdown clock does not convert milliseconds correctly

I have a reasonably pure-JS countdown clock, and I’m having trouble getting it to convert milliseconds correctly.

Here's a working plunker (next to the date counter)

It should show only 38 days, but instead it shows 184.

//// Should be a countdown to 4 February 2019 /////

let cd = new countdown({
  cont: document.querySelector(".container"),
  endDate: 1549263600000,
  outputTranslation: {
    year: "years",
    week: "weeks",
    day: "days",
    hour: "hours",
    minute: "minutes",
    second: "seconds"
  },
  endCallback: null,
  outputFormat: "day|hour|minute|second"
});
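For the day count itself, a sketch of converting a millisecond difference into whole days/hours/minutes/seconds with integer division; the function name is illustrative, and `diffMs` would be `endDate - Date.now()` in the clock above.

```javascript
// Break a millisecond difference into whole days/hours/minutes/seconds.
function msToParts(diffMs) {
  const sec = 1000;
  const min = 60 * sec;
  const hour = 60 * min;
  const day = 24 * hour;
  return {
    days: Math.floor(diffMs / day),
    hours: Math.floor((diffMs % day) / hour),
    minutes: Math.floor((diffMs % hour) / min),
    seconds: Math.floor((diffMs % min) / sec),
  };
}
```

A result like 184 days for a 38-day gap usually means one unit’s remainder is not being carried into the next smaller unit; taking the remainder (`%`) at each step, as above, avoids that.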

Thanks, I appreciate the help!