microservices – Where to place an in-memory cache to handle repetitive bursts of database queries from several downstream sources, all within a few milliseconds span

I’m working on a Java service that runs on Google Cloud Platform and uses a MySQL database via Cloud SQL. The database stores simple relationships between users, the accounts they belong to, and groupings of accounts. Since it’s an “accounts” service, it naturally has many downstreams. Downstream service A may, for example, hit several other upstream services B, C, D, which in turn might call other services E and F, but because so much is tied to accounts (checking permissions, getting user preferences, sending emails), every service from A to F ends up hitting my service with identical, repetitive calls. In other words, a single call to some endpoint might result in 10 queries to get a user’s accounts, even though that information obviously doesn’t change over a few milliseconds.

So where is it appropriate to place a cache?

  1. Should downstream service owners be responsible for implementing a cache? I don’t think so; why should they need to know details of my service’s data, such as what can be cached and for how long?

  2. Should I put an in-memory cache in my service, such as Guava’s CacheLoader, in front of my DAO? But does this really provide anything over MySQL’s own caching? (Admittedly I don’t know much about how databases cache, but I’m sure that they do.)

  3. Should I put an in-memory cache in the Java client? We use gRPC, so we have generated clients that all those services A, B, C, D, E, F already use. Putting a cache in the client means they can skip making outgoing calls, but only if the service has made this call before and the data can have a long-enough TTL to be useful, e.g. an account’s group is permanent. So that doesn’t help at all with the “bursts,” not to mention that the caches would live on instances in different zones. (I haven’t customized a generated gRPC client yet, but I assume there’s a way.)
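For what it’s worth, option #2 can be sketched without any library at all. Below is a minimal read-through cache with a short TTL sitting in front of a hypothetical DAO-style loader; all names are illustrative, and in a real service Guava’s CacheBuilder (expireAfterWrite) gives the same behavior with eviction and statistics built in.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal sketch of option #2: a short-TTL read-through cache in front of a DAO.
// The loader (e.g. dao::loadAccounts) and key names are hypothetical.
final class TtlCache<K, V> {
    private record Entry<V>(V value, long expiresAtNanos) {}

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
    private final long ttlNanos;
    private final Function<K, V> loader;

    TtlCache(long ttlMillis, Function<K, V> loader) {
        this.ttlNanos = ttlMillis * 1_000_000L;
        this.loader = loader;
    }

    V get(K key) {
        long now = System.nanoTime();
        Entry<V> e = entries.compute(key, (k, old) ->
            (old != null && old.expiresAtNanos > now)
                ? old                                               // fresh entry: reuse it
                : new Entry<>(loader.apply(k), now + ttlNanos));    // stale or missing: reload
        return e.value;
    }
}
```

Even a TTL of 100 ms would absorb the 10-query burst described above with negligible staleness, and because `ConcurrentHashMap.compute` serializes concurrent updates to the same key, overlapping identical lookups wait for one load instead of each hitting the database.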

I’m leaning toward #2, but my understanding of databases is weak, and I don’t know how to collect the data I’d need to justify the effort. I feel like what I need to know is: how often do “bursts” of identical queries occur, how does MySQL process these bursts (especially given its caching), and what’s the bottom-line effect on downstream performance, if any at all?

I feel experience may answer this question better than finding those metrics myself.

Asking myself, “Why do I want to do this, given no evidence of any bottleneck?” Well, (1) it just seems wrong that there are so many duplicate queries, (2) it adds a lot of noise to our logs, and (3) I don’t want to wait until we scale to find out that it’s a deep issue.
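Since the core problem is identical queries arriving within a few milliseconds, one variant worth naming is request coalescing (“single-flight”): concurrent identical lookups share a single in-flight database call, so there is no TTL and no staleness to reason about. A minimal sketch, with hypothetical key and loader names:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of request coalescing: identical lookups that overlap in time
// share one database call. Nothing is retained once the call completes.
final class SingleFlight<K, V> {
    private final ConcurrentHashMap<K, CompletableFuture<V>> inFlight = new ConcurrentHashMap<>();

    V load(K key, Function<K, V> loader) {
        boolean[] owner = {false};
        CompletableFuture<V> f = inFlight.computeIfAbsent(key, k -> {
            owner[0] = true;                      // this caller will do the load
            return new CompletableFuture<>();
        });
        if (owner[0]) {
            try {
                f.complete(loader.apply(key));    // only the first caller hits the DB
            } catch (RuntimeException e) {
                f.completeExceptionally(e);
            } finally {
                inFlight.remove(key);             // later calls query fresh again
            }
        }
        return f.join();                          // everyone shares the one result
    }
}
```

One subtlety: once the owning call finishes, the entry is removed, so coalescing only spans truly overlapping calls; sequential repeats still reach the database unless combined with a TTL.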

Linux – How can the priority of upstream packets be increased over downstream?

I captured packets with tcpdump while downloading and uploading at the same time, and I noticed that downstream packets always seem to get priority over the upstream packets I would like to prioritize:

07:15:15.243304 IP (tos 0x0, ttl 128, id 4070, offset 0, flags [DF], proto TCP (6), length 52) > Flags [.], cksum 0x6521 (correct), ack 1169331, win 508, options [nop,nop,sack 1 {1170558:1174239}], length 0
07:15:15.243372 IP (tos 0x0, ttl 128, id 4071, offset 0, flags [DF], proto TCP (6), length 52) > Flags [.], cksum 0x3a92 (correct), ack 1025772, win 513, options [nop,nop,sack 1 {1026999:1031907}], length 0
07:15:15.243380 IP (tos 0x0, ttl 128, id 4072, offset 0, flags [DF], proto TCP (6), length 52) > Flags [.], cksum 0x8888 (correct), ack 869944, win 508, options [nop,nop,sack 1 {871171:876079}], length 0
07:15:15.243418 IP (tos 0x0, ttl 128, id 30984, offset 0, flags [DF], proto TCP (6), length 1420) > Flags [.], cksum 0x8a29 (correct), seq 1480491:1481871, ack 94, win 508, length 1380
07:15:15.243574 IP (tos 0x0, ttl 55, id 0, offset 0, flags [DF], proto TCP (6), length 1267) > Flags [.], cksum 0x6090 (correct), seq 939883:941110, ack 0, win 256, length 1227: HTTP
07:15:15.243848 IP (tos 0x0, ttl 55, id 0, offset 0, flags [DF], proto TCP (6), length 1267) > Flags [.], cksum 0x773f (correct), seq 1031907:1033134, ack 1, win 256, length 1227: HTTP
07:15:15.243940 IP (tos 0x0, ttl 55, id 0, offset 0, flags [DF], proto TCP (6), length 1267)

rest – HTTP status code if downstream validation fails

I have an API that charges a fee for an order. It accepts the orderId and the amount as inputs. It then makes a '/charge' call to the downstream, which returns a 202. Immediately after this call, it hits a '/verify' endpoint to ensure that the previous charge was successful.

Now it can happen that the charge has been rejected. One of the reasons for this may be that the user has used an expired card. What should the error code be in this scenario?

In my view, I cannot send a 4xx, because from my API's perspective the request was correct. A bad request is something the user can correct; in this case he cannot fix anything, because the API only takes the 'orderId' and the total amount to be charged.

If I send a 5xx, 500 does not make sense, because this was not an "unexpected state" on my server. And I cannot send a 503, because my server is not overloaded or down for maintenance.

Currently, I'm returning a 503 with an application code that reads: "Payment confirmation failed."
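For concreteness, here is one possible mapping from downstream verification outcomes to statuses. It only illustrates the trade-off being debated (4xx when the user can act, 5xx when a dependency failed), not a claim about which code is correct, and the outcome names are hypothetical:

```java
// Illustration only: one way to map per-outcome rather than per-endpoint.
// VerifyOutcome and its members are hypothetical names, not from the real API.
enum VerifyOutcome { APPROVED, CARD_EXPIRED, DOWNSTREAM_UNAVAILABLE }

final class ChargeStatusMapper {
    static int statusFor(VerifyOutcome outcome) {
        return switch (outcome) {
            case APPROVED -> 200;               // charge confirmed
            case CARD_EXPIRED -> 402;           // user-actionable payment failure
            case DOWNSTREAM_UNAVAILABLE -> 502; // the upstream dependency failed
        };
    }
}
```

Note that 402 is formally "reserved for future use" and some APIs prefer 422 or 502 here; the sketch's point is only that the status should depend on the outcome, not on which endpoint reported it.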

Routing – Does FRR or Quagga support MPLS LDP "Downstream On-Demand" mode?

We are testing MPLS LDP in a Linux-based environment with a Quagga daemon configuration. We cannot capture the "Label Request Message". To receive this message, the LDP implementation must be in Downstream On-Demand mode, but we do not know how to activate that mode. Can someone tell me whether it is supported or not?