microservices – Where to place an in-memory cache to handle repetitive bursts of database queries from several downstream sources, all within a few milliseconds span

I’m working on a Java service that runs on Google Cloud Platform and uses a MySQL database via Cloud SQL. The database stores simple relationships between users, the accounts they belong to, and groupings of accounts. Being an “accounts” service, it naturally has many downstreams. Downstream service A may, for example, hit several other upstream services B, C, D, which in turn might call other services E and F; but because so much is tied to accounts (checking permissions, getting user preferences, sending emails), every service from A to F ends up hitting my service with identical, repetitive calls. In other words, a single call to some endpoint might result in 10 queries to get a user’s accounts, even though that information obviously doesn’t change over a few milliseconds.

So where is it appropriate to place a cache?

  1. Should downstream service owners be responsible for implementing a cache? I don’t think so, because they shouldn’t need to know the details of my service’s data, such as what can be cached and for how long.

  2. Should I put an in-memory cache in my service, such as a Guava CacheLoader, in front of my DAO? But does this really provide anything over MySQL’s own caching? (Admittedly I don’t know much about how databases cache, but I’m sure they do.)

  3. Should I put an in-memory cache in the Java client? We use gRPC, so services A, B, C, D, E, F already use generated clients. Putting a cache in the client means they can skip the outgoing call, but only if the service has made that call before and the data has a long enough TTL to be useful, e.g. an account’s group is permanent. So, yeah, that doesn’t help at all with the “bursts,” not to mention that the caches would live in instances in different zones. (I haven’t customized a generated gRPC client yet, but I assume there’s a way.)

I’m leaning toward #2 but my understanding of databases is weak, and I don’t know how to collect the data I need to justify the effort. I feel like what I need to know is: How often do “bursts” of identical queries occur, how are these bursts processed by MySQL (esp. given caching), and what’s the bottom-line effect on downstream performance as a result, if any at all?

I feel experience may answer this question better than finding those metrics myself.

Asking myself, “Why do I want to do this, given no evidence of any bottleneck?” Well, (1) it just seems wrong that there are so many duplicate queries, (2) they add a lot of noise to our logs, and (3) I don’t want to wait until we scale to find out that it’s a deep issue.
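If you go with #2, one low-risk shape is a small read-through cache with a TTL of a few hundred milliseconds in front of the DAO: long enough to absorb a burst of identical lookups, short enough that permission changes are not served stale for long. Below is a minimal hand-rolled sketch (the class and names are illustrative, not from the service above; Guava’s LoadingCache with expireAfterWrite gives the same shape in production, with eviction and stats included):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Tiny read-through cache with a fixed TTL, meant to sit in front of a DAO
// method such as accountsForUser(userId). Entries are refreshed in place on
// expiry; there is no size bound, so this suits a small, hot key space.
class TtlCache<K, V> {
    private record Entry<V>(V value, long expiresAtNanos) {}

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlNanos;

    TtlCache(long ttlMillis) {
        this.ttlNanos = ttlMillis * 1_000_000L;
    }

    // Returns the cached value, or invokes the loader (the real DAO call)
    // when the entry is missing or expired.
    V get(K key, Function<K, V> loader) {
        long now = System.nanoTime();
        Entry<V> e = map.compute(key, (k, old) ->
                (old != null && old.expiresAtNanos() > now)
                        ? old
                        : new Entry<>(loader.apply(k), now + ttlNanos));
        return e.value();
    }
}
```

A TTL in the 250–500 ms range would collapse the 10 identical queries per endpoint call into one database hit while keeping staleness well under a second; Guava additionally bounds the cache size, which this sketch does not.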

usability study – Is a repetitive three-second response time the absolute worst?

My very first IT manager back in the 90s once stated that the absolute worst repetitive response time an application can have is three seconds. His argument was that this is long enough to be a significant annoyance, and too short to get a (cognitive) break. So if you want to design a system for maximum frustration, make every action have an average response time of three seconds (and allow for some variation too, just to make it unpredictable as well).

This was my manager’s anecdotal input, but I have always thought of it as valid. And it seems to make sense given Jakob Nielsen’s thoughts on the matter.

Is there any research to back up (or invalidate) the claim?

What is the best way to present the repetitive task of a "user filling out a doctor schedule" 3 to 5 times?

This task needs to be done by an admin to create a doctor’s schedule and fill in the working hours and shifts for the doctor each week. The issue I’m facing is how to present each week’s schedule with a good user experience.

In the given scenario, the image shows a scheduling period of one month but displays only one week. How can I show the other three weeks and reduce the repetition of filling in each week?

[Image: scheduling plan]

python – Codewars kata – “Repetitive Sequence”

I am trying to solve the following Codewars kata.

We are given a list as
seq = (0, 1, 2, 2)

We have to write a function that will first extend the list using the following rule:
if n = 3, then since seq[3] = 2, the new list will be seq = (0, 1, 2, 2, 3, 3)
if n = 4, then since seq[4] = 3, the new list will be seq = (0, 1, 2, 2, 3, 3, 4, 4, 4)
if n = 5, then since seq[5] = 3, the new list will be seq = (0, 1, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5), and so on.

Then the function will return the n-th element of the list.

Some elements of the list:
(0, 1, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 9,
10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13,
14, 14, 14, 14, 14, 14, 15, 15, 15, 15, 15, 15, 16, 16, 16, 16, 16, 16, 16,
17, 17, 17, 17, 17, 17, 17, 18, 18, 18, 18, 18, 18, 18, 19, 19, 19, 19, 19, 19, 19,
20, 20, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 21, 21, 21, 21)

Constraint for Python:
0 <= n <= 2^41

My code runs successfully on my machine for any value of n, including n = 2^41 (within 2.2 s). But it times out on Codewars. Can anyone help me optimize my code? Thanks in advance.

My code:

def find(n):
    arr = [0, 1, 2, 2]          # list instead of tuple; index with [] not ()
    if n <= 3:
        return arr[n]
    length = 4                  # length of the sequence built so far
    for i in range(3, n + 1):
        length += arr[i]        # value i occurs arr[i] times
        if length > n:          # index n falls inside the run of i's
            return i
        arr += [i] * arr[i]
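For the timeout itself: the list is what kills you, because arr grows to roughly n elements while the answer grows far more slowly. The listing above suggests the sequence is self-describing (the value v occurs exactly seq[v] times), so a leaner approach, based on my reading of the kata rather than any official solution, is to store only one run length per value and walk a pointer through the runs to look up seq[value] when it is needed:

```python
def find(n):
    # first few terms, straight from the kata
    first = [0, 1, 2, 2]
    if n <= 3:
        return first[n]
    runs = [1, 1, 2]        # runs[v] = how many times the value v occurs
    length = 4              # length of the full sequence covered so far
    v_ptr, end_ptr = 2, 3   # run of value v_ptr covers indices up to end_ptr
    value = 3               # next value whose run we are about to add
    while True:
        # seq[value] = number of copies of `value`; advance the pointer
        # until its run covers index `value`
        while end_ptr < value:
            v_ptr += 1
            end_ptr += runs[v_ptr]
        mult = v_ptr        # seq[value] == v_ptr
        runs.append(mult)
        length += mult
        if length > n:      # index n falls inside the run of `value`
            return value
        value += 1
```

runs only needs one entry per distinct value, so its size tracks the answer rather than n, which is dramatically smaller than materializing the whole sequence up to index 2^41.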

opengl – How to compress repetitive information when uploading mesh data?

I want to avoid sending repetitive information when drawing a mesh.

If I use a single point for each face, plus two vectors as additional attributes that represent the offsets to the other two vertices, I can use that information in a geometry shader to produce the normal and the two remaining points, followed by any additional attributes for that face.

Is the computational cost worth the reduction in transferred data? Or is there a better way to optimize the pipeline for sending a mesh over to draw?

Use RSolve for a recurrence equation

I want to use Mathematica to check the solution of a recurrence equation. I have the following equation:

$Q_{k+1} = Q_k + \alpha (r_{k+1} - Q_k)$.

I also have a derivation that shows how to get a solution for each $k$:

$Q_k = Q_{k-1} + \alpha (r_k - Q_{k-1})$

$= \alpha r_k + (1 - \alpha) Q_{k-1}$

$= \alpha r_k + (1 - \alpha) \alpha r_{k-1} + (1 - \alpha)^2 Q_{k-2}$

$= (1 - \alpha)^k Q_0 + \sum_{i=1}^{k} \alpha (1 - \alpha)^{k-i} r_i,$

where $Q_0$ is an arbitrary constant. However, if I use RSolve I get a different answer.

RSolve[Q[k] == Q[k - 1] + \[Alpha] (Subscript[r, k] - Q[k - 1]), Q[k], k]

gives me the solution:

$(1 - \alpha)^{k-1} c_1 + (1 - \alpha)^{k-1} \sum_{K[1]=0}^{k-1} (1 - \alpha)^{-K[1]} \alpha\, r_{K[1]+1}.$

This is close, but not exactly what I want. What am I missing here?
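For what it’s worth, the two forms look reconcilable; substituting $i = K[1] + 1$ in RSolve’s sum (a quick check on my part, not something from the documentation) gives:

```latex
(1 - \alpha)^{k-1} \sum_{K[1]=0}^{k-1} (1 - \alpha)^{-K[1]} \alpha\, r_{K[1]+1}
  = \sum_{i=1}^{k} \alpha (1 - \alpha)^{k-1-(i-1)} r_i
  = \sum_{i=1}^{k} \alpha (1 - \alpha)^{k-i} r_i
```

so the inhomogeneous parts agree term by term, and the homogeneous parts match once $c_1 = (1 - \alpha) Q_0$; the two answers differ only in how the arbitrary constant is parameterized.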

PHP – Store repetitive elements in the Laravel pivot table

I work with Laravel and have these models (a summary) with the following relationships:

Table products
  increment id;
  string    description;

Table materials
  increment id;
  string    name;  

Table material_product
  increment id;
  integer   product_id;
  integer   material_id;
  string    dimension;  

class Product extends Model
{
    public function materials()
    {
        return $this->belongsToMany(Material::class)->withPivot('dimension');
    }
}

class Material extends Model
{
    public function products()
    {
        return $this->belongsToMany(Product::class)->withPivot('dimension');
    }
}

I use attach (to save) and sync (to update) the data in my pivot table. The problem is that in the business model of my app, I have to save repeated items in my pivot table, for example:

products
+-----+--------------+
|  id | description  |
+-----+--------------+
|  1  | dining table |
+-----+--------------+

materials
+-----+------+
|  id | name |
+-----+------+
|  1  | wood |
+-----+------+

material_product
+-----+------------+-------------+-----------+
|  id | product_id | material_id | dimension |
+-----+------------+-------------+-----------+
|  1  |     1      |      1      |  10 x 2   |
|  2  |     1      |      1      |   5 x 3   |
+-----+------------+-------------+-----------+

So neither of these methods works for me. If I send the same material more than once, only one row is saved in my database. How can I do this? Any ideas?
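One possible explanation (an assumption on my part, since the saving code isn’t shown): when attach() or sync() is given an array keyed by material id, PHP array keys are unique, so a repeated id collapses to a single entry before Laravel ever sees it. Calling attach() once per pivot row does insert duplicate pairs, each with its own pivot attributes; a sketch:

```php
// one attach() per pivot row; each call inserts its own material_product
// record, even when the (product_id, material_id) pair repeats
$product->materials()->attach($material->id, ['dimension' => '10 x 2']);
$product->materials()->attach($material->id, ['dimension' => '5 x 3']);
```

For updates, sync() will collapse the duplicates again, so it is probably easier to address individual rows through the pivot table’s own id, e.g. by adding 'id' to withPivot() or by giving the pivot its own model.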

Plugins – How do I write dynamic repetitive and restricted blocks in WordPress?

I am trying to implement a dynamic WP block: a quick-links block that lets me configure up to 7 links. I currently have the following:


edit: props => {

    const {
      attributes: { 
        link1,
        link2
      },
      setAttributes
    } = props;

    const onChangeLink1 = newLink => {
      setAttributes({ link1: newLink });
    };

    const onChangeLink2 = newLink => {
      setAttributes({ link2: newLink });
    };

    return (
      /* a "Quick Links" heading and a list item per link were rendered
         here; the JSX markup was lost when the question was pasted */
      ...
    );
  },

  save(props) {
    return null;
  }

So my questions are:

1) How would I write the code so that a single onChangeLink handler works for more than one link?

const {
  attributes: {
    links: {
      ???
    }
  }
}
const onChangeLink = newLink => {
  setAttributes({ links(??): newLink });
};
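One common pattern (a sketch, under the assumption that the block registers a single links attribute of type array rather than link1 and link2) is to keep the links in one array and write one handler that takes the slot index. Plain JS below, with setAttributes stubbed out since there are no editor props outside the block:

```javascript
const MAX_LINKS = 7;

// stand-ins for the block editor's state; in the real edit function,
// `attributes` and `setAttributes` come from props
let attributes = { links: [] };
const setAttributes = patch => { attributes = { ...attributes, ...patch }; };

// one handler for every slot, replacing onChangeLink1, onChangeLink2, ...
function onChangeLink(index, newLink) {
  if (index < 0 || index >= MAX_LINKS) return; // enforces the cap of 7
  const next = [...attributes.links];
  next[index] = newLink;
  setAttributes({ links: next });
}

onChangeLink(0, { url: 'https://example.com', text: 'Example' });
onChangeLink(1, { url: 'https://example.org', text: 'Docs' });
onChangeLink(9, { url: 'https://nope.invalid', text: 'dropped' }); // over the cap

// render only the slots the user actually filled in (question 3)
const filled = attributes.links.filter(Boolean);
```

In the render, mapping over filled instead of a fixed seven items answers question 3 without placeholder rows.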

2) Is it possible to limit the number of list items to a maximum of 7? Would I have to hardcode the seven items within the edit function?

3) If the user only enters 3 or fewer links, how can I display only those 3 instead of all 7?

Should I loop in my callback and build the string before rendering it?

        $mystr = "";

        for ($i = 0; $i <= ????; $i++) {
            // the anchor markup inside sprintf() was lost in the paste
            $mystr .= sprintf('%2$s', $link($i).url, __($link($i).text, "myplugin"));
        }

        return sprintf('
            Quick Links
            %1$s
        ', $mystr);