Encryption – Encrypting/decrypting data via a custom IdP attribute in an SSO scenario

I'm considering the following design for an application with client-side encryption.

Suppose we have a web application that manages and stores users' confidential business data, and we do not want the backend to be able to read that data: the backend should not need to be trusted with it, acting only as a store-and-retrieve service, with most of the work running in the browser front end.

The application also uses Single Sign-On, integrated with the users' identity providers through one of the standard technologies, e.g. OIDC or SAML2 (this is exactly what my question is about).

We let the user's browser generate an RSA key pair that is used to protect the user's own records, possibly seeded with mouse entropy.
Next, I do not want to store the user's private key in the browser; instead, I encrypt it with the IdP's public key and store it, along with the user's plain public key, in the application backend, keyed by the user ID.

Whenever the application wants to store user data, the data must be encrypted with the public key of that user ID.

When the user needs to retrieve their data, the application initiates a login at the IdP, passing the user's encrypted private key in a custom attribute. I want the IdP to decrypt it and return the user's private key as part of the claims only if authentication succeeds.
This private key can then be used in the browser to decrypt the user's data blob and proceed according to the business logic.

Is this technically possible, and which protocol and which feature could be used with SSO?

I understand that the IdP would then know the user's private key. This could be mitigated by applying an additional KEK/DEK encryption layer to the user's private key in the browser.
Do you see any other potential security issues here?
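On the KEK/DEK idea, here is a minimal sketch of envelope-style key wrapping, in Python with only the standard library. This is a toy illustration of the layering (wrap the private key under a KEK the IdP never sees, here derived from a passphrase — my assumption), not production cryptography; a real browser implementation would use AES-KW or AES-GCM via Web Crypto, and all function names below are hypothetical:

```python
import hashlib
import hmac
import secrets

def derive_kek(passphrase: bytes, salt: bytes) -> bytes:
    # KEK derived from a secret the IdP never sees (assumption: a user passphrase)
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 200_000)

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # HMAC-SHA256 in counter mode as a toy stream cipher
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def wrap_key(kek: bytes, private_key: bytes) -> bytes:
    # toy XOR wrap; use AES-KW or AES-GCM in a real system
    nonce = secrets.token_bytes(16)
    stream = _keystream(kek, nonce, len(private_key))
    return nonce + bytes(a ^ b for a, b in zip(private_key, stream))

def unwrap_key(kek: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    stream = _keystream(kek, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, stream))
```

With this layering, what the IdP can decrypt is only the wrapped blob; recovering the actual private key still requires the KEK held client-side.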


Applications – How can a stationary process be derived from a self-similar process in a practical scenario?

I know that, theoretically, if $X(t)$ is an $H$-self-similar process, then we can apply a time deformation to $X(t)$ to obtain a stationary process $Y(t) = e^{-tH} X(e^t)$ (the Lamperti transformation).

My question is about practice: if I have observations $X_1, \ldots, X_n$, how do I turn the series into a stationary one? Common sense says to "smooth" the series $X$ and interpolate at the deformed times to obtain $Y$, but exactly which smoothing and interpolation techniques should I apply here?
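For concreteness, here is a minimal sketch of the "deform and interpolate" idea in Python/NumPy: put the samples on the log-time axis, interpolate linearly (the simplest choice; splines or local regression would be the obvious upgrades), and rescale by $e^{-tH}$. The function name and grid choices are mine:

```python
import numpy as np

def lamperti_stationary(t_obs, x_obs, H, n_out=200):
    """Map samples of an H-self-similar X(t), t > 0, onto the deformed
    time axis t' = log t and return Y(t') = exp(-H t') * X(exp(t'))."""
    t_obs = np.asarray(t_obs, dtype=float)
    x_obs = np.asarray(x_obs, dtype=float)
    tp = np.log(t_obs)                             # deformed (log) time, non-uniform
    tp_uniform = np.linspace(tp[0], tp[-1], n_out) # uniform grid in deformed time
    x_interp = np.interp(tp_uniform, tp, x_obs)    # linear interpolation of X
    return tp_uniform, np.exp(-H * tp_uniform) * x_interp
```

As a sanity check, the deterministic self-similar "process" $X(t) = t^H$ should map to the constant $Y \equiv 1$, up to interpolation error.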

Suggestions and related articles are greatly appreciated.

How can I clip my sprite with OpenGL in a 2D scenario on the way through a portal?

This is a very simple question, but after a quick search and review of "similar questions" I could not find an answer.

I'm creating a 2D game with OpenGL and have run into a problem. Let's say I want to create portals: an entity can move into portal A and come out of the corresponding portal B. Transforming the entity's coordinates and velocity is easy, but on its own this results in a "hard" texture teleport.
That is, while half of the entity is still inside portal A, nothing is rendered on the corresponding B side.
How would I make the transition smooth?

My first idea: I define "exit" and "entry" zones that carry the transformation (a matrix). When an entity enters the exit zone, it is also drawn at the entry zone, using the transformation matrix between the two zones. But drawing it twice still leaves me with the problem of clipping the object at the "back" of the portal (the entity should disappear as it passes behind portal A).

My second thought was that there should be a solution using (vertex?) shaders. But what would such a shader look like? (Sample code would be nice but isn't necessary; I'd rather have an answer describing what the shader would do.) Would I pass uniforms (the portal planes) against which the shader clips/transforms ("moves") the vertices?

What does a typical solution to this clipping look like? Or where could I read up on this class of problems? I only found a couple of 3D approaches that render portal B's view inside portal A, but that doesn't solve the problem of moving objects, I think.
In my opinion, changing the draw order, or rendering something over the back of the portals, is not correct either, as it can lead to strange behavior in certain situations. (For example, portal B directly behind portal A — I think one of them would be overdrawn.)
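On the shader question, the usual trick is a plane test: write a signed distance to the portal plane into gl_ClipDistance[0] in the vertex shader (with GL_CLIP_DISTANCE0 enabled), or `discard` in the fragment shader when the distance is negative; for the second, transformed copy of the sprite, test against the other portal's plane with the normal flipped. The math the shader would perform is just this (sketched here in Python; names are mine):

```python
def clip_distance(p, plane_point, plane_normal):
    """Signed distance from point p to the portal plane.
    A vertex/fragment with a negative distance lies 'behind' the portal
    and should be clipped (gl_ClipDistance < 0 in GLSL terms)."""
    return sum((pc - qc) * nc for pc, qc, nc in zip(p, plane_point, plane_normal))

# portal A: a vertical line at x = 5, visible side facing -x
portal_pos = (5.0, 0.0)
normal = (-1.0, 0.0)
print(clip_distance((4.0, 0.0), portal_pos, normal))  # 1.0  -> kept
print(clip_distance((6.0, 0.0), portal_pos, normal))  # -1.0 -> clipped
```

Because each sprite copy only needs one half-plane test, this also sidesteps the draw-order worries: geometry behind either portal plane simply never produces fragments.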

ux designer – Context Scenario

I started studying Interaction Design at university this year.
We were asked to answer the following questions:

Discuss how context scenarios are developed and how they are used in the design process.

I tried to Google this, but the only results I found were about "scenarios" in general.

I could not find anything specific to context scenarios.

For example, one article gave a general overview of "scenarios" and briefly mentioned various types such as context scenarios and key path scenarios, but nothing in detail.

Can you please help me and perhaps point out some useful resources that could help me answer the above question?

Thank you for any help.

Unity – Debug this spherecasting scenario

I currently have a problem with my custom character controller. I have code that moves a GameObject's transform.position.y to simulate gravity, and the ground is detected with a SphereCast, cast downward from the middle of the body. The ground is a big cube.

If I position the character very high above the cube and then press Play, the character falls and the SphereCast detects a hit (Scenario 1). But if I position it still high above the cube, just not as high as in the first case, the SphereCast does not detect a hit (Scenario 2). So it only registers a hit if I start really high. And this only happens on the first landing: I have a jump function, and if I run Scenario 1 (impact) and then jump, the landing after the jump is detected by the SphereCast, even though the jump reaches the same height as in Scenario 2.

My next step was to add a breakpoint in the SphereCast hit code and play Scenario 2 frame by frame. The thing is, when stepped frame by frame, Scenario 2 is detected by the SphereCast, so I do not know how to debug this.


  • SphereCast hits the ground when the character starts very high above it and I press Play (Scenario 1)
  • SphereCast does not hit the ground when the character starts moderately high above it and I press Play (Scenario 2)
  • When Scenario 2 is stepped frame by frame, the SphereCast does hit
  • I need ideas for debugging
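One failure mode consistent with "works frame by frame, fails in real time" is that the per-frame fall step exceeds the cast distance, so the sphere skips past (or into) the ground in a single frame; the first frames after pressing Play often have an unusually large Time.deltaTime. This is only a hypothesis, but the geometry is easy to model (plain Python, my own names):

```python
def ground_detected(height: float, fall_speed: float, dt: float,
                    cast_distance: float) -> bool:
    """One gravity step followed by a downward cast of length cast_distance.
    If the per-frame step is too large, the sphere ends up below the surface
    and the downward cast never sees the ground (tunneling)."""
    new_height = height - fall_speed * dt
    if new_height < 0.0:
        return False          # passed through the ground: cast starts underneath
    return new_height <= cast_distance
```

If this is the cause, the usual fixes are clamping deltaTime, casting with a length of at least `fall_speed * dt`, or using the cast's hit distance to snap the character onto the surface.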

Locking – Why does table lock escalation happen in my scenario?

I have a Table1 table that is updated with the following query in small blocks:

update top (1000) Table1
    set VarcharColumn1 = 'SomeValue'
from Table1
where ID in (select ID from Table2)
      and VarcharColumn1 is NULL

More details:

Table2 contains 90,000 rows, and the total number of rows that must be updated in Table1 is also 90,000 (a 1:1 relationship).

Whenever rows in Table1 are updated, a trigger on Table1 inserts the rows, as they were before the update, into Table1History.
So when I update 1000 rows in Table1, 1000 rows are inserted into Table1History.


When I update the top 100 rows, there is no table lock escalation.

I monitor this with the Extended Events lock_escalation event, and also in Performance Monitor: SQLServer:Access Methods – Table Lock Escalations/sec.

When I update the top 1000 or 500 rows, table lock escalation occurs on Table1.

So I wonder: what is the mechanism or formula that SQL Server uses to escalate locking to the table level?

Google says that 5000 locks is the threshold, but in my case, updating 1000 or even 500 rows is apparently sufficient to trigger escalation on Table1.
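For intuition: the documented trigger is roughly that escalation is attempted when a single statement holds about 5,000 locks on one table or index (or under lock-memory pressure), and that count includes locks taken on rows the scan examined but did not qualify. If `VarcharColumn1 is NULL` has no supporting index (my assumption), finding 1,000 qualifying rows can mean taking update locks on far more than 5,000 examined rows. A toy model of the documented rule:

```python
def escalation_attempted(locks_on_object: int,
                         lock_memory_pressure: bool = False) -> bool:
    # SQL Server attempts lock escalation when a single statement holds
    # roughly 5000 locks on one table/index, or under lock-memory pressure.
    # Simplification: the real engine re-attempts every ~1250 new locks.
    return locks_on_object >= 5000 or lock_memory_pressure

# Updating "top (1000)" rows can hold far more than 1000 locks on Table1:
# the scan locks rows it examines while searching for NULLs, and key locks
# on secondary indexes are counted toward the same statement.
```

Comparing the lock_acquired counts per statement (rather than rows updated) against this threshold should reconcile your observations.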

object-oriented – Sequence diagram for the main scenario of the use case "register company"

(image: sequence diagram for the main scenario of the "register company" use case)

Based on the sequence diagram above, judge the following statements:

I. The internal actor is the "company", responsible for providing the business registration data.

II. The class "Register Company" is used as the initial interface. For the registration of companies, the objects "Branch of Activity" and "Zip Code" are already registered.

III. For each object represented, a lifeline begins, and when the object starts to interact, the focus of control is used, with its messages numbered and ordered.

IV. After the operations have been performed, the receiving object "company" sends a response, shown as a dashed line, to the sending interface "Form_Cadastrar Empresa".

The correct statements are:


For example, suppose you work for a software company on a system with limited memory and are assigned a task. You complete the task with a recursive function, then realize you could use tail recursion instead, and then realize that the tail recursion can be converted to a loop. Now the question arises: which of the following techniques is best for the above scenario, and why?

1. Recursive function
2. Tail recursion
3. Loop
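For concreteness, the three variants for a factorial-style task look like this in Python (my own example; note that CPython does not perform tail-call elimination, so on a memory-limited system only the loop is guaranteed O(1) stack):

```python
def fact_recursive(n: int) -> int:
    # plain recursion: O(n) stack frames, work done after the recursive call
    return 1 if n <= 1 else n * fact_recursive(n - 1)

def fact_tail(n: int, acc: int = 1) -> int:
    # tail-recursive form: the recursive call is the last action; a language
    # with tail-call elimination would run this in O(1) stack, CPython does not
    return acc if n <= 1 else fact_tail(n - 1, acc * n)

def fact_loop(n: int) -> int:
    # iterative form: O(1) extra memory regardless of language
    acc = 1
    for i in range(2, n + 1):
        acc *= i
    return acc
```

All three compute the same value; they differ only in memory behavior, which is exactly what the scenario asks you to weigh.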

Complexity Theory – The relationship between matrix inversion, the HHL algorithm, and the unlikely scenario $BQP = PSPACE$

I am studying the quantum algorithm presented in the paper Quantum algorithm for linear systems of equations.

Without going through all the details: the HHL algorithm can apply the inverse of a matrix to a normalized vector prepared as a quantum state in time complexity $\tilde{O}(\log(N)\, s^2 \kappa^2 / \epsilon)$. To solve $A \lvert x \rangle = \lvert b \rangle$, it computes an estimate of $\lvert x \rangle = A^{-1} \lvert b \rangle$, where

$N$ is the dimension of the matrix,

$s$ is the sparsity of the matrix,

$\kappa$ is the condition number of the matrix,

$\epsilon$ is the desired error bound.

In an argument for the optimality of the algorithm, the authors construct a reduction from a general quantum circuit to a matrix inversion problem, with a proof (page 4).

Here I am confused, the authors write:

The reduction from a general quantum circuit to a matrix inversion problem also implies that our algorithm cannot be substantially improved (under standard assumptions). If the runtime could be made polylogarithmic in $\kappa$, then any
problem solvable on $n$ qubits could be solved in poly($n$) time (i.e. $BQP = PSPACE$), a highly unlikely possibility.

Why does this imply $BQP = PSPACE$? All insights are very much appreciated.
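My own attempted reconstruction of the parameter counting (the details are in the paper's reduction on page 4): the reduction encodes a quantum circuit with $T$ gates into a matrix inversion instance whose condition number $\kappa$ grows polynomially with $T$. A $PSPACE$ computation on $n$ bits may run for $T = 2^{O(n)}$ steps, so

$$\log \kappa = O(\log T) = O(n), \qquad \mathrm{poly}(\log N, \log \kappa) = \mathrm{poly}(n).$$

A matrix inversion algorithm polylogarithmic in $\kappa$ would therefore simulate such a circuit in $\mathrm{poly}(n)$ time, putting $PSPACE$ inside $BQP$; since $BQP \subseteq PSPACE$ is already known, the two classes would coincide. Is this the right way to read it?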