operating systems – How many different values can a shared variable take in concurrent computing?


Is there a way to determine the number of different values that a shared variable can take in concurrent computing, in general, without listing all the possibilities and counting the distinct ones (as done in the two examples below)?

I know that it is possible to count the number of ways in which the atomic operation(s) of n different threads can be interleaved, and also to find the resulting values. For example,

  1. The number of ways in which the statement(s) of the two threads can be interleaved is 6, and the number of values that shared variable $B$ can take is 3: 85, 75, and 110. (Note that despite its unitary appearance, a program statement such as READ B or WRITE B is actually composite.)
    [image] Image taken from Principles of Computer System Design: An Introduction, Part 2, by Saltzer and Kaashoek
  2. The number of different values shared variable c can take in this concurrent execution of two threads is 5. Since the Thread 1 code has conditionals, I am not sure about the number of ways in which the operation(s) of the two threads can be interleaved. Is the value equal to $\frac{4!}{2!2!}$?

operating systems – Factors that make threads interleave nondeterministically

While reading
Cui, Heming; Wu, Jingyue; Tsai, Chia-che; Yang, Junfeng (2010). “Stable Deterministic Multithreading through Schedule Memoization,” pp. 207–221.
I came across the following statement:

“Two main factors make threads interleave nondeterministically. The first is scheduling (…) The second is input” (emphasis mine)

Are there other factors? Or, to put it differently, what conditions need to be satisfied in order for threads to interleave deterministically?

As an example, let’s assume I use threads and my program does not take any input. Also assume that my operating system provides a way to create a confined environment in which I can reserve a specific number of processors just for myself. I then create exactly as many threads as the number of processors I reserved and start my computation. Importantly, my threads do not need to be scheduled, since each can be assigned to a dedicated processor, and they do not need to be preempted, since this is the only computation running in my confined environment. Would I get determinism in this case?

If so, is it correct to say that I have (let me put it this way) eliminated race conditions but not data races? In other words, my code may still contain a data race, but if it manifests on my machine, it will always manifest in the same deterministic way. Is that correct?

Also, if so, why do operating systems not provide such a confined environment? As far as I can see, it could be really helpful for writing parallel (as opposed to concurrent) programs.

type systems – What are the problems of subtyping?

I’ve often heard that subtyping breaks some important and useful properties: many nice innovations developed by programming language researchers can’t be brought to Java or C++ because of subtyping. They say that the language Rust avoided subtyping for this reason.

Is such a claim correct?

What are some cool things that cannot be applied to languages with subtyping?

Is any language offering subtyping completely cursed and incompatible with many cool features? Or are only the pieces of code that use subtyping incompatible?

Could you try to explain what this means to someone coming from C++ with little theoretical knowledge?

I searched for explanations and found:

tls – Should I redirect HTTP requests to HTTPS from my system’s application, from DNS, or from something else?

I want to create a web server that redirects HTTP to HTTPS. What is the simplest method that is also secure? Should DNS handle this? (For example, Route53.)

I used to do this with my app built on Node/Express, but now that I am using a compiled language, I want to be able to do this by hand instead of relying on a framework.

If I configure DNS to redirect HTTP to HTTPS, is that more secure than doing it in the server program? (My thinking here is that since the server never sends a response, the potential attacker’s request never arrives at it and thus there is no message to receive.)

Would it matter which DNS does this? (For example, if you purchased your domain from domain.com but your server is on AWS linked through Route53?)

calculus – A problem about systems of linear equations (very basic).

Wheat flour is packaged in a supermarket in bags of 2 kg, 5 kg and 12 kg. In September, 250 bags were used and 5500 kg of wheat flour were packed. In October, due to problems with the 12 kg bags, 50 more bags of 2 kg and 5 kg were used, so only 4250 kg of flour were packaged.

a) How many bags of each type were used in September?

b) What percentage of the total flour packaged in September and October combined was packed in 2 kg bags?


I do not know how to translate each sentence into an equation and assemble the system of equations.

1- “In a supermarket, wheat flour is packaged in bags of 2 kg, 5 kg and 12 kg.” I guess it is $2x + 5y + 12z = 0$, but I’m not sure.

2- “In the month of September, 250 bags were used and 5500 kg of wheat flour were packed.” I guess it is $250 = 5500$, but I’m not sure.

3- “In October, due to problems with the 12 kg bags, 50 more 2 kg and 5 kg bags were used, so only 4250 kg of flour were packed.” I guess it is $100x + 250y = 4250$, but I’m not sure.

Please help me, thank you.
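As a sketch of the translation (one equation per stated fact; the October reading below, namely that no 12 kg bags were used and that each of the two smaller sizes gained 50 bags, is an assumption, since the wording is ambiguous): let $x$, $y$, $z$ be the numbers of 2 kg, 5 kg and 12 kg bags used in September. Then

$$\begin{aligned}
x + y + z &= 250 && \text{(September: 250 bags were used)}\\
2x + 5y + 12z &= 5500 && \text{(September: 5500 kg were packed)}\\
2(x+50) + 5(y+50) &= 4250 && \text{(October: assumed reading)}
\end{aligned}$$

The key point is that bag counts and kilograms must not be mixed: “$250 = 5500$” equates a number of bags with a mass, and nothing in the first sentence says any total is 0, so “$2x + 5y + 12z = 0$” has no sentence backing it. (As a sanity check, the given numbers look inconsistent: 250 bags of at most 12 kg each hold at most 3000 kg, not 5500 kg, so one figure in the statement is likely a typo; the translation method is unaffected.)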

lo.logic – Are there “typical” formal systems that have mutual consistency proofs? How long a chain of these can we build?

Sufficiently powerful theories (Peano arithmetic, ZFC, and so on – this question came from thinking about Coq) can’t prove their own consistency. However, are there cases of two theories $A$ and $B$ where $A$ proves $B$ is consistent and $B$ proves $A$ is consistent? (To make up a potential example: “Peano arithmetic proves ZFC is consistent, and ZFC proves Peano arithmetic is consistent.”) If so, are there long chains of these sorts of proofs we can build, so that, if any of the $k$ theories were inconsistent, all of them would be?

(The context here is idle curiosity about whether we can get in-practice reassurance about our theories by noting that many separate systems would need to have “bugs” at once.)

operating systems – Can the sandboxing technique prevent a buffer overflow attack?

Buffer overflow attack: sample authorization code

void A(void) {
  int authorized;
  char name[128];
  authorized = check_credentials(...); /* the attacker is not authorized, so this returns 0 */
  printf("What is your name?\n");
  gets(name);                          /* unbounded read: can overflow name */
  if (authorized != 0) {
      printf("Welcome %s, here is all our secret data\n", name);
      /* ... show secret data ... */
  } else
      printf("Sorry %s, but you are not authorized.\n", name);
}

The code is meant to perform an authorization check: only users with the right credentials are allowed to see the top-secret data. The function check_credentials is not a function from the C library, but we assume that it exists somewhere in the program and does not contain any errors. Now suppose the attacker types in 129 characters. As in the previous case, the buffer will overflow, but this time it will not modify the return address. Instead, the attacker has modified the value of the authorized variable, giving it a value that is not 0. The program does not crash and does not execute any attacker code, but it leaks the secret information to an unauthorized user.

=> Can the sandboxing technique prevent this attack? If so, how?