proof of work – Byzantine Fault Tolerance: The Byzantine Generals Problem

Maybe someone can answer this simple question for me.

They say there is no solution for three generals in the presence of one traitor (using oral messages), so why does the first scenario, in which the Commander is loyal, seem to satisfy the conditions mentioned?

There are two Interactive Consistency conditions mentioned in the published article.

IC1: All loyal lieutenants obey the same order.

IC2: If the Commanding General is loyal, then every loyal lieutenant obeys the order he sends.

Now, in the situation where Lieutenant 1 and the Commander are loyal, and all loyal lieutenants have to obey the Commander’s order, both conditions IC1 and IC2 seem to be satisfied, albeit by chance or dumb luck.

(Note: This is different from the earlier scenario in the article, where the generals get to vote and the majority vote wins. In this case, all loyal lieutenants have to follow the orders given.)

Did Lieutenant 2 (the traitor) succeed in confounding Lieutenant 1? Yes, most likely, but that didn’t appear to stop the conditions from being met.
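The catch can be seen concretely: Lieutenant 1’s view in the loyal-commander case is identical to his view in a traitorous-commander case, so the rule that satisfies IC1/IC2 in one case breaks IC1 in the other. A minimal sketch (the decision rule and message values here are illustrative, not the paper’s exact OM algorithm):

```python
# Toy model of the three-general, one-traitor oral-message scenario.
# The decision rule ("always obey the commander's direct order") is
# the rule that makes IC1/IC2 hold in the loyal-commander case.
def decide(from_commander, relayed_by_other):
    return from_commander  # obey the commander's direct order

# Case 1: loyal commander orders ATTACK; Lieutenant 2 (the traitor)
# relays "he said RETREAT" to Lieutenant 1.
l1_case1 = decide("ATTACK", "RETREAT")
# IC2 holds (L1 obeys ATTACK) and IC1 holds trivially, since L1 is
# the only loyal lieutenant.

# Case 2: the commander is the traitor: he sends ATTACK to L1 and
# RETREAT to L2, and both loyal lieutenants relay honestly. L1's view
# (direct order ATTACK, relayed RETREAT) is IDENTICAL to Case 1, so
# the same rule must produce the same decision.
l1_case2 = decide("ATTACK", "RETREAT")
l2_case2 = decide("RETREAT", "ATTACK")

print(l1_case1)              # ATTACK
print(l1_case2, l2_case2)    # ATTACK RETREAT: IC1 violated
```

So the loyal-commander scenario satisfying IC1/IC2 is not luck; it is exactly what the rule is designed to do. The impossibility result says no rule can also survive the indistinguishable traitorous-commander case.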

Also, not to sound disrespectful of the authors, because they make great points, but it appears that certain parts are ambiguous and rather vague at times. I think this might be what Richard Feynman was referring to back in the day.

Did anyone else notice this?

numerics – Constraints and Tolerance

So I’m running NMaximize to optimize a value, and for the constraints I need the parameters to belong to a discrete set of elements. Basically, the constraint looks like this:

And @@ Table[{Subscript[x, i], Subscript[y, i]} \[Element] Table[Subscript[e, i, n], {i, 1, n}], {i, 1, k}]

, where {Subscript[x, i], Subscript[y, i]} are my parameters, satisfying the constraint that they should belong to Table[Subscript[ex, i, n], {i, 1, n}].

But NMaximize does not treat this as a constraint. So I changed the constraint to the convex hull of Table[Subscript[ex, i, n], {i, 1, n}], with an additional constraint to pick out the extreme points, i.e. the vertices. Now my code looks like:

And @@ Table[{Subscript[x, i], Subscript[y, i]} \[Element] 
   ConvexHullMesh[Table[Subscript[ex, i, n], {i, 1, n}]] && 
  Subscript[z, i] == 1/2 && 
  Subscript[x, i]^2 + Subscript[y, i]^2 == Subscript[r, n]/2, {i, 1, k}]

But when I run this, it outputs this error:

Obtained solution does not satisfy the following constraints within
Tolerance -> 0.001`

What do I do?


Subscript[r, n_] := Sqrt[Sec[Pi/n]];
Subscript[w, i_, n_] := {Subscript[r, n] Cos[2 Pi i/n], 
   Subscript[r, n] Sin[2 Pi i/n], 1};
Subscript[e, i_, n_] := 
  1/2 {Subscript[r, n] Cos[(2 i - 1) Pi/n], 
    Subscript[r, n] Sin[(2 i - 1) Pi/n], 1};
Subscript[ex, i_, n_] := 
  1/2 {Subscript[r, n] Cos[(2 i - 1) Pi/n], 
    Subscript[r, n] Sin[(2 i - 1) Pi/n]};
u = {0, 0, 1};
f = (u - #) &;

Factors = Times @@@ Subsets[Transpose@Tuples[{1, -1}, 3], {1, 3}];
(* Rearrange the numbers in the RHS to obtain different … *)
Factors[[{1, 2, 3, 4, 5, 6, 7}]] = Factors[[{1, 2, 3, 4, 5, 6, 7}]];
Factors = Transpose[Factors];
Vec[j_] := {Subscript[x, j], Subscript[y, j], Subscript[z, j]};
AllParameters[k_] := 
  Flatten[Table[{Subscript[x, i], Subscript[y, i], 
     Subscript[z, i]}, {i, 1, k}]];
AllConstraints[n_, k_] := 
  And @@ Table[{Subscript[x, i], Subscript[y, i]} \[Element] 
      ConvexHullMesh[Table[Subscript[ex, i, n], {i, 1, n}]] && 
     Subscript[z, i] == 1/2 && 
     Subscript[x, i]^2 + Subscript[y, i]^2 == Subscript[r, n]/2, {i, 
     1, k}];

GPT[n_, k_] := Module[{ro, co, ve, i},
  FunFactor = Factors[[1 ;; 8, 1 ;; k]] /. {1 -> Identity, -1 -> f};
  vec = Table[Subscript[v, i], {i, 1, k}] /. {Subscript[v, j_] -> Vec[j]};
  vecs = Table[
    Total[Table[FunFactor[[ro, co]][vec[[co]]], {co, 1, k}]], {ro, 1, 8}];
  max = Total[
    Table[Total[
      Map[vecs[[ve]].# &, Table[Subscript[w, i, n], {i, 1, n}]]], {ve, 
      1, 8}]];
  {time, out} = 
   Timing[NMaximize[{max, AllConstraints[n, k]}, AllParameters[k], 
     Method -> "NelderMead"]];
  Print[out, out[[1]]/(k 8), "   ", time];
  {time, out} = 
   Timing[NMaximize[{max, AllConstraints[n, k]}, AllParameters[k], 
     Method -> "DifferentialEvolution"]];
  Print[out, out[[1]]/(k 8), "   ", time];
  {time, out} = 
   Timing[NMaximize[{max, AllConstraints[n, k]}, AllParameters[k], 
     Method -> "SimulatedAnnealing"]];
  Print[out, out[[1]]/(k 8), "   ", time];]

GPT[4, 7]
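For what it’s worth, a generic workaround when a numeric optimizer won’t honor membership in a finite set is to drop the Element constraint and instead penalize the distance to the nearest allowed point. A sketch of the penalty idea in Python (illustrative only, not a fix for the specific code above; the same trick works in NMaximize by subtracting a weighted penalty from the objective):

```python
def discrete_penalty(x, allowed):
    # Distance from x to the nearest point of the allowed discrete
    # set; zero exactly when x is a member of the set.
    return min(abs(x - a) for a in allowed)

allowed = [0.5, 1.5, 2.5, 3.5]
print(discrete_penalty(1.5, allowed))  # 0.0: feasible
print(discrete_penalty(2.0, allowed))  # 0.5: infeasible, penalized

# Subtracting weight * discrete_penalty(x, allowed) from the objective
# (for a large weight) drives the maximizer toward the allowed set.
```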

embedded – ROCKSDB fault tolerance

RocksDB writes a put operation to an on-disk write-ahead log (WAL) as well as to memory. Given that RocksDB saves write operations to a WAL, does it guarantee fault tolerance out of the box? If the process that embeds RocksDB dies, can I assume that no write/update operation will be lost and that the DB will be in a consistent state once the process comes back up? Can it recover to a consistent state from the WAL?

Otherwise, what is the best way to achieve fault tolerance in a RocksDB database?
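For intuition, here is a toy sketch of why a write-ahead log enables crash recovery (this is NOT the RocksDB API; the class and file names are made up): every update is appended and fsynced to the log before being applied in memory, so a restart can rebuild the in-memory state by replaying the log.

```python
import json
import os
import tempfile

class ToyWAL:
    """Toy key-value store with write-ahead logging (illustrative only)."""

    def __init__(self, path):
        self.path = path
        self.mem = {}       # stand-in for the memtable
        self._replay()

    def _replay(self):
        # On startup, rebuild in-memory state from the log.
        if os.path.exists(self.path):
            with open(self.path) as f:
                for line in f:
                    k, v = json.loads(line)
                    self.mem[k] = v

    def put(self, k, v):
        # Append to the log BEFORE updating memory, and force it to disk
        # (analogous to syncing the WAL on every write).
        with open(self.path, "a") as f:
            f.write(json.dumps([k, v]) + "\n")
            f.flush()
            os.fsync(f.fileno())
        self.mem[k] = v

path = os.path.join(tempfile.mkdtemp(), "wal.log")
db = ToyWAL(path)
db.put("a", 1)
db.put("a", 2)
del db                  # discard the in-memory object (stand-in for a crash)

db2 = ToyWAL(path)      # "restart": state is recovered from the WAL
print(db2.mem["a"])     # -> 2
```

In real RocksDB the durability of the very latest writes additionally depends on whether the WAL is synced to disk on each write or only buffered by the OS.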

Closest number between two lists within a given tolerance

If I have two large lists with numbers, such as:

list1 = {number1, number2, number3, ...}
list2 = {number1, number2, number3, ...}


How can I find the number or numbers that are the closest between the two lists, within a given tolerance?

As an example with two short lists: if I have list1 = {1, 2, 3, 4, 5} and list2 = {5.5, 6, 15, 20, 30} and my tolerance is 0.5, then the numbers would be 5 (from list1) and 5.5 (from list2). If my tolerance were 1, then the numbers would be 5 (from list1) and 5.5 and 6 (from list2), and so on.
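(In the Wolfram Language, Nearest can do this kind of radius query directly.) The underlying idea, sketched in Python with a sort plus binary search so it stays cheap for large lists:

```python
import bisect

def close_pairs(list1, list2, tol):
    # For each x in list1, binary-search the sorted copy of list2 for
    # all values within [x - tol, x + tol].
    s2 = sorted(list2)
    pairs = []
    for x in list1:
        lo = bisect.bisect_left(s2, x - tol)
        hi = bisect.bisect_right(s2, x + tol)
        for y in s2[lo:hi]:
            pairs.append((x, y))
    return pairs

print(close_pairs([1, 2, 3, 4, 5], [5.5, 6, 15, 20, 30], 0.5))
# -> [(5, 5.5)]
print(close_pairs([1, 2, 3, 4, 5], [5.5, 6, 15, 20, 30], 1))
# -> [(5, 5.5), (5, 6)]
```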

mathematical optimization – NMinimize: How to avoid solutions that do not satisfy constraints within a certain tolerance?

Here’s how you can do it: add some slack into the constraints and punish the slack in the objective.


(* the function you're trying to minimize *)
objective = ((e*(1 - Sqrt[(g - e)^2 + (f - h)^2]) + (g - e)*(1 - 
          Sqrt[f^2 + e^2])) + (h*(1 - 
          Sqrt[(g - e)^2 + (f - h)^2]) + (f - h)*(1 - 
          Sqrt[g^2 + h^2])))/((g + f)*
     Max[1 - Sqrt[(g - e)^2 + (f - h)^2], 1 - Sqrt[g^2 + h^2]]);

(* these are the hard constraints *)
constraints = {
   0 <= e <= 1,
   0 <= f <= 1,
   e^2 + f^2 == 1,
   e <= g <= 1,
   0 <= h <= f,
   Sqrt[(g - e)^2 + (f - h)^2] <= 1,
   g^2 + h^2 <= 1};

(* these constraints are softer and allow for a bit of slack *)
slackedConstraints = {
   0 - se <= e <= 1 + se,
   0 - sf <= f <= 1 + sf,
   -sef1 < e^2 + f^2 - 1 < sef1,
   e <= g <= 1,
   0 - sh <= h <= f + sh,
   Sqrt[(g - e)^2 + (f - h)^2] <= 1,
   g^2 + h^2 - 1 <= 0};

variables = {e, f, g, h};
slackterms = {se, sf, sh, sef1};

(* solve it and harshly punish too much total squared slack *)
sol = Last[
  NMinimize[{objective + 10^10*Total[slackterms^2], 
    slackedConstraints}, Join[variables, slackterms]]]

(* {e -> 0.25283, f -> 0.967511, g -> 0.944242, h -> 0.329154, 
    se -> 4.51664*10^-14, sf -> -2.52757*10^-13, sh -> 3.93093*10^-14, 
    sef1 -> 1.92914*10^-7} *)

objective /. sol
(* result: 0.304607 *)

(* Substitute back into the hard constraints to check if any violated *)
constraints /. sol
(* {True, True, False, True, True, True, True} *)

(* hard constraint #3 is violated, but only by a tiny amount: *)
e^2 + f^2 /. sol
(* result 1. *)

replication – Fault tolerance for Database sharding and Database partitioning

I’m aware that database sharding splits a dataset horizontally across multiple database instances, whereas database partitioning uses a single instance.

In database sharding, what if one of the databases crashes? We would lose that part of the data completely; we wouldn’t be able to read from or write to it. I’m assuming we keep a replica of each of the databases we have sharded? Is there any better approach? That would be too expensive, I believe, if we have many database instances.

In database partitioning, we could create a replica of the main database (that would be just one replica), since partitioning splits the dataset within the same database.

One last question: why would we go for a master-slave approach? Do the slaves hold the complete data, or is the data partitioned among the slaves? I believe the master database has the complete data, but I’m not sure about the slaves. If the data is partitioned among the slaves, how would fault tolerance work? Would reads just fall back to the master database?

I know these are a lot of questions. Could you please help me? I’m interested in this topic, which is why I have so many questions that I haven’t been able to grasp.
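On the first question: per-shard replication is indeed the usual answer, and the cost is bounded, because each node stores only its own shard plus a replica of a neighbor’s shard, not a full copy of everything. A toy sketch of such a placement scheme (node names and hash choice are arbitrary):

```python
import hashlib

NODES = ["db0", "db1", "db2", "db3"]

def placement(key):
    # Hash the key to a primary shard; put its replica on the next
    # node in the ring, so losing any single node loses no data.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    primary = h % len(NODES)
    replica = (primary + 1) % len(NODES)
    return NODES[primary], NODES[replica]

p, r = placement("user:42")
print(p, r)   # primary and replica are always different nodes
```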

Setting the tolerance for `Equal[]`

I would like to control how Equal[] works and allow a certain error tolerance. I would like numerical values that are, say, within one-thousandth of one another to be considered equal.

For example, I would like

1.001 == 1.00

to return True. I’ve tried the following, but I’m worried this might not be possible.

SetPrecision[1.0000 == 1.0001, 2]
(* False *) 

SetAccuracy[1.0000 == 1.0001, 2]
(* False *) 

Block[{Internal`$EqualTolerance = Log10[2.^28]},
  Print[1.0001 == 1.000]]
(* False *) 
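For comparison, the behaviour being asked for (an equality test with an explicit absolute tolerance) sketched in Python via math.isclose; this is an analogy to clarify the goal, not a Mathematica solution:

```python
import math

def approx_equal(a, b, tol=1e-3):
    # True when |a - b| <= tol (pure absolute tolerance, no relative part)
    return math.isclose(a, b, rel_tol=0.0, abs_tol=tol)

print(approx_equal(1.001, 1.00))     # True: within one-thousandth
print(approx_equal(1.0000, 1.0001))  # True
print(approx_equal(1.0, 1.01))       # False
```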

Networking – What kind of solution tracks dependencies for fault tolerance?

I'm looking for the noun that describes a methodology, organizational method, mindset, or category of tool that tracks dependencies for capacity planning and fault tolerance.

Ideally, it would have an inventory function that I could use when replacing a network card, tracking the exchange through HP support.

Is there anything that will help me track or organize this?