protocol – Why is the full blockchain required, effectively forever?

I do understand that every block is validated in terms of the blocks before it by way of the previous block’s hash, all the way back to the genesis block. However, could the protocol be modified so that the blockchain is a sliding window over the most recent N blocks? How likely is it that some block from a year ago later turns out to be invalid?
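To make what I mean by “validated by way of the previous block’s hash” concrete, here is a toy sketch (in Python, with a made-up header format, nothing like Bitcoin’s real serialization) of how a node holding only a window of recent headers could still check that the window is internally consistent, given one trusted hash from just before the window:

import hashlib

def header_hash(prev_hash: bytes, payload: bytes) -> bytes:
    # Toy "header": the previous block's hash concatenated with this block's payload.
    return hashlib.sha256(prev_hash + payload).digest()

def verify_window(trusted_prev_hash, window):
    """window: list of (payload, claimed_hash) pairs, oldest first."""
    prev = trusted_prev_hash
    for payload, claimed in window:
        if header_hash(prev, payload) != claimed:
            return False  # the chain of hashes is broken somewhere inside the window
        prev = claimed
    return True

The catch, presumably, is where the trusted hash at the window’s edge comes from in the first place.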

hash – Since GPUs have gigabytes of memory, does Argon2id need to use gigabytes of memory as well in order to effectively thwart GPU cracking?

The common advice of benchmarking a password hashing algorithm and choosing the slowest acceptable cost factor doesn’t work for algorithms with more than one parameter: adding a lot of iterations at the expense of memory hardness makes the benchmark time go up, but if the attacker can still use off-the-shelf hardware (like a regular gaming graphics card), then I might as well have used PBKDF2 instead of Argon2id. There must be some minimum amount of memory that makes Argon2id safer than a well-configured algorithm from the 90s; otherwise we could have saved ourselves the effort of developing it.

I would run benchmarks and see for myself at what point hashing stops being faster on a GPU than on a CPU, but Hashcat lacks Argon2 support altogether. (And whatever software I pick isn’t necessarily the fastest possible implementation, so it wouldn’t give me a definitive answer anyway.)
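For the CPU half I can at least time things; a minimal sketch using the argon2-cffi Python binding (my tooling choice, any binding would do):

import time
from argon2 import PasswordHasher  # pip install argon2-cffi

# Sweep the memory parameter (in KiB) at fixed time_cost and parallelism
# to see how wall-clock time scales with memory on this CPU.
for memory_kib in (65536, 262144, 1048576):  # 64 MiB, 256 MiB, 1 GiB
    ph = PasswordHasher(time_cost=3, memory_cost=memory_kib, parallelism=4)
    start = time.perf_counter()
    ph.hash("correct horse battery staple")
    print(f"{memory_kib // 1024:>5d} MiB: {time.perf_counter() - start:.3f} s")

But that only tells me what I can afford, not what an attacker with a GPU pays, which is exactly the part I can’t measure.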

Even my old graphics card from 2008 had 1 GB of video memory, and 2020 cards commonly have 4–8 GB. Nevertheless, PHP’s default Argon2id setting uses just 64 MiB of memory.

If you set the parallelism of Argon2id to your number of CPU cores (say, 4) and use this default memory limit, is either of the following (roughly) true?

  • A GPU with 8192 MB of memory can still run 8192/64 = 128 hash instances concurrently, getting a 32× speed boost over your 4 CPU cores, because the slowdown GPUs suffer from increased memory requirements scales only linearly. The only way to effectively thwart GPU cracking is to make Argon2id use more RAM than a high-end GPU has divided by the parallelism you configure (i.e., in this example with 4 cores, Argon2id needs to be set to use 8192/4 = 2048 MB).

or

  • This amount, 64 MiB, already makes a common consumer GPU ineffective for cracking, because it is simply too big for a single GPU core to use efficiently (if the core has to reach out to the card’s main memory, the speed advantage is lost, or something along those lines).

agile – Effectively running large front-end platform with multiple teams contributing

I work at a company where we have built much of our own e-commerce platform from the ground up. We have a growing number of teams that effectively operate as stream-aligned teams based on functional areas (growing x capability, improving customer retention, etc.). To give an idea of the size of the engineering department: we have 10–15 teams, each containing a product owner, a delivery manager, a business analyst, a software developer in test, and engineers.

Our estate is made up of a number of large services built by the teams to meet the needs of the business. We have a target architecture within the business; however, the teams are given a lot of autonomy over how they work within that architecture and which tools they use. This works very effectively in the majority of cases, bar one area.

Our front-end platform, a mono-repo React application, is a large and growing project that houses the business’s front-end website. It is contributed to by a number of different stream-aligned teams who may not even be working within the same functional area: one team might be working on the “my account” feature while another works on an improved search capability.

The project is version controlled with Git and has a working pipeline with ephemeral environments for feature branches to allow testing. Our teams all operate using either Scrum or Kanban, and we do our best to invest in agile practices.

The issue we are having is that this project has no clear owner and is rapidly growing in complexity, with diverging coding standards, approaches to end-to-end testing, and so on. This is hurting our ability to release frequently and with confidence, as we now run into problems regularly.

I think Conway’s law is playing a large part in this, but it would be really interesting to hear how others have managed this problem (open-source projects could potentially be a good source of inspiration for a model we could adopt).

I’d appreciate any suggestions for patterns or practices we can implement to better manage this repo. The engineering department has grown quite rapidly over the last two years, so some mistakes have been made along the way and I’m keen to address them.
Thanks for reading my question!

How can I effectively model a property in a relational database?

I have seen two approaches to modeling properties in relational databases:

  1. Create a fixed set of tables, one per assumed component of the property: for example, tables for room, unit, floor, and building.

  2. Have a single ‘asset’ table and use a linking table to create relations between assets.

In either case, the building is represented as a general tree structure, which seems painful to query.
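For what it’s worth, here is the kind of recursive-CTE query I’d end up writing for option 2 (sketched in Python against SQLite, with a hypothetical asset/asset_link schema of my own invention):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE asset (id INTEGER PRIMARY KEY, kind TEXT, name TEXT);
    CREATE TABLE asset_link (parent_id INTEGER, child_id INTEGER);
    INSERT INTO asset VALUES (1,'building','HQ'), (2,'floor','1F'), (3,'room','101');
    INSERT INTO asset_link VALUES (1,2), (2,3);
""")

# One recursive CTE walks the whole subtree; no per-level queries in application code.
rows = conn.execute("""
    WITH RECURSIVE subtree(id) AS (
        SELECT ?
        UNION ALL
        SELECT l.child_id FROM asset_link l JOIN subtree s ON l.parent_id = s.id
    )
    SELECT a.id, a.kind, a.name FROM asset a JOIN subtree s ON a.id = s.id
""", (1,)).fetchall()
print(rows)  # [(1, 'building', 'HQ'), (2, 'floor', '1F'), (3, 'room', '101')]

Workable, but every RBAC check would need this subtree walk wrapped in an EXISTS, which is part of what worries me.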

I need to represent this tree in order to implement an RBAC system. Now I’m not so sure that a relational DB is the best solution.

Does the higher-level testing mentioned in the book Working Effectively with Legacy Code count as integration testing?

  1. Is “higher-level testing” a standard term in software engineering?

No, I don’t think so. It is just a description that distinguishes those tests from unit tests in the specific context of the book chapter where you found it.

  2. Does higher-level testing count as integration testing, since it pins down behavior for a set of classes?

To my understanding, it is the other way round: the term “integration test”, when used in opposition to “unit test”, can be seen as one kind of “higher-level” test (among other kinds).

database – How to effectively separate CRUD operations from the main SQLiteOpenHelper – Android Studio

I’m conflicted about how my database architecture should be built.
First of all, I use a singleton pattern for the database to ensure a single instance (with synchronized access, so it’s thread-safe), and so that I can get a workable database reference wherever I have a Context.

All across the application I perform many different DB operations; for example, some activities need to change only the ‘Meals’ table, while others need to change both ‘Meals’ and ‘MealFoods’.

For each of these tables I’ve built a helper class, to keep that table’s CRUD operations separate from the DatabaseManager class (which extends SQLiteOpenHelper). This, of course, is for the sake of simplicity and cleaner code.

First approach:

This approach keeps all the helper classes inside the DatabaseManager.

DatabaseManager.java:

public class DatabaseManager extends SQLiteOpenHelper {
    private static final String dbName = "logs.db";
    private static final int dbVersion = 1;

    private final Context context;
    public MealsDBHelper mealsDBHelper;
    public MealFoodsDBHelper mealFoodsDBHelper;

    private DatabaseManager(@NonNull Context context) {
        super(context, dbName, null, dbVersion);
        this.context = context.getApplicationContext(); // store the application context to avoid leaking an Activity
        mealsDBHelper = new MealsDBHelper(this.context);
        mealFoodsDBHelper = new MealFoodsDBHelper(this.context);
    }

    private static DatabaseManager instance;
    public static synchronized DatabaseManager getInstance(Context context) {
        if (instance == null) {
            instance = new DatabaseManager(context);
        }
        return instance;
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        //...
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        //...
    }
}

Let’s look at the MealsDBHelper class, which does little more than mediate CRUD operations against the database (for example, when a user wants to change a meal’s name):

public class MealsDBHelper {
    public static final String MEALS_TABLE_NAME = "Meals";
    public static final String MEAL_ID_COLUMN = "Meal_ID";
    public static final String MEAL_NAME_COLUMN = "Meal_Name";
    public static final String MEAL_POS_COLUMN = "Meal_Pos";
    public static final String MEAL_DATE_COLUMN = "Date";


    private Context context; // Context to pass to DatabaseManager.getInstance in the methods of this class
    public MealsDBHelper(Context context){
        this.context = context;
    }

    // Example of one of several methods that operate on the 'Meals' table.
    public void updateMealName(long mealId, String mealName) {
        // Parameterized statement rather than string concatenation, to avoid SQL injection.
        DatabaseManager.getInstance(context).getWritableDatabase().execSQL(
                "UPDATE " + MEALS_TABLE_NAME + " SET " + MEAL_NAME_COLUMN + " = ? WHERE " + MEAL_ID_COLUMN + " = ?",
                new Object[]{mealName, mealId});
    }
}

Now, whether the activity modifies one, two, or even three tables, I can update the meal’s name like so:

DatabaseManager.getInstance(context).mealsDBHelper.updateMealName(mealId, mealName);

That’s because the DatabaseManager holds a reference to all the other helper classes.

What I like about this approach is that I can simply reach any table and operate on it as needed; what I don’t like is that the DatabaseManager class holds references to all the helpers, and I’m not sure that’s the best design.

Second approach:

This approach does not keep the helper classes inside the DatabaseManager.

DatabaseManager.java:

public class DatabaseManager extends SQLiteOpenHelper {
    private static final String dbName = "logs.db";
    private static final int dbVersion = 1;

    private final Context context;

    private DatabaseManager(@NonNull Context context) {
        super(context, dbName, null, dbVersion);
        this.context = context.getApplicationContext(); // store the application context to avoid leaking an Activity
    }

    private static DatabaseManager instance;
    public static synchronized DatabaseManager getInstance(Context context) {
        if (instance == null) {
            instance = new DatabaseManager(context);
        }
        return instance;
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        //...
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        //...
    }
}

Now, if my activity needs to modify both the ‘Meals’ and ‘MealFoods’ tables, I can construct the helpers in onCreate:

public class AddFoodActivity extends AppCompatActivity{

    MealsDBHelper mealsDBHelper;
    MealFoodsDBHelper mealFoodsDBHelper;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(..);

        mealsDBHelper = new MealsDBHelper(this);
        mealFoodsDBHelper = new MealFoodsDBHelper(this);
    }

    // Then whenever I need to modify a table, I call e.g.:
    // mealsDBHelper.updateMealName(mealId, mealName);
}

What I like about this approach is that a single line modifies the table I need; what I don’t like is that I have to declare helper references in every activity, which makes the code somewhat inconsistent.

Basically: are there any downsides to either of these approaches?

I’ll admit I left a big chunk of code out of this post, but only because I don’t think it would add much to your understanding of the problem, which is a more general one.

Thank you very much for any kind of help.

dnd 5e – Is a Wild Shaped druid effectively immune to the Detect Thoughts spell?

RAW, if a form’s Languages section is blank, that form is immune to detect thoughts.

This is the rules-as-written ruling. A druid Wild Shaped into a wolf cannot speak any language, so detect thoughts does not apply, as detect thoughts says:

You can’t detect a creature with an Intelligence of 3 or lower or one that doesn’t speak any language.

As for telepathy, it probably depends on the wording of the particular feature granting telepathy. If I’m DMing, it doesn’t matter because…

Some beasts have a language, which may still protect them from detect thoughts.

For example, the Giant Elk:

Languages Giant Elk, understands Common, Elvish, and Sylvan but can’t speak them

So our druid uses Wild Shape to become a Giant Elk. Now they can speak the Giant Elk language.

This means they can think in Giant Elk.

If the caster of detect thoughts does not themselves speak Giant Elk, then they wouldn’t be able to understand the thoughts of the druid.

Of note, the answer in this Q&A argues to the contrary, so it would not be unreasonable for a DM to rule that thinking in Giant Elk doesn’t help.

I would rule that the druid is not immune to detect thoughts.

This is a situation where I would rule against the RAW ruling. Wild Shape has this feature:

you retain your alignment, personality, and Intelligence, Wisdom, and Charisma scores.

Despite taking the form of a beast, you retain all of your mental ability scores: you are still just as wise and intelligent as you were in your normal form. To me, this means that a druid Wild Shaped into a bear does not think like a bear, but thinks just as they did in their normal form. I would rule that you still think in whatever language you usually think in, so you are still vulnerable to detect thoughts (unless you think in a language the caster does not know).

differential equations – Step size is effectively zero

I’ve been trying to solve the Bodewadt flow equations, a system of differential equations:

F² - G² + HF' - F'' + 1 = 0

2GF + HG' - G'' = 0

2F + H' = 0

Boundary conditions:

F(0)=G(0)=H(0)=0

F(∞) = 0, G(∞) = 1

I turned them into a system of first-order ordinary differential equations with F = x, F' = y, G = z, G' = s, and H = p. The thing is, I don’t have the derivatives F'(0) and G'(0), so I have to make use of the boundary conditions at infinity; in practice I replace infinity with a sufficiently large number, and approximately 14 works fine.

sol = {x[t], y[t], z[t], s[t], p[t]} /.
  NDSolve[{x'[t] == y[t],
    y'[t] == x[t]^2 - z[t]^2 + p[t] y[t] + 1, (* HF' term is p[t] y[t], since H = p *)
    s'[t] == 2 z[t] x[t] + p[t] s[t], z'[t] == s[t], p'[t] == -2 x[t],
    x[0] == 0, z[0] == 0, p[0] == 0, x[14] == 0, z[14] == 1},
   {x, y, z, s, p}, {t, 0, 14}]

The problem is that when I run the code I get the following error:

NDSolve: At t == 3.4508216573870163`, step size is effectively zero; singularity or stiff system suspected.

I’m really not acquainted with Mathematica, so I don’t know how to solve a system of ODEs with boundary conditions at infinity.
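As a sanity check outside Mathematica, here is the same truncated-domain problem set up with SciPy’s collocation-based solve_bvp (a tool I do know; the mesh and initial guess are my own choices):

import numpy as np
from scipy.integrate import solve_bvp

T = 14.0  # truncation of "infinity", as above

def rhs(t, u):
    F, Fp, G, Gp, H = u              # u = [F, F', G, G', H]
    return np.vstack([
        Fp,
        F**2 - G**2 + H * Fp + 1.0,  # F'' from the first equation
        Gp,
        2.0 * F * G + H * Gp,        # G'' from the second equation
        -2.0 * F,                    # H'  from the third equation
    ])

def bc(u0, uT):
    # F(0) = G(0) = H(0) = 0, F(T) = 0, G(T) = 1
    return np.array([u0[0], u0[2], u0[4], uT[0], uT[2] - 1.0])

t = np.linspace(0.0, T, 200)
u_guess = np.zeros((5, t.size))
u_guess[2] = t / T                   # rough guess: G climbs from 0 to 1
sol = solve_bvp(rhs, bc, t, u_guess, max_nodes=20000)
print(sol.status, sol.message)

A collocation method like this may avoid the sensitivity of shooting, which is presumably what NDSolve is struggling with here.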

Thanks!