dnd 5e – Is it ok to lie to players rolling an insight?

A successful Insight check should reveal useful information to players, and an unsuccessful check should emphasize uncertainty.

My guiding principles with Insight are:

  • It reveals information about the immediate situation being
    examined, not necessarily about the world itself
  • Success suggests a keen understanding of the circumstance, while
    failure indicates poor understanding

I feel it is important to include both elements.

In your specific example, on a successful Insight check I would present these elements as the bloodthirsty servant being sincere in his belief that he works for the paladin. Players should not learn the true state of the world (definitively discovering the relationship between these NPCs just from evaluating the servant's claim) by examining what the servant says.

In the case of a failed Insight check, I would narrate the outcome as a lack of information, rather than as certainty that incorrect information is accurate. The failed check just means they have no particular understanding of the meta-information about the servant and the assertion he has made. The narration I favor emphasizes that: “he doesn’t seem to be obviously lying to you, but you can’t get a good read on him at all.”

I recommend not forbidding a check just because there is no dishonesty to discover. Refusing the roll directly reveals to players the same information they would get on a successful Insight check, but without having to roll. When something seems off to players, or they want to double-check information they have received, they should be encouraged to find it out via their characters’ ability to examine what they know and perceive.

I suggest not giving false information on a poor roll for similar reasons: players will know they rolled poorly (unless you use a hidden-roll mechanic), and so telling them definitive information pretty clearly marks that information as unreliable. I also advocate not directly lying to players more broadly, but that’s out of scope here.


My experience with running Insight checks this way has been that they help situate players in the game, even when a check doesn’t shed much light on the plot. They want more information, and if an Insight check doesn’t provide it they either have to hope for the best and stay wary, or they try to verify claims in other ways (like investigating the claim after the conversation). The opportunity for NPCs to lie to or otherwise deceive the PCs is a part of the adventure, distinct from the players’ dependence on me, the GM, to provide the information necessary for the players to play the game at all.

Using drush config import and update db in a rolling deployment (k8s) environment

Disclaimer: complete Drupal newbie here. I’m the infrastructure guy in the team.

We are running various Drupal 8.9 instances on Kubernetes using a managed MariaDB outside k8s.

We build Docker containers for each instance on a build server using composer. Then, once deployed, a script initializes Drupal using a combination of drush cim, drush cr, drush updb and a few conditionals.
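
For reference, the relevant part of that init script is roughly this (simplified here; the real script wraps these calls in a few conditionals):

    #!/bin/sh
    set -e

    # Runs once per new container, against the shared database
    drush cim -y    # import the config baked into the image at build time
    drush cr        # rebuild caches
    drush updb -y   # apply pending database updates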

Kubernetes provides a rolling deployment so the current instance keeps running until the new container is up.
Now here’s the issue I can’t wrap my head around: the new and old instances both write to the same database, so drush cim from the new instance runs against the same DB as the live environment, possibly breaking the live instance.

I’m currently trying to catch this in my init script, but it proves to be non-trivial. Ideas I came up with, like cloning the DB, doing the import there, testing whether it works, and then reimporting on the live instance, seem overly complicated. Right now I do a dump before the drush cim so I can at least restore the instance if it fails, but the site might still go down for a few minutes. Is there a Drupal way of doing this properly? Maybe a config import dry run, or forcing drush cim to use an SQL transaction? Do you have any ideas or input? Anything helps. Thank you!

sql server – Extended Events Session Rolling Files Too Quickly on Managed Azure SQL Instance

We have a new extended events session we are kicking off to record all SQL and RPC run on the database.
This is running on an Azure Managed SQL instance and writing event files to Azure Blob Storage.
For some reason, the files are rolling over when they reach 27 kilobytes. I’m trying to get them to fill to 100 megabytes before rolling over, the way our old SQL Trace files used to.

Any ideas why?

CREATE EVENT SESSION AllQueries
         ON DATABASE
          ADD EVENT sqlserver.begin_tran_starting(
              ACTION(sqlserver.session_id)),
          ADD EVENT sqlserver.commit_tran_completed(
              ACTION(sqlserver.session_id)),
          ADD EVENT sqlserver.error_reported(
              ACTION(package0.callstack,sqlserver.database_id,sqlserver.session_id,sqlserver.sql_text,sqlserver.tsql_stack)
              WHERE ((severity)>=(20) OR ((error_number)=(17803) OR (error_number)=(701) OR (error_number)=(802) OR (error_number)=(8645) OR (error_number)=(8651) OR (error_number)=(8657) OR (error_number)=(8902) OR (error_number)=(41354) OR (error_number)=(41355) OR (error_number)=(41367) OR (error_number)=(41384) OR (error_number)=(41336) OR (error_number)=(41309) OR (error_number)=(41312) OR (error_number)=(41313)))),
          ADD EVENT sqlserver.existing_connection(
              ACTION(package0.event_sequence,sqlserver.client_hostname,sqlserver.session_id)),
          ADD EVENT sqlserver.login(SET collect_options_text=(1)
              ACTION(package0.event_sequence,sqlserver.client_hostname,sqlserver.session_id)),
          ADD EVENT sqlserver.logout(
              ACTION(package0.event_sequence,sqlserver.session_id)),
          ADD EVENT sqlserver.physical_page_read(
              ACTION(sqlserver.session_id)),
          ADD EVENT sqlserver.rollback_tran_completed,
          ADD EVENT sqlserver.rpc_completed(SET collect_statement=(1)
              ACTION(sqlserver.database_id,sqlserver.database_name,sqlserver.query_hash,sqlserver.query_plan_hash,sqlserver.session_id,sqlserver.sql_text,sqlserver.username)),
          ADD EVENT sqlserver.sql_batch_completed(
              ACTION(sqlserver.database_id,sqlserver.database_name,sqlserver.query_hash,sqlserver.query_plan_hash,sqlserver.session_id,sqlserver.sql_text))
         ADD TARGET
             package0.event_file
                 (
                 SET filename =
                'https://storage.blob.core.windows.net/storage/sqlxeventfile.xel',
                max_file_size = 100,
                max_rollover_files = 200
                 )
         WITH
             (MAX_MEMORY = 200 MB,
             MAX_DISPATCH_LATENCY = 3600 SECONDS)
GO

statistics – Dice rolling mechanic where modifiers have a predictable and consistent effect on difficulty

I am looking for a dice rolling mechanic that makes it such that increasing or decreasing a modifier on the roll has a constant multiplier effect on the probability of the outcome.

Say you have to make a roll for STAT. Such a roll has a probability of success of 50%. Now say you roll with a modifier of -1: this roll has a probability of success of 25%. A -2 gives 12.5%, -3 gives 6.25%, and so on, always halving. Going the other way, the same should happen to the probability of failure: each +1 divides it by the same factor.
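
In other words, the property I’m after (just restating the above, with m as the per-step multiplier, m = 0.5 in this example) is:

    P(success at modifier -k) = 0.5 * m^k
    P(failure at modifier +k) = 0.5 * m^k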

It doesn’t have to be a multiplier of 0.5; in fact, I’d much rather it were somewhere around 0.66–0.75, not such an extreme change.

Is there any kind of dice rolling mechanic I can use to simulate something like this?

dnd 5e – If you can only hit an enemy by rolling a natural 20, will using the Chronurgy wizard’s Convergent Future feature result in a critical hit?

I disagree with both answers. That is, I feel that the rules do adequately answer the question, and do turn the attack into a critical hit.

There are two scenarios of interest: the actual target AC, taking into account the attacker’s attack bonus, requires a die roll of exactly 20, or it requires something higher than 20. Let’s consider the latter case first. In that scenario, the only way to hit is to apply the rule under “Rolling 1 or 20”. Let’s also assume that Convergent Future allows success in every possible scenario (which seems to be the intent). Then…

From PHB 194:

If the d20 roll for an attack is a 20, the attack hits regardless of any modifiers or the target’s AC. In addition, the attack is a critical hit, as explained later in this chapter.

The description of Convergent Future specifies that the player can “ignore the die roll and decide whether the number rolled is the minimum needed to succeed”.

Now, admittedly this is a difficult sentence to parse. It first says you ignore the die roll, and then says you decide whether that very die roll is the minimum number to succeed.

One way to interpret that is that they don’t actually mean you ignore the die roll, but rather you get to selectively adjust the target number (i.e. AC) so that it temporarily equals what you rolled. That would be consistent with “decide whether the number rolled is the minimum needed”.

But that’s clearly in conflict with the idea that the die roll is ignored. You can’t ignore the result and then say that the result does apply and you’re just changing the target value for success.

Another way to interpret the sentence is that by “decide whether the number rolled is the minimum”, they mean that rather than the number shown on the die itself, you select a different number as “the number rolled”, one that is equal to the minimum required for success. I find that this interpretation is much more sensible, requiring a lot less in the way of linguistic contortion to justify. The sentence structure is awkward, but the intent seems clear to me.

Okay, so having established that you are in fact changing what’s rolled, now we look at the rule for critical hit. The only way to apply the rule to successfully hit at all is to assume that the “Rolling 1 or 20” paragraph applies. If it applies, then necessarily it must be the case that “the d20 roll for an attack is 20”. There is no other rule that would allow for a hit in this scenario, and so we must be applying that paragraph. If we are not, then there is no way for Convergent Future to result in success, but we took as a given that it always can.

Since we are applying that paragraph, then there’s no reason to think that the whole paragraph does not apply. So the second sentence must also apply, meaning that when the die roll was taken to be a 20, that necessarily means it’s a critical hit as well.

Okay, so what about the scenario where the AC requires exactly a 20 on the die? In that case, the rules do allow for success even without the language under “Rolling 1 or 20”. It can be treated as a normal hit, based on the actual die roll meeting the target number required.

Suppose we say that in this case, it’s not a critical hit. Well, technically that’d be fine according to the rules as written. Except that we’ve already determined that if the AC requires a die roll higher than 20, you still hit and it’s a critical.

I don’t think it makes sense that you can’t have a critical hit when the AC is exactly 20 plus the attacker’s bonus, but can when it’s higher. If that were the case, a higher AC would actually be worse for the target given an identical attack roll. That seems illogical to me, and so I find it contradictory with the (IMHO) logically sound conclusion of the other scenario.

So for logical consistency, it must also be the case that if AC requires a die roll of 20, Convergent Future also results in a critical hit.

That covers both scenarios, and in both cases you wind up with a critical hit.

Now, all that said: the above all relies on the assumption that Convergent Future is intended to always guarantee success. If you don’t take that as a given, then the conclusion could be the opposite: Convergent Future doesn’t actually change the roll as used in “Rolling 1 or 20”, and so that part of the rules would never apply. Only by actually rolling a 20 could one hit a target with AC higher than 20 plus the attacker’s bonus, and even for targets with AC of exactly 20 plus the attacker’s bonus, Convergent Future can only give you a hit, not a critical hit.

I don’t personally think that’s the right way to interpret Convergent Future, but I would have to admit that if someone chose to do so, it would change the analysis above.

javascript – Rolling average over predictions

I am using a Deep Learning model I trained and want to improve its accuracy by implementing a rolling average over predictions. I have already done it, but want to improve it, as I am making predictions every 3 frames.

Input:
Each index represents a class and the respective prediction the model makes.
The order is not alphanumeric, as it’s based on my folder structure during training:

1st Index : Class 0

2nd Index : Class 10

3rd Index : Class 5

[
  0.9288286566734314,
  0.008770409040153027,
  0.062401000410318375,
]

So the goal is to store the highest probability and its index at every frame. Based on the structure I mentioned above, I add the probability to its class. Every time a new probability gets added, I increase a counter. Once the counter reaches N, I find the class which has the most predictions, sum its probabilities, and return the average along with the class it belongs to.

N = 5
Prediction {
  "0": [0.9811, 0.9924, 0.8763],
  "5": [0.9023],
  "10": [0.9232]
}

Code in React (the model is loaded on a mobile phone):

rollingPrediction does the averaging of the predictions and is passed the array of predictions mentioned in the input.

There are 2 helper functions to find the sum and to find the max value and max index in an array.

  const [allPredictions, setAllPredictions] = useState({
    "0": [],
    "5": [],
    "10": []
  });
  let queueSize = 0;
  let total = 0;

  const rollingPrediction = arr => {
    const { max, maxIndex } = indexOfMax(arr);
    const maxFixed = parseFloat(max.toFixed(2));
    if (maxIndex === 0) {
      allPredictions["0"].push(maxFixed);
      queueSize += 1;
    } else if (maxIndex === 1) {
      allPredictions["10"].push(maxFixed);
      queueSize += 1;
    } else if (maxIndex === 2) {
      allPredictions["5"].push(maxFixed);
      queueSize += 1;
    }
    console.log(`Queue : ${queueSize}`);
    if (queueSize > 4) {
      console.log("Queue Size Max");
      const arr1 = allPredictions["0"].length;
      const arr2 = allPredictions["5"].length;
      const arr3 = allPredictions["10"].length;

      if (arr1 > arr2 && arr3) {
        const sum = sumOfArray(allPredictions["0"]);
        const prob = sum / arr1;
        console.log(`Awareness level 0 | Probability: ${prob}`);
      } else if (arr2 > arr1 && arr3) {
        const sum = sumOfArray(allPredictions["5"]);
        const prob = sum / arr2;
        console.log(`Awareness level 5 | Probability: ${prob}`);
      } else if (arr3 > arr2 && arr1) {
        const sum = sumOfArray(allPredictions["10"]);
        const prob = sum / arr3;
        console.log(`Awareness level 10 | Probability: ${prob}`);
      } else {
        console.log("No rolling prediction");
      }
      queueSize = 0;
      allPredictions["0"] = [];
      allPredictions["5"] = [];
      allPredictions["10"] = [];
    }
  };

  const sumOfArray = arr => {
    for (let i = 0; i < arr.length; i++) {
      total += arr[i];
    }
    return total;
  };

  const indexOfMax = arr => {
    if (arr.length === 0) {
      return -1;
    }
    let max = arr[0];
    let maxIndex = 0;

    for (let i = 0; i < arr.length; i++) {
      if (arr[i] > max) {
        max = arr[i];
        maxIndex = i;
      }
    }
    return {
      max,
      maxIndex
    };
  };

mysql 5.6 – Rolling back transaction with trx_mysql_thread_id of 0 create deadlocks on update

I’m using MySQL 5.6.41 on AWS RDS.

I have seen, recently, lot of transactions ending as deadlocks.

Using
SELECT * FROM information_schema.innodb_trx;

I found that a transaction was always there.

+--------------+--------------+---------------------+-----------------------+------------------+------------+---------------------+-----------+---------------------+-------------------+-------------------+------------------+-----------------------+-----------------+-------------------+-------------------------+---------------------+-------------------+------------------------+----------------------------+---------------------------+---------------------------+------------------+----------------------------+
|    trx_id    |  trx_state   |     trx_started     | trx_requested_lock_id | trx_wait_started | trx_weight | trx_mysql_thread_id | trx_query | trx_operation_state | trx_tables_in_use | trx_tables_locked | trx_lock_structs | trx_lock_memory_bytes | trx_rows_locked | trx_rows_modified | trx_concurrency_tickets | trx_isolation_level | trx_unique_checks | trx_foreign_key_checks | trx_last_foreign_key_error | trx_adaptive_hash_latched | trx_adaptive_hash_timeout | trx_is_read_only | trx_autocommit_non_locking |
+--------------+--------------+---------------------+-----------------------+------------------+------------+---------------------+-----------+---------------------+-------------------+-------------------+------------------+-----------------------+-----------------+-------------------+-------------------------+---------------------+-------------------+------------------------+----------------------------+---------------------------+---------------------------+------------------+----------------------------+
| 294492387379 | ROLLING BACK | 2020-09-10 09:03:09 |                       |                  |   60603568 |                   0 |           |                     |                 0 |                 0 |             1911 |                194088 |            1676 |          60601657 |                       0 | REPEATABLE READ     |                 1 |                      1 |                            |                         0 |                     10000 |                0 |                          0 |
+--------------+--------------+---------------------+-----------------------+------------------+------------+---------------------+-----------+---------------------+-------------------+-------------------+------------------+-----------------------+-----------------+-------------------+-------------------------+---------------------+-------------------+------------------------+----------------------------+---------------------------+---------------------------+------------------+----------------------------+

I have found a ROLLING BACK transaction with a trx_mysql_thread_id of 0.
This transaction was the one blocking others.

To be sure, I ran this:
SELECT * FROM information_schema.innodb_lock_waits;

+-------------------+-----------------------------+-----------------+-----------------------------+
| requesting_trx_id |      requested_lock_id      | blocking_trx_id |      blocking_lock_id       |
+-------------------+-----------------------------+-----------------+-----------------------------+
|      294906405784 | 294906405784:426:27705081:4 |    294492387379 | 294492387379:426:27705081:4 |
|      294906405563 | 294906405563:426:16826188:4 |    294492387379 | 294492387379:426:16826188:4 |
+-------------------+-----------------------------+-----------------+-----------------------------+

This command
XA RECOVER;
gives no results.

Is there a way to terminate the blocking transaction? Or find what is causing this?

dnd 5e – What is the mechanical reason for rolling the same initiative for a group of creatures?

If you are facing 20 goblins, it's a lot quicker to have them all act at the same time, rather than having them act one by one, which involves tracking an extra 19 initiative slots.

The flip side to this is that once your group gets too large, there's a very real possibility that the monsters will be able to take down a player from full HP without a single player getting to act in between.

That's why it's often smarter to divide the monsters up into smaller groups, such as having 3 groups of 5 goblins each instead of one initiative block of 15 goblins. That way, you still keep the game moving quickly, but you don't run the risk of 2 players dying before they even get to act just because the goblins happened to roll a 20 for initiative.