dnd 5e – Does spike growth inflict cumulative damage on large and bigger creatures?

Spike growth:

The ground in a 20-foot radius centered on a point within range twists and sprouts hard spikes and thorns. The area becomes difficult terrain for the duration. When a creature moves into or within the area, it takes 2d4 piercing damage for every 5 feet it travels.
The transformation of the ground is camouflaged to look natural. Any creature that can’t see the area at the time the spell is cast must make a Wisdom (Perception) check against your spell save DC to recognize the terrain as hazardous before entering it.

“That day, the druid cast spike growth where a gargantuan, half-burrowed Sandworm stood… and for the next half hour, everybody stopped playing and started frantically browsing through the manuals to figure out what to do.”

So, the question: does size matter?

I take for granted that you can choose as the epicenter of the spell the point where the creature touches the ground: it won’t affect the space where the creature’s body is, nor anything below it, but the surrounding terrain at ground level should be affected (and although you can’t actually see that point, the spell doesn’t require you to, contrary to the usual routine). So the token of this sizeable creature occupies a 16-square (4×4) space on the grid (12 hexagons, if you’re into that). At the start of its turn, it finds itself in the middle of a semi-hidden spike field. Since the spell only hurts whoever moves into or within the area, I infer that creatures already in it that decide not to move won’t get hurt, all the more if they’re half-burrowed. Now, if the creature notices the danger (and it should, since it was there when the spell was cast) but still decides to move across the terrain (though I guess it could return underground, where half its body lies, without repercussions), how much does it get hurt? The spell gives a damage-per-movement ratio, and with smaller creatures that’s no problem. But what about bigger monsters? Is this 2nd-level spell a colossus bane, which indirectly deals ×4 damage to Large monsters, ×9 to Huge ones, and ×16 to Gargantuan ones per square (= 5 feet)?
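The two readings can be made concrete with a quick expected-damage calculation (the per-square multiplier below is my framing of the question, not a rules statement):

```python
# Expected spike growth damage under two readings (assumptions mine):
# reading A: damage depends only on the distance moved;
# reading B: damage is also multiplied by the number of occupied 5-ft
#            squares (the x4 / x9 / x16 scaling described above).
AVG_2D4 = 2 * (1 + 4) / 2   # expected roll of 2d4 is 5

def expected_damage(feet_moved, squares_per_side=1, per_square=False):
    steps = feet_moved // 5
    multiplier = squares_per_side ** 2 if per_square else 1
    return AVG_2D4 * steps * multiplier

# A Gargantuan creature (4x4 squares) moving 10 feet:
print(expected_damage(10))                                       # reading A: 10.0
print(expected_damage(10, squares_per_side=4, per_square=True))  # reading B: 160.0
```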

RAW, I’d rule against it: bigger creatures aren’t affected multiple times by effects that cover more than one of their squares (think fireball: no matter the size, whether the spell hits just one square of a token or its whole circumference, the damage applies only once). That said, it seems to me like this huge AoE spell should indeed scale with the size of its victims, as more spikes pierce through their flesh. Also, it wouldn’t be the first time a low-level spell was hugely effective against specific creatures (heat metal against full-plated enemies comes to mind).

What do you think?

dnd 5e – Can you use a large creature’s dead body as a means to walk over a grease spell so that you are unaffected by that spell?

Here’s the spell description:

Slick grease covers the ground in a 10-foot square centered on a point within range and turns it into difficult terrain for the duration.

When the grease appears, each creature standing in its area must succeed on a Dexterity saving throw or fall prone. A creature that enters the area or ends its turn there must also succeed on a Dexterity saving throw or fall prone.

There is no rule saying “the grease spell’s effect is negated when a large creature is lying upon it” in any official source book. We on the internet can only tell you what spells do, based on their descriptions alone, because we are not DMing your game. We cannot change or expand upon the rules and say “this is part of how the spell works”. We do not have that authority; the DM of the game does.

The grease spell covers a 10-foot square, and a Large creature does not cover a 10-foot square:

A creature’s space is the area in feet that it effectively controls in combat, not an expression of its physical dimensions. (PHB p. 191)

Nothing states that you can’t though

“There is no rule for that” does not equal “you can’t do that”. Player characters can (and probably should) do actions not detailed elsewhere in the rules:

When you take your action on your turn, you can take one of the actions presented here, an action you gained from your class or a special feature, or an action that you improvise.

When you describe an action not detailed elsewhere in the rules, the DM tells you whether that action is possible and what kind of roll you need to make, if any.

(PHB p. 192, emphasis mine)

You can try to walk over it, but the result will depend on the situation: how exactly the creature is lying, what kind of body it has, et cetera. It is the DM’s job to adjudicate such things, so this becomes exactly the “ask your DM” type of question.

One thing we can say, though: there is no ignorable “fluff” text in that description. It says “slick grease covers the ground”, so this slick grease is the exact reason why a walking creature “must succeed on a Dexterity saving throw or fall prone”. If you completely cover the grease with something big and heavy enough to provide a sound surface, that probably negates the spell effect (but still, ask your DM).

google analytics – How to calculate a conversion goal value for a large sample size?

I need to assign a conversion goal value for user actions made on a website.

I have already processed the data for a sample size of 210 paid invoices. The statistical analysis of the income values can be seen in the results below…

[Images: statistical summary of the 210 invoice values]

Which parameter should I use as the conversion value and why?
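For what it’s worth, the usual reasoning can be sketched numerically (the invoice amounts below are made up, since the actual 210 values are only shown in the images):

```python
# Hypothetical invoice amounts standing in for the 210 real values.
# The arithmetic mean is the commonly recommended goal value, because
# mean * number_of_conversions then estimates total revenue; the median
# of a right-skewed invoice distribution would understate it.
invoices = [120, 80, 95, 400, 60, 150, 90]

mean = sum(invoices) / len(invoices)
median = sorted(invoices)[len(invoices) // 2]
total_estimate_mean = mean * len(invoices)   # recovers the true total

print(round(mean, 2), median, total_estimate_mean)
```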

Please Note:
I have posted the same question to Mathematics.SE and Stack Overflow, based on the feedback on a similar case: https://meta.stackexchange.com/questions/260823/which-site-should-i-ask-this-math-question-on

performance tuning – Pair-wise equality over large sets of large vectors

I’ve got an interesting performance tuning/algorithmic problem that I’m running into in an optimization context.

I’ve got a set of ~16–50 lists of integers (usually in Range[0, 5], but not restricted to that).
The data might look like this (though my real data is obviously not random):

maxQ = 5;
ndim = 16;
nstates = 100000;
braVecs = RandomInteger[{0, maxQ}, {ndim, nstates}];
ketVecs = braVecs + RandomInteger[{0, 1}, {ndim, nstates}];

Now, for every element q ∈ Subsets[Range[ndim], 4], I need to determine where each pair of braVecs and ketVecs columns is the same except at the indices in q; i.e., for every possible q I need to calculate this:

qComp = Complement[Range[ndim], q];
diffs = braVecs[[qComp]] - ketVecs[[qComp]];
Pick[Range[nstates], Times @@ (1 - Unitize[diffs]), 1]

Just as an example, this is the kind of thing I expect to get out

q = RandomSample[Range[ndim], 4];
qComp = Complement[Range[ndim], q];
diffs = braVecs[[qComp]] - ketVecs[[qComp]];
{q, Pick[Range[nstates], Times @@ (1 - Unitize[diffs]), 1]}

{{2, 9, 6, 4}, {825, 1993, 5577, 5666, 9690, 9856, 11502, 13515, 15680, 18570, 
  19207, 23131, 26986, 27269, 31889, 39396, 39942, 51688, 52520, 54905, 55360, 
  60180, 61682, 66258, 66458, 68742, 71871, 78489, 80906, 90275, 91520, 93184}}

This can obviously be done just by looping, but I’m sure there is an algorithmically more efficient way to do this, maybe using a trie, or maybe using some Boolean algebra to reuse prior calculations? This is important because my ndim can get up to ~50, and there are a huge number of elements in Subsets[Range[50], 4]… I just don’t know what the best way to approach this kind of thing is.
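One way to avoid scanning all states for every q: a state matches a given q exactly when the set of indices where its bra and ket columns differ is a subset of q. Grouping the states by that difference set up front turns each query into at most 2^4 = 16 dictionary lookups, and states differing in more than 4 positions are pruned once during preprocessing. A sketch of the idea in Python (illustrative of the approach, not Mathematica; the bra[d][s] layout is my assumption):

```python
from collections import defaultdict
from itertools import combinations

def build_index(bra, ket, max_q_size=4):
    """Group state indices by the exact set of dimensions where bra and ket
    differ. Only states differing in at most max_q_size dimensions can ever
    match a q of that size. bra[d][s] is dimension d of state s."""
    ndim, nstates = len(bra), len(bra[0])
    index = defaultdict(list)
    for s in range(nstates):
        d = frozenset(i for i in range(ndim) if bra[i][s] != ket[i][s])
        if len(d) <= max_q_size:
            index[d].append(s)
    return index

def matching_states(index, q):
    """States equal everywhere outside q = states whose difference set is a
    subset of q: at most 2^|q| dictionary lookups instead of a full scan."""
    result = []
    for r in range(len(q) + 1):
        for sub in combinations(sorted(q), r):
            result.extend(index.get(frozenset(sub), []))
    return sorted(result)
```

Preprocessing is a single pass over the states; each of the Binomial[ndim, 4] queries then costs 16 lookups plus the size of its output.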

convex optimization – Conjugate gradient and the eigenvectors corresponding to the large eigenvalues

I am working on an optimization problem (for example, conjugate gradient) to solve $Ax=b$, where $A$ is a symmetric positive definite matrix. I understand that CG (conjugate gradient) performs better when the matrix $A$ has a smaller condition number. But I am wondering whether there is a relationship between the eigenvectors corresponding to the largest few eigenvalues and the first few update directions of CG? Any suggestions would be helpful. Thanks!
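As a starting point for experimenting, one can run a few CG steps on a small synthetic SPD system and measure how much of each search direction lies in the span of the top eigenvectors. A pure-Python sketch with a diagonal matrix, where the eigenvectors are simply the coordinate axes (the setup and spectrum are my own, purely illustrative):

```python
# Illustrative experiment (not from the question): run CG on A = diag(eigs),
# whose eigenvectors are the coordinate axes, and track what fraction of
# each search direction lies along the largest-eigenvalue coordinates.
eigs = [1.0, 2.0, 5.0, 50.0, 100.0]   # spectrum of the diagonal SPD matrix
b = [1.0] * len(eigs)

def dot(u, v): return sum(x * y for x, y in zip(u, v))
def axpy(a, u, v): return [a * x + y for x, y in zip(u, v)]  # a*u + v
def matvec(v): return [l * x for l, x in zip(eigs, v)]        # A @ v

x = [0.0] * len(eigs)
r = b[:]                      # residual b - A x, with x = 0
p = r[:]                      # first search direction
fractions = []
for _ in range(3):
    Ap = matvec(p)
    alpha = dot(r, r) / dot(p, Ap)
    x = axpy(alpha, p, x)
    r_new = axpy(-alpha, Ap, r)
    beta = dot(r_new, r_new) / dot(r, r)
    p = axpy(beta, p, r_new)  # next direction: r_new + beta * p
    r = r_new
    # share of the new direction lying in the top-2 eigencoordinates
    fractions.append(dot(p[-2:], p[-2:]) / dot(p, p))
print(fractions)
```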

C# / Unity: Large numbers for idle games using a list

For learning purposes, I am developing the first steps of an idle game with exponentially higher income numbers. Usual int, long, etc. can’t store big enough numbers for that.

I looked through many threads I found on Google, and using a list (or an array) separated into e.g. bronze coins, silver coins, gold coins, etc. seems to be the cleanest approach.

In my first approaches I created this list:

namespace Coins
{
    public class script_coinHandler : MonoBehaviour
    {
        public List<Coin> coinList = new List<Coin>() // list based on the "Coin" class
        {
            new Coin() { Name = "bronze"},
            new Coin() { Name = "silver"},
            new Coin() { Name = "gold"},
            new Coin() { Name = "diamond"},
        }; 
    }

    public class Coin
    {
        public string Name { get; set; }
        public int Amount { get; set; }
        public int Multiplier { get; set; }
    }
    
}

I want each coin type to go up to 999999. This means 1 silver = 1000000 bronze, 1 gold = 1000000 silver.

I can’t get my head around a way to do maths with this list.
Example:

Player has 3000 GOLD, 320000 SILVER, 524321 BRONZE

Player wants to buy something for 20 GOLD, 120000 SILVER, 300000 BRONZE

Just subtracting 20 from GOLD, 120000 from SILVER, and 300000 from BRONZE won’t always work. For example, when a player has no BRONZE but some SILVER, you can’t subtract 300000 BRONZE without converting SILVER to BRONZE first.

Q: What would be an efficient function to subtract the price from the total amount? What about multiplying and dividing?
Accuracy is not important. When spending a few GOLD, nobody will care about 10000 BRONZE.
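Since the denominations are just digits in base 1,000,000, one clean approach is to convert the coin list to a single big number, do the arithmetic there, and convert back; borrowing then happens automatically. A language-agnostic sketch in Python (Python ints are arbitrary precision; in C# you would use System.Numerics.BigInteger or carry/borrow digit by digit):

```python
BASE = 1_000_000   # each denomination holds 0..999,999

def to_units(coins):
    """coins = [bronze, silver, gold, ...], least significant first."""
    return sum(c * BASE ** i for i, c in enumerate(coins))

def from_units(total, ndenoms):
    """Split a single big number back into base-1,000,000 digits."""
    digits = []
    for _ in range(ndenoms):
        total, rem = divmod(total, BASE)
        digits.append(rem)
    return digits

def subtract(balance, price):
    total = to_units(balance) - to_units(price)
    if total < 0:
        raise ValueError("insufficient funds")
    return from_units(total, len(balance))

# 0 BRONZE... wait, rather: 1 SILVER, no BRONZE, minus 300000 BRONZE:
# borrowing from SILVER is automatic
print(subtract([0, 1, 0], [300_000, 0, 0]))   # [700000, 0, 0]
```

Multiplication and division work the same way: convert to units, operate on the big number, convert back.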

algorithms – How to test large tree data structure shape properly?

Here is a sort of B+tree data structure:

class KeyValue {
  constructor(key, value) {
    this.key = key;
    this.value = value;
  }
}

class Node {
  constructor(capacity) {
    // Mimic fixed-size array (avoid accidentally growing it)
    this.children = Object.seal(Array(capacity).fill(null));
    this.childCount = 0; // Number of used slots in children array
    this.treeSize = 0;
    // The algorithm relies on the fact that both KeyValue & Node have a key property:
    this.key = null; // Here it is a property for supporting a search
    // Maintain back-link to parent.
    this.parent = null;
    // Per level in the tree, maintain a doubly linked list
    this.prev = this.next = null;
  }
  updateTreeSize(start, end, sign=1) {
    let sum = 0;
    if (this.isLeaf()) {
        sum = end - start;
    } else {
        for (let i = start; i < end; i++) sum += this.children[i].treeSize;
    }
    if (!sum) return;
    sum *= sign;
    // Apply the sum change to this node and all its ancestors
    for (let node = this; node; node = node.parent) {
        node.treeSize += sum;
    }
  }
  setCapacity(capacity) {
    if (capacity < 1) return;
    // Here we make a new array, and copy the data into it
    let children = Object.seal(Array(capacity).fill(null));
    for (let i = 0; i < this.childCount; i++) children[i] = this.children[i];
    this.children = children;
  }
  isLeaf() {
    return !(this.children[0] instanceof Node);
  }
  index() {
    return this.parent.children.indexOf(this);
  }
  updateKey() {
    for (let node = this; node; node = node.parent) {
      node.key = node.children[0].key;
    }
  }
  wipe(start, end) {
    this.updateTreeSize(start, end, -1);
    this.children.copyWithin(start, end, this.childCount);
    for (let i = this.childCount - end + start; i < this.childCount; i++) {
      this.children[i] = null;
    }
    this.childCount -= end - start;
    // Reduce allocated size if possible
    if (this.childCount * 2 <= this.children.length) this.setCapacity(this.children.length / 2);
    // Update key if first item changed
    if (start === 0 && this.childCount > 0) this.updateKey();
  }
  moveFrom(neighbor, target, start, count = 1) {
    // Note: `start` can have two meanings:
    //   if neighbor is null, it is the value/Node to move to the target
    //   if neighbor is a Node, it is the index from where value(s) have to be moved to the target
    // Make room in target node
    if (this.childCount + count > this.children.length) this.setCapacity(this.children.length * 2);
    this.children.copyWithin(target + count, target, Math.max(target + count, this.childCount));
    this.childCount += count;
    if (neighbor !== null) {
      // Copy the children
      for (let i = 0; i < count; i++) {
        this.children[target + i] = neighbor.children[start + i];
      }
      // Remove the original references
      neighbor.wipe(start, start + count);
    } else {
      this.children[target] = start; // start is value to insert
    }
    this.updateTreeSize(target, target + count, 1);
    // Set parent link(s)
    if (!this.isLeaf()) {
      for (let i = 0; i < count; i++) {
        this.children[target + i].parent = this;
      }
    }
    // Update key if first item changed
    if (target === 0) this.updateKey();
  }
  moveToNext(count) {
    this.next.moveFrom(this, 0, this.childCount - count, count);
  }
  moveFromNext(count) {
    this.moveFrom(this.next, this.childCount, 0, count);
  }
  basicRemove(index) {
    if (!this.isLeaf()) {
      // Take node out of the level's linked list
      let prev = this.children[index].prev;
      let next = this.children[index].next;
      if (prev) prev.next = next;
      if (next) next.prev = prev;
    }
    this.wipe(index, index + 1);
  }
  basicInsert(index, value) {
    this.moveFrom(null, index, value);
    if (value instanceof Node) {
      // Insert node in the level's linked list
      if (index > 0) {
        value.prev = this.children[index - 1];
        value.next = value.prev.next;
      } else if (this.childCount > 1) {
        value.next = this.children[1];
        value.prev = value.next.prev;
      }
      if (value.prev) value.prev.next = value;
      if (value.next) value.next.prev = value;
    }
  }
  pairWithSmallest() {
    return this.prev && (!this.next || this.next.childCount > this.prev.childCount) ?
      [this.prev, this] : [this, this.next];
  }
  toString() {
    return "[" + this.children.map(v => v ?? "-").join() + "]";
  }
}

class Tree {
  constructor(nodeCapacity = 32) {
    this.nodeCapacity = nodeCapacity;
    this.base = new Node(1);
    this.first = this.base; // Head of doubly linked list at bottom level
  }
  locate(key) {
    let node = this.base;
    let low;
    while (true) {
      // Binary search among keys
      low = 1;
      let high = node.childCount;
      while (low < high) {
        let index = (low + high) >> 1;
        if (key >= node.children[index].key) {
          low = index + 1;
        } else {
          high = index;
        }
      }
      low--;
      if (node.isLeaf()) break;
      node = node.children[low];
    }
    if (low < node.childCount && key > node.children[low].key) return [node, low + 1];
    return [node, low];
  }
  get(key) {
    let [node, index] = this.locate(key);
    if (index < node.childCount) {
      let keyValue = node.children[index];
      if (keyValue.key === key) return keyValue.value;
    }
  }
  set(key, value) {
    let [node, index] = this.locate(key);
    if (index < node.childCount && node.children[index].key === key) {
      // already present: update the value
      node.children[index].value = value;
      return;
    }
    let item = new KeyValue(key, value); // item can be a KeyValue or a Node
    while (node.childCount === this.nodeCapacity) { // No room here
      if (index === 0 && node.prev && node.prev.childCount < this.nodeCapacity) {
        return node.prev.basicInsert(node.prev.childCount, item);
      }
      // Check whether we can redistribute (to avoid a split)
      if (node !== this.base) {
        let [left, right] = node.pairWithSmallest();
        let joinedIndex = left === node ? index : left.childCount + index;
        let sumCount = left.childCount + right.childCount + 1;
        if (sumCount <= 2 * this.nodeCapacity) { // redistribute
          let childCount = sumCount >> 1;
          if (node === right) { // redistribute to the left
            let insertInLeft = joinedIndex < childCount;
            left.moveFromNext(childCount - left.childCount - +insertInLeft);
          } else { // redistribute to the right
            let insertInRight = index >= sumCount - childCount;
            left.moveToNext(childCount - right.childCount - +insertInRight);
          }
          if (joinedIndex > left.childCount ||
            joinedIndex === left.childCount && left.childCount > right.childCount) {
            right.basicInsert(joinedIndex - left.childCount, item);
          } else {
            left.basicInsert(joinedIndex, item);
          }
          return;
        }
      }
      // Cannot redistribute: split node
      let childCount = node.childCount >> 1;
      // Create a new node that will later become the right sibling of this node
      let sibling = new Node(childCount);
      // Move half of this node's data to it
      sibling.moveFrom(node, 0, childCount, childCount);
      // Insert the item in either the current node or the new one
      if (index > node.childCount) {
        sibling.basicInsert(index - node.childCount, item);
      } else {
        node.basicInsert(index, item);
      }
      // Is this the base?
      if (!node.parent) {
        // ...then first create a parent, which is the new base
        this.base = new Node(2);
        this.base.basicInsert(0, node);
      }
      // Prepare for inserting the sibling node into the tree
      index = node.index() + 1;
      node = node.parent;
      item = sibling; // item is now a Node
    }
    node.basicInsert(index, item);
  }
  remove(key) {
    let [node, index] = this.locate(key);
    if (index >= node.childCount || node.children[index].key !== key) return; // not found
    while (true) {
      node.basicRemove(index);

      // Exit when node's fill ratio is fine
      if (!node.parent || node.childCount * 2 > this.nodeCapacity) return;
      // Node has potentially too few children, we should either merge or redistribute

      let [left, right] = node.pairWithSmallest();

      if (!left || !right) { // A node with no siblings? Must become the base!
        this.base = node;
        node.parent = null;
        return;
      }
      let sumCount = left.childCount + right.childCount;
      let childCount = sumCount >> 1;

      // Check whether to merge or to redistribute
      if (sumCount > this.nodeCapacity) { // redistribute
        // Move some data from the bigger to the smaller node
        let shift = childCount - node.childCount;
        if (!shift) { // Boundary case: when a redistribution would bring no improvement
          console.assert(node.childCount * 2 === this.nodeCapacity && sumCount === this.nodeCapacity + 1);
          return;
        }
        if (node === left) { // move some children from right to left
          left.moveFromNext(shift);
        } else { // move some children from left to right
          left.moveToNext(shift);
        }
        return;
      }

      // Merge:
      // Move all data from the right to the left
      left.moveFromNext(right.childCount);
      // Prepare to delete right node
      node = right.parent;
      index = right.index();
    }
  }
}

module.exports = Tree;
Tree.Node = Node;
Tree.KeyValue = KeyValue;

It has some initial tests like this:

const Tree = require('./keyValueTree');
const { Node, KeyValue } = Tree;

/* Below this point: these methods are optional */
Tree.prototype[Symbol.iterator] = function*() { // Make tree iterable, yielding key/value pairs
    for (let node = this.first; node; node = node.next) {
        for (let i = 0; i < node.childCount; i++) yield [node.children[i].key, node.children[i].value];
    }
};

Tree.prototype.verify = function() {
    // Raise an error when the tree violates one of the required properties
    if (!this.base || !this.base.childCount) return; // An empty tree is fine.
    if (this.base.parent) throw "base should not have a parent";
    // Perform a breadth first traversal
    let q = [this.base];
    while (q.length) {
        if (q[0].isLeaf() && this.first !== q[0]) throw "this.first is not pointing to first leaf";
        let level = [];
        let last = null;
        for (let parent of q) {
            if (!(parent instanceof Node)) throw "parent is not instance of Node";
            if (parent.children.length > this.nodeCapacity) throw "node's children array is too large";
            if (parent.childCount > 0 && parent.childCount * 2 <= parent.children.length) throw "node's fill ratio is too low";
            for (let i = parent.childCount; i < parent.children.length; i++) {
                if (parent.children[i] !== null) throw "child beyond childCount should be null but is not";
            }
            if (parent.isLeaf()) {
                if (parent.children[0].key !== parent.key) throw "key does not match with first child value";
                for (let value of parent.children.slice(0, parent.childCount)) {
                    if (value === null) throw "leaf has a null as value";
                    if (!(value instanceof KeyValue)) throw "leaf has a non-KeyValue item";
                }
            } else {
                if (parent.children[0].key !== parent.key) throw "key does not match with first child's key";
                for (let node of parent.children.slice(0, parent.childCount)) {
                    if (node === null) throw "internal node has a null as value";
                    if (!(node instanceof Node)) throw "internal node has a non-Node as value";
                    if (node.parent !== parent) throw "wrong parent";
                    if (node.prev !== last) throw "prev link incorrect";
                    if (last && last.next !== node) throw "next link incorrect";
                    if (last && last.children.length + node.children.length <= this.nodeCapacity) {
                        throw "two consecutive siblings have a total number of children that is too small";
                    }
                    if (node.childCount * 2 < this.nodeCapacity) {
                        throw "internal node is too small: " + node;
                    }
                    level.push(node);
                    last = node;
                }
            }
        }
        if (last && last.next) throw "last node in level has a next reference";
        q = level;
    }
};

Tree.prototype.test = function(count=100) {
    const isEqual = () =>
        JSON.stringify([...map].sort((a, b) => a[0] - b[0])) === JSON.stringify([...this]);
    // Create Map to perform the same operations on it as on the tree
    let map = new Map;
    let max = count*2;
    // Perform a series of insertions
    for (let i = 0; i < count; i++) {
        // Choose random key
        let key = Math.floor(Math.random() * max);
        let value = key*2;
        // Perform same insertion in map and tree
        map.set(key, value);
        this.set(key, value);
        // Verify tree consistency and properties
        this.verify();
        // Verify the key/values in the map are the same as in the tree
        console.assert(isEqual(), "tree not same as map");
    }
    // Perform a series of retrievals and updates
    for (let i = 0; i < count; i++) {
        // Choose random key
        let key = Math.floor(Math.random() * max);
        // Perform same retrieval in map and tree
        let value = map.get(key);
        if (value !== this.get(key)) throw "get() returns inconsistent result";
        if (value === undefined) { // value is not in tree
            this.remove(key); // should not alter the tree
        } else { // value is in tree: update it
            map.set(key, value+10);
            this.set(key, value+10);
        }
        // Verify tree consistency and properties
        this.verify();
        // Verify the key/values in the map are the same as in the tree
        console.assert(isEqual(), "tree not same as map");
    }
    // Perform a series of deletions
    for (let i = map.size; i > 0; i--) {
        // Choose random deletion key
        let j = Math.floor(Math.random() * i);
        let key = [...map.keys()][j];
        // Perform same deletion in map and tree
        map.delete(key);
        this.remove(key);
        // Verify tree consistency and properties
        this.verify();
        // Verify the key/values in the map are the same as in the tree
        console.assert(isEqual(), "tree not same as map");
    }
};

// Perform 1000 calls of set (some duplicates),
//    1000 calls of get and updating set calls,
//    and remove calls to remove all nodes,
//    on a tree with node capacity of 8
new Tree(8).test(1000);
console.log("all tests completed");

However, I don’t feel very comfortable with having Math.random used in the tests, and with not being able to clearly see the expected output, which is more what I’m used to.

So I am starting to put together some new tests, which happen to use this allocate function:

const Tree = require('./keyValueTree');

module.exports = allocate;

function allocate(bins, bits) {
  if ((bits & (bits - 1)) != 0) {
    throw "Parameter is not a power of 2";
  }

  if (bits < 32 || bits > 4194304) {
    throw "Bits required out of range";
  }

  var startBinIndex = Math.log2(bits >> 5);
  var lastBin = bins.length - 1;

  for (var binIndex = startBinIndex; binIndex <= lastBin; binIndex++) {
    var bin = bins[binIndex];

    //
    // We have found a bin that is not empty...
    //
    if (bin.base.treeSize != 0) {
      //
      // Calculate the amount of memory this bin takes up
      //
      var thisBinMemorySize = (32 << binIndex);
      var binBlock = bin.first.children[0];
      var memoryAddress = binBlock.key;

      //
      // We are going to return this block
      //
      var allocatedMemoryBlock = memoryAddress;

      //
      // Before we return the above block, we need to remove the block if its
      // count is 1; otherwise decrease the count and adjust the memory start
      // pointer by the bin size
      //
      if (binBlock.value == 1) {
        bin.remove(binBlock.key);
      } else {
        binBlock.value--;
        binBlock.key += thisBinMemorySize;
        bin.first.updateKey();
      }

      //
      // If we want 1024 bits and take them from bin 15, we simply subtract 1024
      // from 4194304, which gives us 4193280; if we then populate bin 3 (1024
      // bits) onward, until bin 14, the exact number we end up populating those
      // bins with is 4193280
      //
      var remainingUnusedMemory = thisBinMemorySize - bits;
      var adjustmentSize = bits;
      while (remainingUnusedMemory != 0) {
        memoryAddress += adjustmentSize;

        bins[startBinIndex].set(memoryAddress, 1);
        startBinIndex++;
        remainingUnusedMemory -= bits;
        adjustmentSize = bits;
        bits <<= 1;
      }

      return allocatedMemoryBlock;
    }
  }
  return null; // out of memory...
}

Here are the tests for that:

const Tree = require('./keyValueTree');
const allocate = require('./allocate');
const assert = require('assert');

test('allocate blocks', () => {
  const bins = createBaseBins();
  var address = allocate(bins, 128);
  assert.strictEqual(address, 0);
  var address = allocate(bins, 256);
  assert.strictEqual(address, 256);
  var address = allocate(bins, 128);
  assert.strictEqual(address, 128);
  var address = allocate(bins, 256);
  assert.strictEqual(address, 512);
  var address = allocate(bins, 128);
  assert.strictEqual(address, 768);
  var address = allocate(bins, 4096);
  assert.strictEqual(address, 4096);
  // console.log(bins[15].first)
  // assert.deepStrictEqual(bins, [
  //   [],
  //   [],
  //   [ { start: 896, count: 1 } ],
  //   [],
  //   [],
  //   [ { start: 1024, count: 1 } ],
  //   [ { start: 2048, count: 1 } ],
  //   [],
  //   [ { start: 8192, count: 1 } ],
  //   [ { start: 16384, count: 1 } ],
  //   [ { start: 32768, count: 1 } ],
  //   [ { start: 65536, count: 1 } ],
  //   [ { start: 131072, count: 1 } ],
  //   [ { start: 262144, count: 1 } ],
  //   [ { start: 524288, count: 1 } ],
  //   [ { start: 1048576, count: 99 } ]
  // ])
})

function test(name, fn) {
  // console.log(name)
  fn()
}

function createBaseBins() {
  let bins = [
    new Tree, // 128 bits each chunk
    new Tree, // 256
    new Tree, // 512
    new Tree, // 1024
    new Tree, // 2048
    new Tree, // 4096
    new Tree, // 8192
    new Tree, // 16384
    new Tree, // 32768
    new Tree, // 65536
    new Tree, // 131072
    new Tree, // 262144
    new Tree, // 524288
    new Tree, // 1048576
    new Tree, // 2097152
    new Tree, // 4194304
  ];
  bins[bins.length - 1].set(0, 100);
  return bins;
}

I commented out what I had in the test before I started using the tree data structure. It was nice to test the full structure of the “bins” layout, because in this case I am working on both the tree implementation and the allocate function, and seeing that everything is laid out correctly is helpful.

What I’m wondering, though, is: what is the best way to write tests for this system? Should I be testing the full structure of the “bins” layout, or is there a more standard approach? I didn’t opt for testing specific nodes in the tree, because it’s hard to read and follow after a few dozen nodes are tested and the test gets large.

Besides how to test the bins, how do I test the permutations of the tree? This tree changes shape every power-of-two number of nodes, and should shrink back down when it drops below a power of two. So it’s like I would take snapshots of the tree and assert that I see the expected tree structure at different checkpoints. Is that the right general approach? If not, what is?

Once I have tested a few snapshots of the tree, how do I know whether I have covered all of the things that can happen to the tree? Is there an easy way to tell, or do I have to deeply re-understand how the implementation works and figure out from that what should be tested?

Just wondering about general approaches here, not the specifics of exactly what to test. I’m thinking about methodology, workflow, and debugging above all.
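One common pattern that addresses both concerns, sketched in Python (illustrative only; the actual suite is JavaScript): seed the RNG so the “random” inputs are identical on every run, and snapshot the structure only at the sizes where its shape is expected to change.

```python
import random

def snapshots(seed, n=64):
    """Record a structure summary at power-of-two checkpoints, the sizes
    where a capacity-driven tree changes shape. A plain `set` stands in
    for the real tree here; the actual suite would record tree.toString()."""
    rng = random.Random(seed)            # fixed seed: reproducible inputs
    keys = [rng.randrange(10 ** 6) for _ in range(n)]
    tree = set()
    out = {}
    for i, k in enumerate(keys, start=1):
        tree.add(k)
        if i & (i - 1) == 0:             # i is a power of two: a checkpoint
            out[i] = len(tree)
    return out

# An identical seed gives identical snapshots, so they can be recorded once
# and then asserted verbatim on every later run, despite being "random".
assert snapshots(1234) == snapshots(1234)
```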

reference request – Given an input sequence of real numbers, select (one of) the closest sequence(s) from a fixed large set of sequences

We are given a set $S$ of $m \gg 1$ sequences of $n$ elements, where each sequence $s \in S$ belongs to $\mathbb{R}^n$.

In the problem I am trying to solve, we obtain, in a sequential fashion, a new sequence $s_r$ at each round $r \ge 1$, and the goal is to find the sequence in $S$ closest to $s_r$, possibly in an approximate way. The distance between two sequences is the Euclidean distance. How can we preprocess and organize the information of the sequences in $S$ to solve this problem, focusing on the trade-off between time complexity and distance minimization?

I guess we can use sampling and randomized algorithms/data structures. Is there any solution to this problem already in the related literature?
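For the record, one standard preprocessing scheme with a tunable time/accuracy trade-off is locality-sensitive hashing for Euclidean distance (the p-stable scheme of Datar et al., 2004). A single-table sketch in Python (the parameters and bucket-probing strategy are illustrative; practical uses combine several hash functions and tables):

```python
import math
import random

class EuclideanLSH:
    """One hash table of the p-stable Euclidean LSH family:
    h(x) = floor((a.x + b) / w) with Gaussian a, so nearby sequences tend
    to collide, and a query scans one bucket instead of all m sequences."""
    def __init__(self, n, w=4.0, seed=0):
        rng = random.Random(seed)
        self.a = [rng.gauss(0, 1) for _ in range(n)]  # random projection
        self.b = rng.uniform(0, w)                    # random offset
        self.w = w                                    # bucket width
        self.buckets = {}

    def _h(self, x):
        proj = sum(ai * xi for ai, xi in zip(self.a, x))
        return math.floor((proj + self.b) / self.w)

    def add(self, x):
        self.buckets.setdefault(self._h(x), []).append(x)

    def query(self, x):
        # Probe the query's bucket and its two neighbors; fall back to a
        # full scan if they are empty (a real implementation would instead
        # use several independent tables).
        h = self._h(x)
        cand = [v for k in (h - 1, h, h + 1) for v in self.buckets.get(k, [])]
        if not cand:
            cand = [v for vs in self.buckets.values() for v in vs]
        return min(cand, key=lambda s: sum((si - xi) ** 2 for si, xi in zip(s, x)))
```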

database – Suitable low code platforms for viewing large amounts of data

Currently we use Tableau, which is great for data visualization. However, we have some use cases where we want to view large amounts of data individually in tables, or search for specific database rows. Tableau isn’t made for this and doesn’t perform well enough for it.

I tried a Microsoft PowerApps trial and it worked quite well, except that it will not pull more than 500 rows from the database.

Does anyone have experience with low-code platforms or tech stacks that support reading data from databases larger than 500 GB and allow for search functionality?

probability – Weak large deviation principle with non-unique rate function

In the book Large deviations techniques and applications by Dembo and Zeitouni, page 118, it is stated that

if $X$ is a locally compact space, the rate function is unique as soon as a weak LDP holds. As shown in Exercise 4.1.30, if $X$ is a Polish space, then the rate function is also unique as soon as a weak LDP holds.

We recall that, by definition, the weak LDP holds if the lower LDP bound holds for all sets (as in the standard LDP) and the upper LDP bound holds for compact sets (page 7 of the book).

My question is: is there an example of a weak LDP on a regular Hausdorff topological space which admits several (lower semicontinuous) rate functions? Such an example would necessarily involve a family of measures that is not exponentially tight, since exponential tightness combined with the weak LDP implies the full LDP (Lemma 4.1.4 in the book), for which the rate function is always unique.