dnd 5e – Is the max HP reduction from the Diseased Giant Rat permanent?

The variant Diseased Giant Rat (MM pg. 327) has a feature whereby, when it hits with an attack and the target fails a saving throw, the target contracts a disease. The text says:

Until the disease is cured, the target can’t regain hit points except by magical means, and the target’s hit point maximum decreases by 3 (1d6) every 24 hours. (…)

Is this reduction permanent, or will the character regain their original maximum HP when cured of the disease?

My confusion stems from the fact that all information about the effect follows the “Until the disease is cured” clause, which could mean that the reduction only lasts until the target is cured. On the other hand, one could argue that only further reductions are mentioned and thus prevented by curing the target, and that the previous ones are separate effects that have already taken place.

computer architecture – K-Map Reduction Grouping Question

I have a simple question regarding reduction using K-Maps.

My professor gave this example:

[Image: example given by the professor]

While I somewhat understand that we can only group cells in quantities that are powers of 2, why did my professor group the K-map values as in the image above instead of as in the image below (my draft)?

[Image: the K-map grouping I drafted]

Why did my professor include elements 0 & 1? Why not just a group of 2 consisting of elements 4 & 5?
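For intuition, here is a hypothetical worked example (assuming a 3-variable map with variables $A, B, C$, where $A$ is the high-order bit of the minterm index): the pair $\{m_4, m_5\}$ covers $A = 1$, $B = 0$ with $C$ varying, so it simplifies to the two-literal term $A\overline{B}$, while the quad $\{m_0, m_1, m_4, m_5\}$ covers $B = 0$ with both $A$ and $C$ varying, so it simplifies to the single literal $\overline{B}$. Each doubling of a (power-of-two) group eliminates one more literal, which may be why the larger group was preferred.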

Thank you very much.

Reduction for the proof that COMBI $:= \{\langle G,k \rangle \mid G$ has a clique $\geq k$ or an independent set $\geq k\}$ is NP-complete

Given the language COMBI $:= \{\langle G,k \rangle \mid G$ has a clique $\geq k$ or an independent set $\geq k\}$, prove that COMBI is NP-complete.

I tried the reduction CLIQUE $\leq_p$ COMBI. I had two different ideas:

  • Let $f(\langle G,k\rangle) = \langle G,k\rangle$, so $\langle G,k\rangle \in$ CLIQUE $\Rightarrow f(\langle G,k\rangle) \in$ COMBI is trivial.
    However, I am puzzled by the converse direction, $f(\langle G,k\rangle) \in$ COMBI $\Rightarrow \langle G,k\rangle \in$ CLIQUE. What if $G$ contains only an independent set $\geq k$ and no clique $\geq k$? Then $f(\langle G,k\rangle)$ would be in COMBI, but $\langle G,k\rangle$ would not be in CLIQUE.

  • Let $f(\langle G,k\rangle) = \langle G',k\rangle$, where $G' = \overline{G} \cup G$ is the disjoint union (two separate components) and $\overline{G}$ is the complement graph. Then $\langle G,k\rangle \in$ CLIQUE $\Rightarrow f(\langle G,k\rangle) \in$ COMBI is trivial again, but the other direction seems flawed for the same reason as in my first attempt.

Where am I going wrong, or do I need a completely different reduction $f$?
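One hedged sketch of a possible fix (an assumption on my part, not necessarily the intended reduction): pad $G$ so that large independent sets cannot occur, at the price of changing $k$. On input $\langle G,k\rangle$ with $k \geq 1$ and $|V(G)| = n$, let $f(\langle G,k\rangle) = \langle G', k+n\rangle$, where $G'$ is the join of $G$ with a fresh clique on $n$ vertices (every new vertex adjacent to every other vertex). Then $\omega(G') = \omega(G) + n$, while $\alpha(G') = \max(\alpha(G), 1) \leq n < k+n$, so $G'$ has a clique or independent set of size $\geq k+n$ if and only if $G$ has a clique of size $\geq k$.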

Reduction on the GPU

This program adds together all the elements of an array with the function parallelReduce. It includes a test on an array initialized with all 1’s and a calculation of the speed-up. I only tried to optimize the kernel itself, not the data transfer (by initializing on the GPU, for example).

#include "cuda_runtime.h"
#include "device_launch_parameters.h"

#include <stdio.h>
#include <iostream>
#include <chrono>
#include <assert.h>

// To make error checking easier
inline cudaError_t checkCuda(cudaError_t result)
{
    if (result != cudaSuccess) {
        fprintf(stderr, "CUDA Runtime Error: %s\n", cudaGetErrorString(result));
        assert(result == cudaSuccess);
    }
    return result;
}

/*
Parallel reduce helper function. When run
with n/2 threads, changes array a to a' such that
the sum of the first n elements of a is equal to
the sum of the first ceil(n/2) elements of a'
*/
__global__ void reduce(int* a, int n)
{
    int i = threadIdx.x + blockDim.x * blockIdx.x;
    int stride = gridDim.x * blockDim.x;

    for (int j = i; j < n / 2; j += stride)
    {
        a[j] += a[n - 1 - j];
    }
}

/*
For an array a of length n, puts the sum of all elements in a[0]
*/
void parallelReduce(int* a, int n)
{
    // Get some information about the GPU
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    int multiProcessors = prop.multiProcessorCount;

    // Repeatedly use the helper function reduce to condense
    // a to its first element
    while (n > 1) {
        int threadsPerBlock = 256;
        int numberOfBlocks = 32 * multiProcessors;
        reduce<<<numberOfBlocks, threadsPerBlock>>>(a, n);
        checkCuda(cudaGetLastError());
        checkCuda(cudaDeviceSynchronize());
        n = (n + n % 2) / 2; // Rounds n/2 up.
    }
}

int main()
{
    // Initialize vector with N 1's.
    int N = (2 << 27) + 1; // 2^28 + 1 elements; memory runs out long before N would need a long or long long.
    size_t size = N * sizeof(int);
    int* h_a;
    checkCuda(cudaMallocHost(&h_a, size));

    for (int i = 0; i < N; i++) {
        h_a[i] = 1;
    }

    // Copy to device (can be done asynchronously to hide transfer time, but
    // that messes up the timing of the kernel).
    int* d_a;
    checkCuda(cudaMalloc(&d_a, size));
    checkCuda(cudaMemcpy(d_a, h_a, size, cudaMemcpyHostToDevice));

    // Calculate the sum sequentially and time it.
    auto tic = std::chrono::high_resolution_clock::now();
    int hostSolution = 0;
    for (int i = 0; i < N; i++)
    {
        hostSolution += h_a[i];
    }
    auto toc = std::chrono::high_resolution_clock::now();
    int duration = std::chrono::duration_cast<std::chrono::milliseconds>(toc - tic).count();

    std::cout << "The sequential function says the answer is " << hostSolution << " this took " << duration
        << " ms." << std::endl;

    // Kernel computation
    tic = std::chrono::high_resolution_clock::now();
    parallelReduce(d_a, N);
    toc = std::chrono::high_resolution_clock::now();
    int parallelDuration = std::chrono::duration_cast<std::chrono::milliseconds>(toc - tic).count();

    // Copy result back to host
    int solution;
    checkCuda(cudaMemcpy(&solution, d_a, sizeof(int), cudaMemcpyDeviceToHost));

    // Print the parallel result and speed up:
    std::cout << "The parallel function says the answer is " << solution << " this took " << parallelDuration
        << " ms." << std::endl;

    std::cout << "This means we have achieved a speed up of " << duration / parallelDuration << std::endl;
}
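
For comparison, a classic next step when optimizing such a kernel is a block-level tree reduction in shared memory, so that each pass reads the input once and each block writes a single partial sum. Below is a minimal sketch of that idea (reduceShared and partial are names invented here, not part of the code under review; the block size is assumed to be a power of 2):

__global__ void reduceShared(const int* a, int* partial, int n)
{
    extern __shared__ int sdata[];
    unsigned int tid = threadIdx.x;

    // Grid-stride loop: each thread first accumulates its share
    // of the input into a private register.
    int sum = 0;
    for (int j = blockIdx.x * blockDim.x + tid; j < n; j += blockDim.x * gridDim.x)
        sum += a[j];
    sdata[tid] = sum;
    __syncthreads();

    // Tree reduction within the block: halve the number of active
    // threads each step until sdata[0] holds the block's total.
    for (unsigned int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s)
            sdata[tid] += sdata[tid + s];
        __syncthreads();
    }
    if (tid == 0)
        partial[blockIdx.x] = sdata[0];
}

// Possible launch (the third launch parameter is the shared-memory size):
//   reduceShared<<<numberOfBlocks, threadsPerBlock,
//                  threadsPerBlock * sizeof(int)>>>(d_a, d_partial, N);
// followed by a second pass over the numberOfBlocks partial sums.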

algorithms – How does reduction of a decision problem work?

I am given the following problem description:

Given $l$ lists $L_1, L_2, \dots, L_l$, each containing $N$ bit vectors of $n$ bits each, we want to find tuples $(x_1, \dots, x_l)$ with $x_i$ in the corresponding $L_i$, such that $$\bigoplus_{i=1}^{l} x_i = S$$ for some target $n$-bit vector $S$. In other words, we want to find one element from each list such that the XOR of the chosen elements is the target $S$.

Now I am required to show that if we can solve the equation in the case $S = 0$, we can solve it for an arbitrary value at the same cost (with exactly the same parameters $N$, $n$, and $l$). I am given a hint to proceed by reducing an instance with $S \neq 0$ to an instance with $S = 0$.

What am I missing that I am not able to reduce it?
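A hedged sketch of the hinted direction (my reading of the hint, not guaranteed to be the intended one): absorb $S$ into a single list. Keep $L_1, \dots, L_{l-1}$ unchanged and replace $L_l$ by $L_l' = \{x \oplus S : x \in L_l\}$, which has the same $N$, $n$, and $l$. For any choice of elements, $x_1 \oplus \cdots \oplus x_{l-1} \oplus (x_l \oplus S) = 0$ if and only if $x_1 \oplus \cdots \oplus x_l = S$, so the solutions of the new $S = 0$ instance are in one-to-one correspondence with the solutions of the original instance.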

computability – Condition to prove $f$ is a reduction

A theorem says that if $f$ is a computable function and we can prove $x \in A \Leftrightarrow f(x) \in B$, then $f$ is a reduction and $A \leq_m B$.

But I’m confused about which of the following I should prove:

  1. $(x \in A \Rightarrow f(x) \in B) \land (f(x) \in B \Rightarrow x \in A)$

Or

  2. $(x \in A \Rightarrow f(x) \in B) \land (x \notin A \Rightarrow f(x) \notin B)$

and in which circumstances, because I’ve seen both in demonstrations. Or maybe they are equivalent?

For example, here is a solution to the empty-string problem. But with $f$ such that $f: \{M \text{ accepts } w\} \to \{M \text{ accepts } \epsilon\}$ and $f(\langle M, \epsilon \rangle) = \langle M \rangle$, we can prove 1. Is it valid?
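For what it’s worth, a short check suggests the two formulations are equivalent by contraposition: $f(x) \in B \Rightarrow x \in A$ is logically the same statement as $x \notin A \Rightarrow f(x) \notin B$, so proving either conjunction establishes $x \in A \Leftrightarrow f(x) \in B$.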

ag.algebraic geometry – Reduction theory of higher dimensional algebraic varieties

If $X$ is a nonsingular curve over a number field $K$, one can obtain several arithmetic models of $X$. Namely, we can construct an arithmetic surface $\mathcal{X} \to \operatorname{Spec} O_K$ such that $\mathcal{X}_0 \cong X$, with certain properties: minimality, regularity, etc. This theory is well known and beautifully explained in Liu’s book (Chapter 10). The arithmetic properties of $X$ are then reflected in the fibres $\mathcal{X}_{\mathfrak{p}}$.

Is this theory also well developed when $X$ is a variety of dimension $d > 1$? So far I have only seen papers treating very special cases: K3 surfaces, del Pezzo surfaces…

What is the theoretical obstruction to having a general theory like in the case of curves? At the very least, I would expect that something can be said about principally polarized varieties of general type.

stock photography – Noise reduction, upload to shutterstock declined

The image is a bit “jpeggy”: there is artefacting in the flat colour areas, which an automated AI review system might reject.
The image size is also a bit small for a modern stock library; it’s about 3 MP, compared to the 20 MP or more that they might expect to see these days.

I picked one area of the photo to concentrate on – plain sky with one small cloud – where it’s easiest to see what’s going on.
These are screenshots from Adobe CameraRAW, showing the controls on the right.


[Image: original pic as uploaded]

With added Sharpening – this really picks out the ‘jpeggy’ edges, something an AI might do to see how much compression was used.

Removing the sharpening & adding some smoothing instead has the unfortunate side-effect of also removing detail from the cloud.

You could perhaps try to balance the two together to retain some detail whilst flattening the broad coloured areas (this isn’t perfect, just a hint in the right direction).
You could also get in there with a more detailed brush approach to smooth & sharpen specific parts.

There’s also some chromatic aberration – a lens defect – which gives green & purple fringing at the image edges.
Again, PhotoRAW can have a go at trying to reduce this effect…


If your original photo was shot RAW rather than JPEG, and if it was shot at a higher resolution than 3 MP, then go back to that and see what it looks like. When exporting to JPEG, save at full size & 100% quality (or save as PNG) if the site will accept that file size (they ought to).

complexity theory – Reduction from $A$ to $B$ as execution of Turing machines

What confuses you is that the words in the languages are encodings of machines that simulate runs of other machines, but at the end of the day, these are just words. Specifically,
given an input $\langle M, x\rangle$ for the reduction, the reduction itself does not simulate the run of $M$ on $x$; the reduction only outputs $f(\langle M, x\rangle) = \langle M'\rangle$, which is an encoding of a machine. The machine $M'$, by definition, simulates the run of $M$ on $x$, given any input $w$ for $M'$. You can think of this reduction as a Python program $f$ that is given as input another Python program $M$ and an input $x$ for $M$. Then $f$ halts and outputs a Python program $M'$. Note that $M'$ may never halt on any input $w$, but this is okay, as $f$ never runs $M'$.
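
A minimal sketch of this idea in C++, mirroring the Python analogy above (f, simulate, and accept are hypothetical names, and the generated source is purely illustrative): the reduction is ordinary string manipulation that always halts, regardless of what the generated machine would do.

#include <string>
#include <iostream>

// The reduction f never runs M; it only builds and returns the
// *source code* of a new machine M'.
std::string f(const std::string& sourceOfM, const std::string& x)
{
    // M' ignores its own input w and simulates M on the fixed x.
    return
        "void M_prime(std::string w) {\n"
        "    // w is ignored entirely\n"
        "    simulate(" + sourceOfM + ", \"" + x + "\");  // may never halt\n"
        "    accept();\n"
        "}\n";
}

int main()
{
    // f itself always halts: it is nothing but string concatenation.
    std::cout << f("<code of M>", "<input x>") << std::endl;
}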

Please note that the first attached figure is irrelevant here, as it does not describe how the reduction operates (how $M_f$ computes $f(x)$, given $x$); it only describes how you can define a machine $M_A$ for $\overline{HP}$, assuming that you already have: 1) a machine $M_B$ for $L_2$, and 2) a machine $M_f$ that computes the reduction $f$.