## information theory – Why is the CNOT gate the only non-trivial gate for two input bits?

Just yesterday, I discovered the theory of quantum computing and I am studying it by myself.
While trying to understand the Toffoli gate on Wikipedia (https://en.wikipedia.org/wiki/Toffoli_gate),
I came across the claim that the CNOT gate is the only non-trivial gate for two input bits,
mapping 00 -> 00, 01 -> 01, 10 -> 11, 11 -> 10. At this point,

### Question 1

the question popped up: why not 00 -> 01, 01 -> 00, 10 -> 10, 11 -> 11? I think this map is represented by
$$\begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

which is different from $$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$

and is also unitary.

### Question 2.

Does the order of the basis vectors matter when presenting an operation as a matrix? If so, what is the rule?

### Question 3.

I am studying from this lecture note: https://homes.cs.washington.edu/~oskin/quantum-notes.pdf
On page 12 of the note, $$\frac{1}{\sqrt{2}}\left(a |0\rangle (|00\rangle + |11\rangle) + b |1\rangle (|00\rangle + |11\rangle)\right) = \frac{1}{\sqrt{2}} \begin{bmatrix} a \\ 0 \\ 0 \\ a \\ b \\ 0 \\ 0 \\ b \end{bmatrix}$$

but I think $$|0\rangle \in \mathbb{C}^2$$ and $$|00\rangle, |11\rangle \in \mathbb{C}^4$$, so the product of the two vectors makes no sense. Should I read it as a tensor product of the two vectors? If it is a tensor product, then it works out, and the order of the basis vectors looks important.

## bip9 version bits – Should block height or MTP or a mixture of both be used in a soft fork activation mechanism?

Using block heights for the start and timeout parameters has the advantage of giving miners a known number of signaling periods. Loss of hashpower doesn’t reduce the number of retarget periods available for activation. Especially for an activation mechanism over a shorter time horizon (e.g. the Speedy Trial proposal) it may be important to ensure miners have the maximum number of signaling periods. Block heights are also arguably easier to communicate and easier to reason about as blockchain developers are used to working with them.

Using MTP (median time past) has the advantage of being able to schedule activation at a specific time of day, to avoid it occurring in the middle of the night for some region of the world. Concerns about hashrate decreases or increases affecting the number of signaling periods can be mitigated by choosing MTPs that fall in the middle of a signaling period. Short-duration proposals like Speedy Trial are also less sensitive to hashrate drift changing the number of periods (a large hashrate decrease would be required to lose a signaling period).

A concern with MTP is that a coalition of miners could hold the nTime of their mined blocks down to MTP + 1 to prevent an MTP start time from being reached at the expected real-world time. This concern seems minor, as doing so would also distort difficulty adjustments and would require broad participation from miners to keep the MTP down.

It is also debated whether using block height consistently, or a mixture of block heights and MTP, is preferable for making the implementation and release of an alternative, competing (compatible or incompatible) activation mechanism (e.g. a UASF release) more difficult, or for avoiding a scalp for marketing purposes.

AJ Towns explains a disadvantage of using MTP for the minimum activation point here: if the activation time falls near a difficulty retarget block, activation could happen the next day or in two weeks, which presents some communication challenges.

The height at which you transition from LOCKED_IN to ACTIVE must be fully determined as soon as you transition from STARTED to LOCKED_IN. That way the entire LOCKED_IN period has to be re-orged if you want to steal funds protected by both nLocktime and the new rules.

In summary, there appears to be consensus that block heights should be used exclusively in activation mechanisms for future soft forks but it is less clear whether there is consensus to use them exclusively for the proposed Taproot activation mechanism, Speedy Trial.

For more details on the timewarp attack on MTP see this from Mark Friedenbach and this from Andrew Chow.

This answer was taken from comments on GitHub and the mailing list from Andrew Chow, AJ Towns, Jeremy Rubin, Sjors Provoost, Antoine Riard and David Harding.

## coding theory – Prove that a probabilistic adaptive algorithm that can explore only k bits of an n-bit input can't distinguish a k-independent distribution from uniform

Definition: A distribution $$D$$ on $$\{0,1\}^n$$ is called k-independent if for every random variable $$X$$ with distribution $$D$$ and for all $$i_1, \dots, i_k \in \{1,2,\dots,n\}$$, the random variable $$X_{i_1,\dots,i_k}$$ has distribution $$U_k$$ (uniform).

Problem: Consider a probabilistic algorithm $$A$$ that has oracle access to an input of length $$n$$. That is, $$A$$ can adaptively request $$k$$ bits of the input (in more detail, $$A$$ can request one bit, then, based on the oracle's answer, request another bit, and repeat this no more than $$k$$ times).
Prove that if $$D$$ is a k-independent distribution then $$\Pr_{x \sim D}(A(x) = 1) = \Pr_{x \sim U_n}(A(x) = 1)$$.

First of all, I don't understand how adaptivity could potentially help the algorithm distinguish the uniform distribution from a k-independent one. The second question is the problem itself.


## computer architecture – How to calculate the physical address, tag bits, block index, cache index and block offset

I want to know how to calculate these for a byte-addressable computer system with a memory of $$2^{35}$$ bytes. If the number of lines in the cache is 4096, the memory block size is 16 KB, and the mapping is direct mapping, calculate the physical address, tag bits, block index, cache index and block offset.

## Hi, why did I roll in again after rolling and winning 099-999 bits? It says you are suspected of cheating. What should I do to fix the problem? [closed]

why did I roll in again after rolling and winning 099-999 bits? It says you are suspected of cheating. What should I do to fix the problem?

## bit manipulation – Shift Operation with Shifted Bits Greater Than or Equal To the Operand’s

In Computer Systems: A Programmer's Perspective (3rd Edition), the author says, "For a data type consisting of `w` bits, if we shift by some value `k (k ≥ w)`, on many machines we eventually shift the data type by `k mod w` bits." However, when I write the following C code and run it on my laptop (x86-64, Windows 10):

```c
#include <stdio.h>

int main() {
    unsigned char c = 0b01100010;
    c <<= 8;
    printf("%x", c);
    return 0;
}
```

The result is `0`, which means it really did shift by 8 bits. But if `w` is 8 and `k` is 8, the machine should shift by nothing (since 8 mod 8 == 0), and I would expect to get the original value of `c` back. Why did this happen?


## math – Is it possible to store N bits of unique combinations in N-1 bits? If not, why does MD5 get reprimanded for collisions?

Regarding cryptography and the issue of collisions, I posed the question of whether it is ever possible to store every single possible combination of a bit array of a particular size in a bit array at least one bit smaller, with apodictic certainty that no collision would occur.

The answer given to me by one fellow was no, and he used the following example:

```
Given 4 bits
0000
It has 16 possible combinations
Try storing 16 possible combinations in 3 bits:
000
```

While this was seemingly obvious at the small scale, I wonder whether it would remain true if you scaled up, given that more bits offer far more flexibility and options (and yet, conversely, you require more combinations to account for). I have a hard time imagining it NOT remaining true, but perhaps there is something I am overlooking.

Working under the presumption that this is not possible: why is it that MD5 is reprimanded for generating collisions
(https://en.wikipedia.org/wiki/MD5#Collision_vulnerabilities)
when, frankly, given this principle, literally no hash should be immune to the problem?

## math – Is it possible to store N bits of unique combinations, in N-1 bits, guaranteeing with apodictic certainty that no collisions would occur?

Regarding cryptography and the issue of collisions, I posed the question of whether it is ever possible to store every single possible combination of a bit array of a particular size in a bit array at least one bit smaller, with apodictic certainty that no collision would occur.

The answer given to me by one fellow was no, and he used the following example:

```
Given 4 bits
0000
It has 16 possible combinations
Try storing 16 possible combinations in 3 bits:
000
```

As an aside, I imagined that if "qubits" ever took off, this could perhaps be done.

In any case, while this was seemingly obvious at the small scale, I wonder whether it would remain true if you scaled up, given that more bits offer far more flexibility and options (and yet, conversely, you require more combinations to account for). I have a hard time imagining it NOT remaining true, but perhaps there is something I am overlooking.

So again:

Is it possible to store N bits of unique combinations in N-1 bits? "Unique" meaning that you guarantee with apodictic certainty that no collision is possible?

Thanks.