SQL Server 2016 – Optimize XOR search in an int list

I have a large table (TableA) with about 120 million records. This table has a single primary key column (type: int).

Another large table (TableB) contains the same ID column along with some other fields. There is currently no foreign key constraint or unique index defined on this table (although I definitely want to consider adding a primary key constraint here).

The main task is to determine which of the ID values in TableA are NOT present in TableB.

SELECT * FROM TableA WHERE Id NOT IN (SELECT Id FROM TableB)  

This task currently runs on our database server for about 4 minutes. Now I wonder whether there is a way to improve it. As far as I know, the lists may not be sorted, BUT it is guaranteed that the ID is unique in both tables. Is there a method I can use to sort these fields first (or, better still, have a sorted index) and then simply compare those sorted lists, or is there another approach to this?
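
What you describe is essentially a merge between two sorted, unique ID lists, which is what the engine can do as a merge anti-join once both Id columns are backed by unique indexes; a NOT EXISTS (or EXCEPT / LEFT JOIN ... IS NULL) formulation is usually at least as good as NOT IN and avoids NOT IN's surprising NULL semantics. Purely to illustrate the sorted-comparison idea, here is a small Python sketch of a two-pointer merge over two sorted, unique ID sequences; it illustrates the concept only and is not a SQL Server tuning recommendation.

def ids_missing_from_b(sorted_a, sorted_b):
    """Yield IDs present in sorted_a but absent from sorted_b (both sorted and unique)."""
    it_b = iter(sorted_b)
    b = next(it_b, None)
    for a in sorted_a:
        # advance the B pointer until it catches up with the current A value
        while b is not None and b < a:
            b = next(it_b, None)
        if b is None or b > a:     # a never appears in B
            yield a

print(list(ids_missing_from_b([1, 2, 3, 5, 8], [2, 3, 8, 9])))   # [1, 5]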

Probability – XOR combination of two different offsets from Pi for cryptographically random numbers

With that in mind, I had another question.

If two random strings of the same length taken from the digits of Pi at different offsets were XORed together, would the result be a cryptographically secure random number suitable for use as a one-time pad? If so, would that mean that, over a secure, low-bandwidth channel, you could transmit just three numbers (two offsets and one length) to provide a pad of any length?
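
Just to make the mechanics concrete, here is my own sketch of the proposed construction, not an endorsement of its security. It assumes the third-party mpmath library for the digits of Pi, and it combines two equal-length digit strings taken at different offsets. Note that XORing decimal digits can produce values above 9, which is one reason practical pad constructions work on bits or use addition modulo 10 instead.

from mpmath import mp, nstr

def pi_digits(offset, length):
    # Decimal digits of pi starting at `offset` (offset 0 is the leading 3).
    precision = offset + length + 10
    mp.dps = precision
    digits = nstr(mp.pi, precision).replace('.', '')
    return digits[offset:offset + length]

def combine(offset_a, offset_b, length):
    a = pi_digits(offset_a, length)
    b = pi_digits(offset_b, length)
    # Digit-wise XOR of the two streams; results may exceed 9.
    return [int(x) ^ int(y) for x, y in zip(a, b)]

print(combine(100, 5000, 16))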

Bit manipulation – If the XOR of n different numbers is ANDed with one of the numbers, the result is zero

For example, x2 = 0 doesn't work: then xoredResult & x2 = 0 all the time, no matter what else the set of numbers contains.

Or, as another example, the set { 1, 2, 3 } XORs to zero (so ANDing that with one of its members, or with anything else, gives zero), yet it does not contain any repeated elements.
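
To make the title's claim concrete, here is a small brute-force checker in Python (my own sketch, not from the original post): it draws random sets of distinct numbers, XORs them all together, ANDs the result with one member of the set, and records any set where the result is not zero.

import random

def xor_and_is_zero(nums, member):
    xored = 0
    for n in nums:
        xored ^= n
    return (xored & member) == 0

random.seed(0)
counterexamples = []
for _ in range(10000):
    nums = random.sample(range(16), k=random.randint(2, 5))   # distinct numbers
    member = random.choice(nums)
    if not xor_and_is_zero(nums, member):
        counterexamples.append((sorted(nums), member))

# For instance, for the set {1, 2} and member 1: (1 ^ 2) & 1 == 3 & 1 == 1, not zero.
print(counterexamples[:3])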

Algorithms – Minimum XOR for queries

In an interview I was asked the following question.

Given an array A with N elements and an array B with M elements, for each B[x] return the A[i] for which the XOR of A[i] and B[x] is minimal.

For example:

Input:

A = [3, 2, 9, 6, 1]
B = [4, 8, 5, 9]

Output:

[6, 9, 6, 9]

For example, when 4 is XORed with each element of A, the minimum occurs at A[i] = 6:

4 ^ 3 = 7
4 ^ 2 = 6
4 ^ 9 = 13
4 ^ 6 = 2
4 ^ 1 = 5

Here is my brute force solution in Python.

def get_min_xor(A, B):
    ans = []

    for val_b in B:
        min_xor = val_b ^ A[0]

        for val_a in A:
            min_xor = min(min_xor, val_b ^ val_a)
            # print("{} ^ {} = {}".format(val_b, val_a, val_b ^ val_a))

        # min_xor ^ val_b recovers the element of A that produced the minimal XOR
        ans.append(min_xor ^ val_b)

    return ans

Any ideas on how this could be solved in sub-O(M·N) time complexity?

I had the following idea.
I would sort the array A in O(N log N) time, then for each element in B I would try to find its place in A using binary search. Say B[x] would fit at the i-th position in A; then I would check the minimum of B[x] ^ A[i-1] and B[x] ^ A[i+1]. However, this approach does not work in all cases, for example for the following input:

A = [1, 2, 3]
B = [2, 5, 8]

Output:

[2, 1, 1]
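
One standard way to beat O(M·N) here is a binary trie over the bits of the elements of A: insert each A[i] in O(log U) time, then for each B[x] walk down from the most significant bit, always preferring the child with the same bit value (which contributes 0 to the XOR) and falling back to the opposite child only when necessary. Here is a rough Python sketch of that idea (the names and the 30-bit assumption are mine):

BITS = 30   # assume values fit in 31 bits

def build_trie(values):
    root = {}
    for v in values:
        node = root
        for i in range(BITS, -1, -1):
            bit = (v >> i) & 1
            node = node.setdefault(bit, {})
        node['value'] = v            # remember the original number at the leaf
    return root

def min_xor_element(root, q):
    node = root
    for i in range(BITS, -1, -1):
        bit = (q >> i) & 1
        # Prefer the child with the same bit (XOR contribution 0 at this position),
        # otherwise fall back to the opposite child.
        node = node[bit] if bit in node else node[bit ^ 1]
    return node['value']

A = [3, 2, 9, 6, 1]
B = [4, 8, 5, 9]
root = build_trie(A)
print([min_xor_element(root, b) for b in B])   # [6, 9, 6, 9], matching the example above

This runs in O((N + M) · log U) overall, and unlike the sorted-neighbours idea it is correct, because the greedy bit-by-bit choice is exactly what minimizes the XOR.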

Algorithms – Find the number of adjacent subsequences with the same XOR

Given a sequence $A_1, A_2, A_3, \ldots, A_n$, find the number of triples $(i, j, k)$ such that $1 \le i < j \le k \le n$ and $A_i \oplus A_{i+1} \oplus \cdots \oplus A_{j-1} = A_j \oplus A_{j+1} \oplus \cdots \oplus A_k$, where $\oplus$ is the bitwise XOR operation.

I've tried to solve it with dynamic programming similar to https://www.geeksforgeeks.org/count-number-of-subsets-having-a-particular-xor-value/, but its time complexity is $O(nm)$, where $m$ is the maximum element in the array. Can we do better than $O(n^2)$ or $O(nm)$?
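
For reference, this particular counting problem has a linear-time approach based on prefix XORs: with $P_0 = 0$ and $P_t = A_1 \oplus \cdots \oplus A_t$, the condition is equivalent to $P_{i-1} = P_k$, and every such pair contributes $k - i$ valid choices of $j$. A Python sketch of that idea (my own code, hypothetical names):

def count_equal_xor_triples(a):
    count = {0: 1}       # how many prefix positions carry each prefix-XOR value
    index_sum = {0: 0}   # sum of those positions
    prefix = 0
    total = 0
    for k, value in enumerate(a, start=1):
        prefix ^= value
        if prefix in count:
            # every earlier position p with the same prefix contributes k - p - 1 triples
            total += count[prefix] * (k - 1) - index_sum[prefix]
        count[prefix] = count.get(prefix, 0) + 1
        index_sum[prefix] = index_sum.get(prefix, 0) + k
    return total

print(count_equal_xor_triples([2, 3, 1, 6, 7]))   # 4: (1,2,3), (1,3,3), (3,4,5), (3,5,5)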

C++ – Maximum XOR of two elements in an array

I was looking for an efficient algorithm to calculate the maximum possible XOR of two numbers in an array …
I found two efficient approaches, one of which uses a trie, and an algorithm I found on LeetCode that promises results in O(n·log(U)), where U is the number of bits in the array elements. This algorithm works fine; however, I have trouble understanding how it works.

Please explain how it works.

Here is the code snippet

int findMaximumXOR(vector<int>& nums) {
    int mask = 0;
    int test_max = 0;
    int max = 0;
    unordered_set<int> s;
    for (int i = 30; i >= 0; --i) {
        mask |= 1 << i;                  // consider one more high-order bit

        printf("\n" BYTE_TO_BINARY_PATTERN, BYTE_TO_BINARY(mask));

        // keep only the masked prefix of every number
        for (int num : nums) {
            s.insert(num & mask);
        }

        // tentatively set the current bit in the answer
        test_max = max | 1 << i;
        for (int s_val : s) {
            // two prefixes a and b satisfy a ^ b == test_max
            // exactly when a ^ test_max is also in the set
            if (s.find(s_val ^ test_max) != s.end()) {
                max = test_max;
                break;
            }
        }
        s.clear();
    }
    return max;
}

** BYTE_TO_BINARY is a macro I have defined for printing the binary representation of a number.

** I was not sure whether this type of question would be better suited to Stack Overflow or Code Review … if it's not for Code Review, let me know.
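
In case it helps to see the same idea without the C++ details, here is my own Python re-expression of the algorithm (not the original poster's code): it fixes the answer one bit at a time from the most significant bit down, keeps only the masked prefixes of all numbers in a set, and uses the identity a ^ b == c exactly when b == a ^ c to test whether some pair of prefixes can realize the tentative answer.

def find_maximum_xor(nums):
    mask = 0    # which high-order bits are currently being considered
    best = 0    # best achievable XOR restricted to those bits
    for i in range(30, -1, -1):
        mask |= 1 << i
        prefixes = {num & mask for num in nums}
        candidate = best | (1 << i)               # try to set this bit in the answer
        # a ^ b == candidate for some prefixes a, b  iff  a ^ candidate is also a prefix
        if any((p ^ candidate) in prefixes for p in prefixes):
            best = candidate
    return best

print(find_maximum_xor([3, 10, 5, 25, 2, 8]))   # 28, i.e. 5 ^ 25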

Programming Challenge – LeetCode: Maximum XOR of two numbers in an array (C#)

https://leetcode.com/explore/learn/card/trie/149/practical-application-ii/1057/

Check how clear my code is and comment on the performance.

Given a non-empty array of numbers, a0, a1, a2, …, an-1, where 0 ≤ ai < 2^31.

Find the maximum result of ai XOR aj, where 0 ≤ i, j <n.

Could you do this in O (n) runtime?

Example:

Input: [3, 10, 5, 25, 2, 8]

Output: 28

Explanation: The maximum result is 5 ^ 25 = 28.

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace TrieQuestions
{
    /// <summary>
    /// https://leetcode.com/explore/learn/card/trie/149/practical-application-ii/1057/
    /// </summary>
    [TestClass]
    public class FindMaxXorInArray
    {
        [TestMethod]
        public void XorTrieTreeTest()
        {
            int[] nums = { 3, 10, 5, 25, 2, 8 };
            Assert.AreEqual(28, FindMaximumXOR(nums));
        }

        // XOR means 0 ^ 0 == 0 and 1 ^ 1 == 0,
        // so to maximize the XOR we want as many opposite bits as possible.
        public int FindMaximumXOR(int[] nums)
        {
            XorTrieTree tree = new XorTrieTree();
            tree.Insert(nums);
            return tree.GetMax(nums);
        }
    }

    public class XorTrieTree
    {
        private XorTreeNode _root;

        public XorTrieTree()
        {
            _root = new XorTreeNode();
        }

        /// <summary>
        /// For each of the numbers we walk all 32 bits from the most significant one down,
        /// using a right shift and a bitwise AND to see whether each bit is set,
        /// and build the corresponding path in the trie.
        /// </summary>
        /// <param name="nums"></param>
        public void Insert(int[] nums)
        {
            foreach (var num in nums)
            {
                XorTreeNode cur = _root;
                for (int i = 31; i >= 0; i--)
                {
                    int bit = (num >> i) & 1;
                    if (cur.Children[bit] == null)
                    {
                        cur.Children[bit] = new XorTreeNode();
                    }
                    cur = cur.Children[bit];
                }
            }
        }

        /// <summary>
        /// For each of the numbers we check which bits are set.
        /// If a child with the opposite (NOT) bit exists, we add that bit to the temporary
        /// xorValue variable, because the prefixes of the two numbers XOR to 1 at this
        /// position, and move on to the child node with the NOT bit;
        /// otherwise we follow the child with the same bit.
        /// At the end we take the max of the current max and xorValue.
        /// </summary>
        /// <param name="nums"></param>
        /// <returns></returns>
        public int GetMax(int[] nums)
        {
            int max = int.MinValue;
            foreach (var num in nums)
            {
                XorTreeNode cur = _root;
                int xorValue = 0;
                for (int i = 31; i >= 0; i--)
                {
                    int bit = (num >> i) & 1;
                    if (cur.Children[bit == 1 ? 0 : 1] != null)
                    {
                        xorValue += (1 << i);
                        cur = cur.Children[bit == 1 ? 0 : 1];
                    }
                    else
                    {
                        cur = cur.Children[bit];
                    }
                }
                max = Math.Max(xorValue, max);
            }

            return max;
        }
    }

    // Each node has two children, one for bit 0 and one for bit 1.
    public class XorTreeNode
    {
        public int Val;
        public XorTreeNode[] Children;

        public XorTreeNode()
        {
            Children = new XorTreeNode[2];
        }
    }
}

Encryption – XOR SHA hash and base64

What does it mean to XOR a SHA-256 hash (of an input string) with a base64 value?
I'm working through a scenario that gives me an input string and a base64 value, and I need to produce an output string by doing this. As I understand it, a hash function produces a byte array, and a base64 value can also give me a byte array (in C#, via Convert.FromBase64String()). Can the two be XORed and the result converted to a UTF-8 string?

Doesn't UTF-8 require the byte array to be in a specific format? I may be missing something basic here.
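
As a concrete illustration of the byte-level steps, here is my own Python sketch with made-up inputs (the question itself mentions C#'s Convert.FromBase64String): hash the input string, decode the base64 value, XOR the two byte arrays, then re-encode the result. The last step is the catch: the XOR result is arbitrary binary data, and most random byte sequences are not valid UTF-8, so it is normally encoded as hex or base64 rather than decoded as a UTF-8 string.

import base64
import hashlib

input_string = "hello"                                                       # hypothetical input
b64_value = base64.b64encode(b"0123456789abcdef0123456789abcdef").decode()   # hypothetical value

hash_bytes = hashlib.sha256(input_string.encode("utf-8")).digest()   # 32 raw bytes
other_bytes = base64.b64decode(b64_value)                            # raw bytes as well

# XOR byte by byte; here both happen to be 32 bytes, otherwise you must decide
# how to handle the length mismatch (truncate, repeat the shorter one, ...).
xored = bytes(a ^ b for a, b in zip(hash_bytes, other_bytes))

# Encode the binary result instead of calling xored.decode("utf-8"),
# which would raise UnicodeDecodeError for most inputs.
print(base64.b64encode(xored).decode())
print(xored.hex())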