Passwords – Write a simple SHA256 Salted Hash Generator

I saw a video that describes how to write a simple salted-hash program in C#. Here.

Below is the code they wrote (lightly edited to run as a console application):

using System;
using System.Text;
using System.Security.Cryptography;

namespace MyApplication
{
    class Program
    {
        const int SALT_SIZE = 10;

        static void Main(string[] args)
        {                                
            string salt = CreateSalt();
            string password = "securePassword";
            string hashedPassword = GenerateSHA256Hash(password, salt);

            Console.WriteLine("salt: " + salt);
            Console.WriteLine("hashedPassword: " + hashedPassword);                                   
        }

        private static string CreateSalt()
        {
            var rng = new RNGCryptoServiceProvider();
            var buffer = new byte[SALT_SIZE];
            rng.GetBytes(buffer);

            return Convert.ToBase64String(buffer);
        }

        private static string GenerateSHA256Hash(string input, string salt)
        {
            byte[] bytes = Encoding.UTF8.GetBytes(input + salt);
            var hashManager = new SHA256Managed();
            byte[] hash = hashManager.ComputeHash(bytes);

            return ByteArrayToHexString(hash);
        }

        private static string ByteArrayToHexString(byte[] bytes)
        {
            StringBuilder sb = new StringBuilder(bytes.Length * 2);

            foreach (byte b in bytes)
                sb.AppendFormat("{0:x2}", b);

            return sb.ToString();
        }
    }
}

From what I've read online, salted hashes are one of the safest ways to store passwords.

However, I have a few questions:

  1. I have read that hashing a salted password once is not enough. You have to iterate the hash thousands of times to make it expensive for attackers to brute-force the passwords.

Would it be safer to do something like the following, and what would be a good number of hashing iterations?

var hash = hashManager.ComputeHash(bytes);

for (int i = 0; i < 10000; i++)
    hash = hashManager.ComputeHash(hash);

I also read that you need to include the salt in every iteration, but I don't understand how to add it properly (see the sketch after this list).

  2. For the salt buffer size, is 10 a good number to use, or would a higher or lower number (e.g. 16) be safer?

  3. I take this with a grain of salt, but I've read that SHA256 is no longer a safe choice for password hashing because it is too fast, which means brute-force attacks can be carried out more quickly.

Does this mean that fast algorithms like SHA are obsolete for this purpose and should be replaced by slower ones like bcrypt?

  4. I'm assuming that hex strings are a safe way to store salted hashes. Is that correct?

  5. After applying all the changes from the questions above (if any), would the code be safe enough to use in a production environment?
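
For reference, .NET ships a standard key-stretching primitive, PBKDF2, exposed as Rfc2898DeriveBytes; it feeds the salt into every iteration internally, so there is nothing to concatenate by hand. Below is a minimal sketch of how the program above could use it (the salt size, iteration count, and output length are illustrative assumptions, not vetted recommendations):

using System;
using System.Security.Cryptography;

namespace MyApplication
{
    class Program
    {
        const int SALT_SIZE = 16;      // assumed value, for illustration only
        const int ITERATIONS = 100000; // assumed value, for illustration only

        static void Main(string[] args)
        {
            byte[] salt = CreateSalt();
            byte[] hash = HashPassword("securePassword", salt);

            Console.WriteLine("salt: " + Convert.ToBase64String(salt));
            Console.WriteLine("hash: " + Convert.ToBase64String(hash));
        }

        private static byte[] CreateSalt()
        {
            var buffer = new byte[SALT_SIZE];
            using (var rng = RandomNumberGenerator.Create())
                rng.GetBytes(buffer);
            return buffer;
        }

        private static byte[] HashPassword(string password, byte[] salt)
        {
            // PBKDF2 re-mixes the salt into every iteration internally.
            using (var kdf = new Rfc2898DeriveBytes(password, salt, ITERATIONS, HashAlgorithmName.SHA256))
                return kdf.GetBytes(32); // 256-bit derived key
        }
    }
}

To verify a login attempt, recompute the derived key from the stored salt and iteration count and compare it to the stored hash; whether the bytes are stored as hex or Base64 makes no security difference.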

Go – infinite generator with eviction

I have created a structure that cycles through a list and returns the next item each time the Get() method is called.
It also allows items to be removed from the list by calling the Evict() method, and it is safe for concurrent use.

I would appreciate feedback: what is okay, what is not, what could be improved or changed.

package generators

import (
    "errors"
)

var (
    ErrEmpty    = errors.New("no elements were given")
    ErrConflict = errors.New("found the same element twice")
)

// InfiniteWithEvict will cycle through a list of elements until cancelled,
// allowing elements to be removed if needed. It is safe for concurrent use.
// It needs to be created with NewInfiniteWithEvict(), started with
// InfiniteWithEvict.Start(), and cancelled by calling the function
// returned by InfiniteWithEvict.Start().
type InfiniteWithEvict struct {
    elems   []interface{}
    evictC  chan interface{}
    getC    chan interface{}
    indices map[interface{}]int
}

// NewInfiniteWithEvict returns an InfiniteWithEvict object.
// The contents of the slice elems will be modified by calls to the Evict() method.
func NewInfiniteWithEvict(elems []interface{}) (*InfiniteWithEvict, error) {
    if len(elems) == 0 {
        return nil, ErrEmpty
    }
    m := make(map[interface{}]int)
    for idx, elem := range elems {
        if _, ok := m[elem]; ok {
            return nil, ErrConflict
        }
        m[elem] = idx
    }
    }
    return &InfiniteWithEvict{
        elems:   elems,
        evictC:  make(chan interface{}),
        getC:    make(chan interface{}),
        indices: m,
    }, nil
}

// Start starts the generator.
// Before calling start, calls to Get() and Evict() will block.
// It returns a function that must be called when you are sure that
// no more calls to Get() or Evict() will be made.
func (p *InfiniteWithEvict) Start() func() {
    cancel := make(chan struct{})
    go p.loop(cancel)
    return func() {
        cancel <- struct{}{}
    }
}

// Get returns the next element of the list.
func (p *InfiniteWithEvict) Get() interface{} {
    return <-p.getC
}

// Evict removes an element from the list.
func (p *InfiniteWithEvict) Evict(v interface{}) {
    p.evictC <- v
}

func (p *InfiniteWithEvict) loop(cancel <-chan struct{}) {
    i := 0
    for {
        select {
        case <-cancel:
            close(p.getC)
            return
        case v := <-p.evictC:
            index, ok := p.indices[v]
            if !ok {
                continue
            }
            p.elems[index] = p.elems[len(p.elems)-1]
            p.elems = p.elems[:len(p.elems)-1]
            delete(p.indices, v)
            if len(p.elems) == 0 {
                continue
            }
            // Guard against evicting the tail element: index would then
            // point one past the end of the shrunken slice.
            if index < len(p.elems) {
                p.indices[p.elems[index]] = index
            }
            if i >= len(p.elems) {
                i = 0
            }
        case p.getC <- func() interface{} {
            if len(p.elems) == 0 {
                return nil
            }
            return p.elems[i]
        }():
            if i == len(p.elems)-1 {
                i = 0
            } else {
                i = i + 1
            }
        }
    }
}
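
To make the intended call sequence concrete, here is a minimal usage sketch against the API above (assuming the package can be imported as "generators"; the element values are arbitrary):

package main

import (
    "fmt"

    "generators" // assumed import path for the package under review
)

func main() {
    gen, err := generators.NewInfiniteWithEvict([]interface{}{"a", "b", "c"})
    if err != nil {
        panic(err)
    }
    cancel := gen.Start()
    defer cancel()

    fmt.Println(gen.Get()) // a
    fmt.Println(gen.Get()) // b
    gen.Evict("c")         // "c" will no longer be served
    fmt.Println(gen.Get()) // wraps around to a
}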

Python – Create a generator object with image augmentation to train convolutional neural networks with Keras

I am currently teaching myself Python generator objects, and I use one to generate training data, perform augmentation on the fly, and then feed the result into a convolutional neural network.

Could someone please help me review my code? It works fine, but I want to make it more efficient and better structured. Also, how can I verify that using the generator consumes less memory than simply passing a regular numpy array to the model? (A measurement sketch follows below.)

Thanks a lot!
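
One way to answer the memory question is Python's built-in tracemalloc module, which reports peak allocations. The following is a minimal measurement sketch, not part of the original code; it assumes the Generator class defined below plus the X_train, y_train, width and height variables from the __main__ block:

import tracemalloc

import cv2
import numpy as np

def peak_mib(fn):
    '''Run fn() and return the peak memory allocated during the call, in MiB.'''
    tracemalloc.start()
    fn()
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak / 2**20

def consume_generator():
    # Pull a handful of examples lazily, one image in memory at a time.
    gen = Generator(X_train, y_train, width, height).gen()
    for _ in range(10):
        next(gen)

def build_full_array():
    # Materialise the whole dataset up front, for comparison.
    return np.array([cv2.imread(f, 0).reshape(width, height, 1) for f in X_train])

print('generator peak MiB: ', peak_mib(consume_generator))
print('full array peak MiB:', peak_mib(build_full_array))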

from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten
import pandas as pd
import os
import cv2
import numpy as np
from sklearn.model_selection import train_test_split
import tensorflow as tf
from augment import ImageAugment

class Generator():
    def __init__(self, feat, labels, width, height):
        self.feat = feat
        self.labels = labels
        self.width = width
        self.height = height

    def gen(self):
        '''
        Yields generator object for training or evaluation without batching
        Yields:
            im: np.array of (1,width,height,1) of images
            label: np.array of one-hot vector of label (1,num_labels)
        '''
        feat = self.feat
        labels = self.labels
        width = self.width
        height = self.height
        i=0
        while (True):
            im = cv2.imread(feat[i],0)
            im = im.reshape(width,height,1)
            im = np.expand_dims(im,axis=0)
            label = np.expand_dims(labels[i],axis=0)
            yield im,label
            i+=1

            if i>=len(feat):
                i=0


    def gen_test(self):
        '''
        Yields generator object to do prediction
        Yields:
            im: np.array of (1,width,height,1) of images
        '''
        feat = self.feat
        width = self.width
        height = self.height
        i=0
        while (True):
            im = cv2.imread(feat[i],0)
            im = im.reshape(width,height,1)
            im = np.expand_dims(im,axis=0)
            yield im
            i+=1


    def gen_batching(self, batch_size):
        '''
        Yields generator object with batching of batch_size
        Args:
            batch_size (int): batch_size
        Yields:
            feat_batch: np.array of (batch_size,width,height,1) of images
            label_batch: np.array of (batch_size,num_labels)
        '''
        feat = self.feat
        labels = self.labels
        width = self.width
        height = self.height
        num_examples = len(feat)
        num_batch = num_examples/batch_size
        X = []
        for n in range(num_examples):
            im = cv2.imread(feat[n],0)
            try:
                im = im.reshape(width,height,1)
            except:
                print('Error on this image: ', feat[n])
            X.append(im)
        X = np.array(X)

        feat_batch = np.zeros((batch_size,width,height,1))
        label_batch = np.zeros((batch_size,labels.shape[1]))
        while(True):
            for i in range(batch_size):
                index = np.random.randint(X.shape[0],size=1)[0]  # pick a random example (with replacement)
                feat_batch[i] = X[index]
                label_batch[i] = labels[index]
            yield feat_batch,label_batch

    # def on_next(self):
    #     '''
    #     Advance to the next generator object
    #     '''
    #     gen_obj = self.gen_test()
    #     return next(gen_obj)
    #
    # def gen_show(self, pred):
    #     '''
    #     Show the image generator object
    #     '''
    #     i=0
    #     while(True):
    #         image = self.on_next()
    #         image = np.squeeze(image,axis=0)
    #         cv2.imshow('image', image)
    #         cv2.waitKey(0)
    #         i+=1

    def gen_augment(self,batch_size,augment):
        '''
        Yields generator object with batching of batch_size and augmentation.
        The number of examples for 1 batch will be multiplied based on the number of augmentation

        augment represents [speckle, gaussian, poisson]: the augmentation will be done for each element of the augment list that is 1.
        For example, augment = [1,1,0] corresponds to adding speckle noise and gaussian noise;
        if batch_size = 100, the number of examples in each batch will become 300

        Args:
            batch_size (int): batch_size
            augment (list): list that defines what kind of augmentation we want to do
        Yields:
            feat_batch: np.array of (batch_size*n_augment,width,height,1) of images
            label_batch: np.array of (batch_size*n_augment,num_labels)
        '''
        feat = self.feat
        labels = self.labels
        width = self.width
        height = self.height

        num_examples = len(feat)
        num_batch = num_examples/batch_size
        X = []
        for n in range(num_examples):
            im = cv2.imread(feat[n],0)
            try:
                im = im.reshape(width,height,1)
            except:
                print('Error on this image: ', feat[n])
            X.append(im)
        X = np.array(X)

        n_augment = augment.count(1)
        print('Number of augmentations: ', n_augment)
        feat_batch = np.zeros(((n_augment+1)*batch_size,width,height,1))
        label_batch = np.zeros(((n_augment+1)*batch_size,labels.shape[1]))

        while True:
            i = 0
            while i < batch_size:
                index = np.random.randint(X.shape[0],size=1)[0]  # pick a random example (with replacement)
                aug = ImageAugment(X[index])
                feat_batch[i] = X[index]
                label_batch[i] = labels[index]

                # Each selected augmentation writes into its own block of
                # batch_size rows following the original images.
                j = 0
                if augment[0] == 1:
                    feat_batch[(j+1)*batch_size + i] = aug.add_speckle_noise()
                    label_batch[(j+1)*batch_size + i] = labels[index]
                    j += 1

                if augment[1] == 1:
                    feat_batch[(j+1)*batch_size + i] = aug.add_gaussian_noise()
                    label_batch[(j+1)*batch_size + i] = labels[index]
                    j += 1

                if augment[2] == 1:
                    feat_batch[(j+1)*batch_size + i] = aug.add_poisson_noise()
                    label_batch[(j+1)*batch_size + i] = labels[index]
                    j += 1

                i += 1

            yield feat_batch,label_batch

def CNN_model(width,height):
    # #create model
    model = Sequential()
    model.add(Conv2D(64, kernel_size=3, activation="relu", input_shape=(width,height,1)))
    model.add(Conv2D(32, kernel_size=3, activation="relu"))
    model.add(Flatten())
    model.add(Dense(labels.shape[1], activation="softmax"))  # note: relies on the global labels defined in __main__

    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model


if __name__ == "__main__":
    input_dir = './mnist'
    output_file = 'dataset.csv'

    filename = []
    label = []
    for root,dirs,files in os.walk(input_dir):
        for file in files:
            full_path = os.path.join(root,file)
            filename.append(full_path)
            label.append(os.path.basename(os.path.dirname(full_path)))

    data = pd.DataFrame(data={'filename': filename, 'label':label})
    data.to_csv(output_file,index=False)

    labels = pd.get_dummies(data.iloc[:,1]).values

    X, X_val, y, y_val = train_test_split(
                                            filename, labels,
                                            test_size=0.2,
                                            random_state=1234,
                                            shuffle=True,
                                            stratify=labels
                                            )

    X_train, X_test, y_train, y_test = train_test_split(
                                                        X, y,
                                                        test_size=0.025,
                                                        random_state=1234,
                                                        shuffle=True,
                                                        stratify=y
                                                        )

    width = 28
    height = 28

    test_data = pd.DataFrame(data={'filename': X_test})


    image_gen_train = Generator(X_train,y_train,width,height)
    image_gen_val = Generator(X_val,y_val,width,height)
    image_gen_test = Generator(X_test,None,width,height)


    batch_size = 900
    print('len data: ', len(X_train))
    print('len test data: ', len(X_test))

    #augment represents [speckle, gaussian, poisson]: augmentation is applied for each element of the list that is 1
    #for example, augment = [1,1,0] corresponds to adding speckle noise and gaussian noise
    augment = [1,1,1]
    model = CNN_model(width,height)

    model.fit_generator(
                        generator=image_gen_train.gen_augment(batch_size=batch_size,augment=augment),
                        steps_per_epoch=np.ceil(len(X_train)/batch_size),
                        epochs=20,
                        verbose=1,
                        validation_data=image_gen_val.gen(),
                        validation_steps=len(X_val)
                        )
    model.save('model_aug_3.h5')
    model = tf.keras.models.load_model('model_aug_3.h5')

    #Try evaluate_generator
    image_gen_test = Generator(X_test,y_test,width,height)
    print(model.evaluate_generator(
                            generator=image_gen_test.gen(),
                            steps=len(X_test)
                            ))

    #Try predict_generator
    image_gen_test = Generator(X_test,None,width,height)
    pred = model.predict_generator(
                            generator=image_gen_test.gen_test(),
                            steps=len(X_test)
                            )
    pred = np.argmax(pred,axis=1)
    # image_gen_test = Generator(X_test,pred,width*3,height*3)
    # image_gen_test.gen_show(pred)
    wrong_pred = []
    for i,ex in enumerate(zip(pred,y_test)):
        if ex[0] != np.argmax(ex[1]):
            wrong_pred.append(i)
    print(wrong_pred)

    # for i in range(len(X_test)):
    #     im = cv2.imread(X_test[i],0)
    #     im = cv2.putText(im, str(pred[i]), (10,15), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2)
    #     print(i)
    #     cv2.imshow('image',im)
    #     cv2.waitKey(0)

GPU – Which is the best Bitcoin vanity address generator?

Can someone please suggest a Bitcoin vanity address generator that can generate at least six characters (ideally case-sensitive)? It would be even better if it could create seven or more characters. I don't mind paying a little, but I would prefer it to be free.

I want to create a vanity address like 1Bitcoin..., but my system is not powerful enough; I probably need one or more GPU cards.

There was a post on this topic a few years ago; I want something more contemporary, if possible.

Thank you very much.

Orthogonal Matrices – Is every generator of $Z(\mathrm{Spin}_n^{\epsilon}(q))$ a square element in the finite spin group $\mathrm{Spin}_n^{\epsilon}(q)$?

On page 80 of 'The Finite Simple Groups' by Robert A. Wilson we have the following two results:

  1. It is easy to find elements of the spin group that map to $-1$, and therefore the spin group is a proper double cover of the orthogonal group. We write $\mathrm{Spin}_n^{\epsilon}(q)$ for this group of shape $2.\Omega_n^{\epsilon}(q)$.

  2. If $n$ is odd, or if $n = 2m$ and $q^m \equiv -\epsilon \pmod 4$, then $\Omega_n^{\epsilon}(q)$ is simple and the spin group has the structure $2.\Omega_n^{\epsilon}(q)$. If $n = 2m$ and $q^m \equiv \epsilon \pmod 4$, then $\Omega_n^{\epsilon}(q)$ has a centre of order 2, and the spin group has the structure $4.\mathrm{P\Omega}_n^{\epsilon}(q)$ if $m$ is odd, and the structure $2^2.\mathrm{P\Omega}_n^{\epsilon}(q)$ (necessarily with $\epsilon = +$) if $m$ is even.

A. I wonder: is every generator of $Z(\mathrm{Spin}_n^{\epsilon}(q))$ a square element in $\mathrm{Spin}_n^{\epsilon}(q)$?

B. When $Z(\Omega_{2m}^{\epsilon}(q)) \neq 1$, is the unique element of order 2 in $Z(\Omega_{2m}^{\epsilon}(q))$ a square element in $\Omega_{2m}^{\epsilon}(q)$?

Linear algebra – showing the existence of a cyclic generator for $T$ on the vector space $V$, provided minimal poly = characteristic poly

I'm trying to understand the proof that $V$ is cyclic, provided that the minimal polynomial is equal to the characteristic polynomial.

I understand that the technique is to factor $c(x) = f_1(x)^{r_1} f_2(x)^{r_2} \cdots f_n(x)^{r_n}$ into irreducible factors and consider the invariant subspace $\ker(f_1(T)^{r_1})$. The minimal polynomial of the restriction $T|_{\ker(f_1(T)^{r_1})}$ equals its characteristic polynomial, namely $f_1(x)^{r_1}$. The dimension of $\ker(f_1(T)^{r_1})$ is then $\deg(f_1) \cdot r_1$. What I don't understand about the answer in the link is the following:

[image: excerpt from the proof in the linked answer]

There are some things I don't understand about this proof. What is $Q$? Is it the minimal polynomial of $T|_{\ker(f_1(T)^{r_1})}$? I also don't understand why it is enough to find a vector $v \in V$ such that $P_i^{m_i-1}(T)(v_i) \neq 0$. How does that imply that $v$ generates the entire subspace $\ker(f_1(T)^{r_1})$?

Randomness – How likely is a pseudorandom number generator to generate a long series of similar numbers?

How likely is it that a pseudorandom number generator will generate a long series of similar numbers? "Similar numbers" can mean the same number repeated, or numbers from a certain range.

For example, if we consider a PRNG that is a simple counter counting from 0 to MAX, the distribution is uniform and there is a guarantee that numbers in a sequence will never repeat. So never repeating numbers does not hurt uniformity, but it probably breaks randomness, doesn't it? To what extent? If so, does this mean that the better the algorithm, the weaker the guarantee that it won't generate similar numbers one after another?
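
For a sense of scale: an ideal uniform generator on $\{0, \dots, N-1\}$ produces independent outputs, so the probability that $k$ consecutive outputs are all equal to the first one is $N^{-(k-1)}$. With $N = 2^{32}$, even an immediate repeat ($k = 2$) has probability $2^{-32} \approx 2.3 \times 10^{-10}$ at any given position, yet over enough draws it must eventually occur, which is exactly what a counter never allows.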

I am particularly interested in answers regarding the Mersenne Twister, as it is the most popular PRNG in programming-language implementations. It would also be great to know how the cryptographically secure PRNGs used by operating systems behave – Yarrow (macOS), Fortuna (FreeBSD) or ChaCha20 (Linux).