tensorflow – How does Keras determine the batch_size automatically in layers (e.g. Conv2D)?

I created a custom Keras layer class that takes in a tensor and returns a tensor (it’s used as an intermediate layer in the middle of my model). While trying to fix shape errors, I have been looking at the Keras docs and other StackOverflow questions such as Keras LSTM input dimension setting and Custom Layer in Keras – Dimension Problem.

My question is: how does Keras automatically handle the batch_size, and at which step does it set it?

For further clarification, I mean how does it populate the None value in (None, 1, 128, 1) shown by model.summary(), for example.
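For reference, here is a minimal sketch (not my actual model, just an illustration) of where that None comes from:

import tensorflow as tf
from tensorflow import keras

# The batch dimension is left unspecified (None) when the model is built;
# it is only bound to a concrete value by the data passed to fit()/predict().
model = keras.Sequential([
    keras.layers.Conv2D(1, kernel_size=3, padding='same', input_shape=(1, 128, 1)),
])
model.summary()  # the output shape is reported as (None, 1, 128, 1)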

P.S. sorry if the formatting or explanation is not clear, it’s my first StackOverflow question. Thank you for the help! 🙂

tensorflow – Keras, tape.gradient returns None values

I am trying to retrieve the gradient values computed by TensorFlow. The loss value that I pass to tape.gradient is based on values taken from the different layers of my network controller (I am using the network as a controller for a NAS algorithm, but that's not important).
Here is the part of my code where I retrieve values (softmax outputs) from the layers, add them up, and pass the sum (loss_value) to tape.gradient:

with tf.GradientTape(persistent=True) as tape:

    # FEEDFORWARD
    inp = controller.input  # input placeholder
    outputs = [layer.output for layer in controller.layers
               if layer.__class__.__name__ in ("Dense", "Lambda")]  # all Dense/Lambda layer outputs
    functor = K.function([inp], outputs)   # evaluation function
    test = np.random.random((1, 1))[np.newaxis, ...]
    layer_outs = functor([test])  # here are all the outputs of those layers

    #########################################################################
    # RETRIEVE SOME VALUES FROM THE RNN/CONTROLLER LAYERS TO COMPUTE A 'LOSS'
    #########################################################################

    log_proba = 0
    loss_value = 0

    # loop over each layer (choice)
    for i in range(0, len(layer_outs), 2):

        # class that was chosen by the RNN
        classe = layer_outs[i + 1][0][0]

        # probability of having chosen class 'classe' given the previous choices
        proba = layer_outs[i][0][0][layer_outs[i + 1][0][0]]
        log_proba = tf.math.log(proba)

    loss_value += log_proba


grads = tape.gradient(loss_value, controller.trainable_weights)

print(grads)

This always prints None values.
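For comparison, here is a minimal sketch (separate from the controller code above) where the loss is built from TensorFlow ops on a watched variable inside the tape, and the gradient comes back non-None:

import tensorflow as tf

w = tf.Variable(2.0)
with tf.GradientTape() as tape:
    # loss computed from tf ops on the watched variable, inside the tape
    loss = tf.math.log(w * 3.0)

print(tape.gradient(loss, w))  # tf.Tensor(0.5, ...), i.e. d/dw log(3w) = 1/w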

You can find the complete code on Google Colab here and modify it.

Thank you

tensorflow – Implementing Tensor SVD in Matlab

I am working on a face recognition algorithm, tensor SVD. It is based on this article: https://www.researchgate.net/publication/267473105_Facial_Recognition_Using_Tensor-Tensor_Decompositions and the extended Yale face database, which can be downloaded from here: http://databookuw.com/.

There are 38 people in the dataset, and the idea is to train the algorithm on the first 36 and leave the other two for testing.

The article gives the algorithm as a figure (image not reproduced here).

My job is to implement this in Matlab, but I am having trouble storing the data in a 3D tensor (this is my first time working with tensors). So far, I have tried this:

load allFaces.mat

allPersons = zeros(n*6,m*6,o*6);
count = 1;
for i=1:6
    for j=1:6
      for k = 1:6
        allPersons(1+(i-1)*n:i*n,1+(j-1)*m:j*m, 1+(k-1)*o:k*o) = faces(:,1+sum(nfaces(1:count-1)));
        count = count + 1;
      end
    end
end
figure(1), axes('position', [0 0 1 1]), axis off
imagesc(allPersons), colormap gray

Can someone please explain/show how to do this? Thank you!

tensorflow2.0 – Tensorflow dataset interleave from_generator throws InvalidArgumentError

I have a generator which I am trying to interleave:

def hello(i):
  for j in tf.range(i):
    yield j

ds = tf.data.Dataset.range(10).interleave(
       lambda ind: tf.data.Dataset.from_generator(lambda: hello(ind), output_types=(tf.int32,)))

for x in ds.take(1):
  print(x)

But I get this error:

TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
  @tf.function
  def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
      added = my_constant * 2
The graph tensor has name: args_0:0


     [[{{node PyFunc}}]]

Tensorflow version: 2.3.2
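For what it's worth, one variant I have seen suggested (a sketch only, not verified against this exact setup) passes the outer element through from_generator's args parameter, so the generator receives a NumPy value instead of a graph tensor:

def hello(i):
  # with args=, i arrives as a NumPy scalar rather than a symbolic tensor
  for j in range(i):
    yield j

ds = tf.data.Dataset.range(10).interleave(
       lambda ind: tf.data.Dataset.from_generator(hello, args=(ind,), output_types=tf.int64))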

TensorFlow 2 Python 3 Fully Connected Neural Network

I would really appreciate constructive feedback and suggestions for this fully-connected neural network I have written with TensorFlow 2 and Python 3. It estimates the period and amplitude of a sine curve given 100 sample y-points on the curve.

I am mainly interested in how this can be optimised for speed, how the styling and readability can be improved, whether I have made any strange programming choices, and whether there is a powerful DL technique I am missing that might be useful. Snippets of this script will be used in a pedagogical document to introduce DNN concepts alongside the TensorFlow methods, so improvements given this context would be very beneficial.

One immediate flaw I can address later is that the model size is currently far too big, so it will not actually generalise outside the range of parameters in the training examples. That's fine; for now I just wanted to get everything working, and will tune hyperparameters and do regularisation later.

Any advice would be deeply appreciated. Additionally, I have yet to implement a normalisation layer: I would like it to normalise inputs (treating them as curves) using statistics from the whole training data, and to apply this automatically to inputs once the model is trained. I am also yet to vectorise the make_curve function. Suggestions for either of these next steps would be fantastic as well.

This is of course a toy problem, and I will adapt the network to a different problem in which I will be interested in efficiency and high-dimensional inputs. I have access to both cluster CPU and GPU cores, as well as my laptop with a GeForce GTX 1050 Ti Max-Q GPU, so I would be interested in optimising this to take advantage of the available parallel computing.

The 3D plot is just for fun and shows how the squared error of a prediction blows up for degenerate cases such as zero period or amplitude. Would I be right to assume that a network which has generalised well would have better error specifically at the boundaries?

With the current settings, this takes 2.5 minutes to run on my 2018 Dell XPS laptop (on its CPU, I think) with the following output:

Average test loss:  8.497130045554286
Average val loss:  7.136056077585638
Time taken:  146.38214015960693

Here is the code:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense, Dropout, BatchNormalization
import tensorflow.keras.backend as kb
from tensorflow import math as tm

import math, time
import numpy as np
from datetime import datetime

#import warnings
#warnings.filterwarnings("ignore")
kb.set_floatx('float64')

start = time.time()

num_sample_pts = 100
train_size     = (5*10**2)**2  # Funky format so it is a square and easily configurable during development
train_sqrt     = int(train_size**0.5)
epoch_nums     = 2**3
minibatch      = 2**6          # Remember to have minibatch << train_size*epochs
callbacks      = False         # TensorBoard logging is much slower than the learning itself 

learning_rate  = 0.00063
num_layers     = 28
reg1_rate      = 0.001
reg2_rate      = 0.001
act_func       = 'elu'
dropout        = 0.2
units          = 37

def make_curve(period, amp, aug=False):
    if False:
        a = 2
        # Make this vectorised so randoms are a vector of randoms.
    return ( amp * np.sin(( np.linspace(-4,4, num_sample_pts) + aug*np.random.rand() ) * (2*np.pi/period))
            + aug*np.random.rand() ).reshape(num_sample_pts)

def data(sample_interval = (-4, 4), amp_interval = (0, 30), aug=False, gridsize=train_sqrt):
    sample = np.linspace(*sample_interval, num_sample_pts)
    mesh   = np.meshgrid(0.05 + 10*np.pi*np.random.rand(gridsize),          # Periods
                         amp_interval[0] + (amp_interval[1] - amp_interval[0]) *
                         np.random.rand(gridsize))                          # Amplitudes
    pairs  = np.array(mesh).T.reshape(-1, 2)
    curves = np.array([make_curve(w, a, aug) for w, a in pairs])            # Change when make_curve is vectorised
    glob_centre, glob_max = np.mean(curves), max(amp_interval)              # Globally centre all curves within pm 1
    curves = (curves - glob_centre) / glob_max                              # - Replace with a normalisation layer
    return (curves, pairs)
# Returns list of 2 arrays (x,y):
# - x is an array of each curve sample list
# - y is an array of the period-amplitude pairs corresponding to curve samples in x

df = data(aug=True)

# Not used. Another possibility could be rmse against a sample from predicted curve
def custom_mean_percentage_loss(y_true, y_pred): #Minimax on the percentage error
    diff = y_pred - y_true
    non_zero = y_true + 10**-8
    res = tm.reduce_mean(tm.abs(tm.divide(diff,non_zero)))
    return res

if callbacks:
    logdir = "logs\\scalars\\" + datetime.now().strftime("%Y%m%d-%H%M-%S")
    tensorboard_callback = [keras.callbacks.TensorBoard(log_dir=logdir, update_freq='epoch')]
else:
    tensorboard_callback = []
kb.set_floatx('float64')


def model_builder():
    initializer = keras.initializers.TruncatedNormal(mean=0., stddev=0.5)
    model = keras.Sequential()
    model.add( Dense( units = num_sample_pts,                           # Number of input nodes equals input dimension
                                   kernel_initializer = initializer,    # Initialize weights
                                   activation = act_func,
                                   kernel_regularizer = keras.regularizers.l1_l2(reg1_rate, reg2_rate),
                                   dtype='float64'))
    BatchNormalization()

    for layer in range(num_layers):
        Dropout(dropout)
        model.add( Dense( units = units,                                # Number of nodes to make number of inputs
                         activation = act_func,
                         kernel_regularizer = keras.regularizers.l1_l2(reg1_rate, reg2_rate),
                         dtype='float64'))
        BatchNormalization()
        
    model.add( Dense(units = 2, activation = 'linear', dtype='float64'))
    # Outputting amplitude-period pair requires 2 nodes in the output layer.
    
    model.compile(
        optimizer = keras.optimizers.Adam(learning_rate = learning_rate),
        loss = 'mse',
        metrics = ['mse'] )  # Measures train & test performance
    return model

model = model_builder()
training_history = model.fit(*df,
                             batch_size = minibatch,                               # Number of data per gradient step
                             epochs = epoch_nums,
                             verbose = 0,
                             validation_split = min(0.2, (train_sqrt**2)/5000),    # Fraction of data used for validation set
                             callbacks=tensorboard_callback)
                             
print("Average test loss: ", np.average(training_history.history('loss')(:10)))
print("Average val loss: ", np.average(training_history.history('val_loss')(:10)))
print('Time taken: ', time.time()-start)
print(' ')
    
import winsound
for i in range(2):
    winsound.Beep(1000, 250)
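
To make the point about vectorising make_curve concrete, here is a rough sketch (untested, and the per-curve phase/offset jitter is only my assumption of what the augmentation should do) of a version that builds all curves at once with NumPy broadcasting:

def make_curves(periods, amps, aug=False):
    # periods, amps: 1-D arrays of shape (N,); returns an array of shape (N, num_sample_pts)
    x = np.linspace(-4, 4, num_sample_pts)
    phase  = aug * np.random.rand(len(periods), 1)   # per-curve horizontal jitter
    offset = aug * np.random.rand(len(periods), 1)   # per-curve vertical jitter
    return amps[:, None] * np.sin((x + phase) * (2 * np.pi / periods[:, None])) + offset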

This is the first neural network I have written. Thank you very much for your thoughts, improvements and contributions.

tensorflow – How to create and train an object counting CNN model using TensorFlow and Keras in Python

Hi, I'm new to deep learning and machine learning, and new to the Python language.

I have the https://github.com/poppinace/mtc/blob/master/README.md dataset.

I need to make and train a (tassel) object counting DCNN model using this data set.

So far, my issue is that I cannot load the .mat files in this dataset and feed them to the model as a first step. I believe the counting model should be regression-based, because the .mat files contain the locations of the tassels.
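For context, the furthest I have got is a sketch like the following (the file name and the annotation key are assumptions on my part; I do not know the actual keys inside the MTC .mat files):

import scipy.io as sio

mat = sio.loadmat('path/to/annotation.mat')  # hypothetical path to one annotation file
print(mat.keys())                            # inspect which keys the file actually contains
# points = mat['annotation']                 # assumed key holding the (x, y) tassel locations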

Are there similar projects I can refer to in order to develop the solution I'm expecting?

What suggestions can you give me?

python 3.x – Can install specific version of Tensorflow from pipenv, but cannot install it from Pipfile

This is a really strange error: I can install tensorflow==2.4.1 with pipenv using the command

pipenv install tensorflow==2.4.1

But when I put it in a Pipfile in the format:

[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
tensorflow = "==2.4.1"

[requires]
python_version = "3.8"

and I run the commands:

pipenv lock --clear
pipenv --rm 
pipenv install

I get:

Pipfile.lock (bf56f7) out of date, updating to (0575f6)...
Locking [dev-packages] dependencies...
Locking [packages] dependencies...
Building requirements...
Resolving dependencies...
✘ Locking Failed!
CRITICAL:pipenv.patched.notpip._internal.index.package_finder:Could not find a version that satisfies the requirement tensorflow==2.4.1 (from -r /var/folders/m6/kqrmfdj12x79gmbbq6f2m1ph0000gn/T/pipenvxgupglu4requirements/pipenv-puyj_a_j-constraints.txt (line 2)) (from versions: none)
[ResolutionFailure]:   File "/usr/local/Cellar/pipenv/2020.11.15/libexec/lib/python3.9/site-packages/pipenv/resolver.py", line 741, in _main
[ResolutionFailure]:       resolve_packages(pre, clear, verbose, system, write, requirements_dir, packages, dev)
[ResolutionFailure]:   File "/usr/local/Cellar/pipenv/2020.11.15/libexec/lib/python3.9/site-packages/pipenv/resolver.py", line 702, in resolve_packages
[ResolutionFailure]:       results, resolver = resolve(
[ResolutionFailure]:   File "/usr/local/Cellar/pipenv/2020.11.15/libexec/lib/python3.9/site-packages/pipenv/resolver.py", line 684, in resolve
[ResolutionFailure]:       return resolve_deps(
[ResolutionFailure]:   File "/usr/local/Cellar/pipenv/2020.11.15/libexec/lib/python3.9/site-packages/pipenv/utils.py", line 1395, in resolve_deps
[ResolutionFailure]:       results, hashes, markers_lookup, resolver, skipped = actually_resolve_deps(
[ResolutionFailure]:   File "/usr/local/Cellar/pipenv/2020.11.15/libexec/lib/python3.9/site-packages/pipenv/utils.py", line 1108, in actually_resolve_deps
[ResolutionFailure]:       resolver.resolve()
[ResolutionFailure]:   File "/usr/local/Cellar/pipenv/2020.11.15/libexec/lib/python3.9/site-packages/pipenv/utils.py", line 833, in resolve
[ResolutionFailure]:       raise ResolutionFailure(message=str(e))
[pipenv.exceptions.ResolutionFailure]: Warning: Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies.
  First try clearing your dependency cache with $ pipenv lock --clear, then try the original command again.
 Alternatively, you can use $ pipenv install --skip-lock to bypass this mechanism, then run $ pipenv graph to inspect the situation.
  Hint: try $ pipenv lock --pre if it is a pre-release dependency.
ERROR: No matching distribution found for tensorflow==2.4.1 (from -r /var/folders/m6/kqrmfdj12x79gmbbq6f2m1ph0000gn/T/pipenvxgupglu4requirements/pipenv-puyj_a_j-constraints.txt (line 2))

I have committed myself to making pipenv work and am not amenable to other solutions for package management.

tensorflow – Confusion matrix code wrong; val_accuracy nearly 99%, but confusion matrix shows bad results

I am getting ~99% accuracy, but the confusion matrix shows bad results. The validation set reports 99% accuracy, as seen in the results below, yet the confusion matrix gets only 410 of the 532 images in the validation set correct.
import os
import numpy as np
import matplotlib.pyplot as plt
import keras
from keras.applications import xception
from keras.layers import *
from keras.models import *
from keras.preprocessing import image

model = xception.Xception(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layers in model.layers:
    layers.trainable=False
    
flat1 = Flatten()(model.layers[-1].output)
class1 = Dense(256, activation='relu')(flat1)
output = Dense(1, activation='sigmoid')(class1)

model = Model(inputs = model.inputs, outputs = output)


model.compile(loss = 'binary_crossentropy', optimizer='adam', metrics = ['binary_accuracy', 'categorical_accuracy'])


train_datagen = image.ImageDataGenerator(
    rescale = 1./255,
    shear_range = 0.2,
    zoom_range = 0.2,
    horizontal_flip = True,
    )

test_datagen = image.ImageDataGenerator(rescale = 1./255)

train_generator = train_datagen.flow_from_directory(
    '/Users/xd_anshul/Desktop/Research/Major/CovidDataset/Train',
    target_size = (224,224),
    batch_size = 10,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    '/Users/xd_anshul/Desktop/Research/Major/CovidDataset/Test',
    target_size = (224,224),
    batch_size = 10,
    class_mode='binary')

#model Fitting

hist = model.fit(
    train_generator,
    epochs=2,
    validation_data=validation_generator)
    
    
    
from sklearn.metrics import classification_report, confusion_matrix

Y_pred = model.predict_generator(validation_generator)  # steps = np.ceil(validation_generator.samples / validation_generator.batch_size), verbose=1, workers=0)
y_pred = [np.argmax(Y_pred[i]) for i in range(validation_generator.samples)]

print('Confusion Matrix')
print(confusion_matrix(validation_generator.classes, y_pred))
print('Classification Report')
target_names = ['Covid', 'Normal']
print(classification_report(validation_generator.classes, y_pred, target_names=target_names))


Found 2541 images belonging to 2 classes.
Found 532 images belonging to 2 classes.
Epoch 1/2
255/255 [==============================] - 403s 2s/step - loss: 1.4669 - binary_accuracy: 0.9517 - categorical_accuracy: 1.0000 - val_loss: 0.0731 - val_binary_accuracy: 0.9944 - val_categorical_accuracy: 1.0000
Epoch 2/2
255/255 [==============================] - 434s 2s/step - loss: 0.2706 - binary_accuracy: 0.9866 - categorical_accuracy: 1.0000 - val_loss: 0.0774 - val_binary_accuracy: 0.9868 - val_categorical_accuracy: 1.0000

print('Confusion Matrix')
Confusion Matrix

print(confusion_matrix(validation_generator.classes, y_pred))
[[410   0]
 [122   0]]
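For reference, my understanding is that with a single sigmoid output, predictions are usually converted to class labels by thresholding the probability rather than taking an argmax over a length-1 array; a sketch of that (assuming labels are 0/1) would be:

y_pred = (Y_pred > 0.5).astype(int).ravel()  # 1 if the sigmoid probability exceeds 0.5, else 0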

drivers – Changing Tensorflow PTXAS location

I am currently trying to custom-train a neural network using TensorFlow 2.4.0 with an RTX 3070, running CUDA 11.0 and cuDNN 8.

I am having this weird issue where I can train the model, but I can't actually get any output, because when I run:

output = model(x)
I am met with the following message, and my Jupyter kernel automatically restarts.

2021-01-08 20:52:53.437668: W tensorflow/stream_executor/gpu/asm_compiler.cc:191] Falling back to the CUDA driver for PTX compilation; ptxas does not support CC 8.6
2021-01-08 20:52:53.437690: W tensorflow/stream_executor/gpu/asm_compiler.cc:194] Used ptxas at /usr/local/cuda-11.0/bin/ptxas
2021-01-08 20:52:53.438427: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] Unimplemented: /usr/local/cuda-11.0/bin/ptxas ptxas too old. Falling back to the driver to compile.
Relying on driver to perform ptx compilation. 
Modify $PATH to customize ptxas location.

As a test, I have installed CUDA 11.1 and 11.2 and adjusted the $PATH variable accordingly, but TensorFlow seems to default to using the ptxas version in the CUDA 11.0 folder.

What can I do to point tensorflow towards the 11.1 and 11.2 version of PTXAS instead of the 11.0 version?
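Since the warning says to modify $PATH to customise the ptxas location, one check I can run from the same Python process (just a sketch; /usr/local/cuda-11.2/bin is my assumed install path) is:

import shutil

# Shows which ptxas is first on this process's PATH; prepending e.g.
# /usr/local/cuda-11.2/bin to PATH before launching Jupyter should change it.
print(shutil.which('ptxas'))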