Machine Learning – Use CNN with two inputs for prediction

I have a record like this:

q1    q2    label
ccc   ddd   1
zzz   yyy   0
...   ...   ...

where q1 and q2 are sentences and the label indicates whether they are duplicates or not.
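
For concreteness, a minimal sketch of loading such a record with pandas (the file name is just a placeholder; the frame only needs the q1, q2 and label columns):

import pandas as pd

# hypothetical file name, for illustration only
data = pd.read_csv("question_pairs.csv")
print(data[["q1", "q2", "label"]].head())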

Now I am confused because I have two inputs, q1 and q2, and both need to be combined for the prediction. I have created a CNN model for each of the two columns and I want to concatenate them.

my cnn function:

from tensorflow.keras.layers import (Input, Embedding, Conv1D, MaxPooling1D,
                                     Flatten, Concatenate, Dropout, Dense)
from tensorflow.keras.models import Model
from tensorflow.keras.regularizers import l2

def cnn_model(FILTER_SIZES,                # filter sizes as a list
              MAX_NB_WORDS,                # total number of words
              MAX_DOC_LEN,                 # max words in a document
              EMBEDDING_DIM=200,           # word vector dimension
              NUM_FILTERS=64,              # number of filters for all sizes
              DROP_OUT=0.5,                # dropout rate
              NUM_OUTPUT_UNITS=1,          # number of output units
              NUM_DENSE_UNITS=100,         # number of units in the dense layer
              PRETRAINED_WORD_VECTOR=None, # pre-trained word vectors, if any
              LAM=0.0):                    # regularization coefficient

    main_input = Input(shape=(MAX_DOC_LEN,),
                       dtype='int32', name='main_input')

    if PRETRAINED_WORD_VECTOR is not None:
        embed_1 = Embedding(input_dim=MAX_NB_WORDS + 1,
                            output_dim=EMBEDDING_DIM,
                            input_length=MAX_DOC_LEN,
                            # use the pre-trained word vectors
                            weights=[PRETRAINED_WORD_VECTOR],
                            # the word vectors can be fine-tuned further;
                            # set trainable=False to keep them static
                            trainable=True,
                            name='embedding')(main_input)
    else:
        embed_1 = Embedding(input_dim=MAX_NB_WORDS + 1,
                            output_dim=EMBEDDING_DIM,
                            input_length=MAX_DOC_LEN,
                            name='embedding')(main_input)

    # add a convolution-pooling-flatten block per filter size
    conv_blocks = []
    for f in FILTER_SIZES:
        conv = Conv1D(filters=NUM_FILTERS, kernel_size=f,
                      activation='relu', name='conv_' + str(f))(embed_1)
        conv = MaxPooling1D(MAX_DOC_LEN - f + 1, name='max_' + str(f))(conv)
        conv = Flatten(name='flat_' + str(f))(conv)
        conv_blocks.append(conv)

    if len(conv_blocks) > 1:
        z = Concatenate(name='concatenate')(conv_blocks)
    else:
        z = conv_blocks[0]

    drop = Dropout(rate=DROP_OUT, name='dropout')(z)
    dense = Dense(NUM_DENSE_UNITS, activation='relu',
                  kernel_regularizer=l2(LAM), name='dense')(drop)

    model = Model(inputs=main_input, outputs=dense)

    model.compile(loss="binary_crossentropy",
                  optimizer="adam", metrics=["accuracy"])

    return model

First I tokenize and pad the sequences for the two columns:

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

tokenizer = Tokenizer(num_words=MAX_NB_WORDS)
tokenizer.fit_on_texts(data["q1"])

# set the dense units
dense_units_num = num_filters * len(FILTER_SIZES)

BATCH_SIZE = 32
NUM_EPOCHES = 100

sequences_1 = tokenizer.texts_to_sequences(data["q1"])
# print(sequences_1)

sequences_2 = tokenizer.texts_to_sequences(data["q2"])

sequences = sequences_1 + sequences_2

output_units_num = 1


# pad all sequences to the same length
# if a sentence is longer than maxlen, truncate it on the right
# if a sentence is shorter than maxlen, pad it on the right
padded_sequences = pad_sequences(sequences,
                                 maxlen=MAX_DOC_LEN,
                                 padding='post',
                                 truncating='post')
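
Since the network will ultimately see the two questions as two separate inputs, one variant (my assumption, not part of the original code) is to pad each column on its own instead of concatenating the two sequence lists; the names padded_q1 and padded_q2 are introduced here purely for illustration:

padded_q1 = pad_sequences(sequences_1, maxlen=MAX_DOC_LEN,
                          padding='post', truncating='post')
padded_q2 = pad_sequences(sequences_2, maxlen=MAX_DOC_LEN,
                          padding='post', truncating='post')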

Now I've made two models like this for both columns:

left_cnn = cnn_model(FILTER_SIZES, MAX_NB_WORDS,
                     MAX_DOC_LEN,
                     NUM_FILTERS=num_filters,
                     NUM_OUTPUT_UNITS=output_units_num,
                     NUM_DENSE_UNITS=dense_units_num,
                     PRETRAINED_WORD_VECTOR=None)

right_cnn = cnn_model(FILTER_SIZES, MAX_NB_WORDS,
                      MAX_DOC_LEN,
                      NUM_FILTERS=num_filters,
                      NUM_OUTPUT_UNITS=output_units_num,
                      NUM_DENSE_UNITS=dense_units_num,
                      PRETRAINED_WORD_VECTOR=None)

Now I do not know how to link these two models, or what to do next.
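
One possible way to wire the two branches together (a sketch under my own assumptions, not the only option): since a compiled Keras model can be called like a layer, a combined model can take two inputs, concatenate the two feature vectors, and put a single sigmoid unit on top for the duplicate label. padded_q1 and padded_q2 refer to the per-column padded sequences sketched above, and data is assumed to be the DataFrame holding the label column:

from tensorflow.keras.layers import Input, Concatenate, Dense
from tensorflow.keras.models import Model

# two separate inputs, one per question column
left_input = Input(shape=(MAX_DOC_LEN,), dtype='int32', name='q1_input')
right_input = Input(shape=(MAX_DOC_LEN,), dtype='int32', name='q2_input')

# each CNN branch turns a question into a dense feature vector
left_features = left_cnn(left_input)
right_features = right_cnn(right_input)

# concatenate the two feature vectors and predict the duplicate label
merged = Concatenate(name='merged_features')([left_features, right_features])
pred = Dense(1, activation='sigmoid', name='prediction')(merged)

combined = Model(inputs=[left_input, right_input], outputs=pred)
combined.compile(loss='binary_crossentropy', optimizer='adam',
                 metrics=['accuracy'])

# training then takes the two padded columns as a list of inputs
combined.fit([padded_q1, padded_q2], data['label'].values,
             batch_size=BATCH_SIZE, epochs=NUM_EPOCHES)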

Those of you on Yahoo, tell us: do you really believe that CNN isn't feeding us fake news, with actors playing the part of journalists?

This child of a newspaper publisher grew up with journalists. Yes, CNN is a biased source, but they rely on facts to express that bias. Fox, on the other hand, just makes things up. Therefore, I reject your reproach as yet another projection from the extreme, neurotic right.

Of all the news networks, only Fox has argued in court that the First Amendment protects their right to lie. The others have some standards, even in this age.


Machine Learning – visualize and understand CNN with Mathematica

The famous 2013 paper by Zeiler and Fergus, "Visualizing and Understanding Convolutional Networks", suggests a method for understanding the behavior of a CNN by using one (or more) deconv networks in conjunction with the original CNN.

The deconv networks consist of a series of unpooling and deconvolutional layers that reconstruct the features in the input image responsible for activating a particular feature map in a given layer.

To undo the max-pooling operation, however, they use "max location switches", which, if I understand correctly, means the pooling layers need an argmax operation so that the positions the pooled maxima came from can be recovered.

Unfortunately, PoolingLayer does not accept argmax as a value for its Function option.

Is it possible to circumvent this restriction and implement the max location switches, or is there another technique applicable in Mathematica to produce a visualization similar to the one proposed by Zeiler and Fergus, in order to understand which features activate a given layer?
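
For what it's worth, here is a small NumPy sketch (not Mathematica, and not the built-in PoolingLayer) of what the max location switches have to record during pooling and how the unpooling step of a deconvnet would use them; the function names are mine:

import numpy as np

def max_pool_with_switches(x, size=2):
    # max-pool a 2D array and record the flat index of each pooled maximum
    h, w = x.shape
    pooled = np.zeros((h // size, w // size))
    switches = np.zeros((h // size, w // size), dtype=int)
    for i in range(0, h, size):
        for j in range(0, w, size):
            window = x[i:i + size, j:j + size]
            k = np.argmax(window)                       # position within the window
            pooled[i // size, j // size] = window.flat[k]
            switches[i // size, j // size] = (i + k // size) * w + (j + k % size)
    return pooled, switches

def unpool(pooled, switches, shape):
    # place each pooled value back at the position its maximum came from
    out = np.zeros(shape)
    out.flat[switches.ravel()] = pooled.ravel()
    return out

# usage: pool a 4x4 map, then reconstruct a sparse version of it
pooled, switches = max_pool_with_switches(np.random.rand(4, 4))
reconstructed = unpool(pooled, switches, (4, 4))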

[ Politics ] Open Question: Now that there is no Trump / Russian Collusion, what about the credibility of CNN, MSNBC, Adam Schiff, Eric Swalwell and other fools?


How did CNN know that there would be a raid on Roger Stone's house? They even filmed the FBI raid !!?

Normally, the grand jury will meet on Friday. This week they met on Thursday. This indicated to CNN that there would likely be unusual activity in connection with an indictment.

So they staked out the houses of the people most likely to be on Mueller's list, and Roger Stone was one of them.

It's called "journalism." It takes thought, time and money. Maybe Fox News should practice journalism if they want to get those scoops.


python – LeNet CNN using TensorFlow

Can someone help me figure out where I'm going wrong? I can't work it out.
My validation accuracy is 0.000 even after 100 epochs, and the cost already looks wrong at first sight.

Notebook COLAB LINK

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline


from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", reshape=False)

X_train, Y_train = mnist.train.images, mnist.train.labels
X_validate, Y_validate = mnist.validation.images, mnist.validation.labels
X_test, Y_test = mnist.test.images, mnist.test.labels

print("X_train shape:", X_train.shape)
print("Y_train shape:", Y_train.shape)
print("X_validate shape:", X_validate.shape)
print("Y_validate shape:", Y_validate.shape)
print("X_test shape:", X_test.shape)
print("Y_test shape:", Y_test.shape)


# pad the 28x28 MNIST images to the 32x32 input size that LeNet expects
X_train = np.pad(X_train, ((0,0), (2,2), (2,2), (0,0)), 'constant', constant_values=0)
X_validate = np.pad(X_validate, ((0,0), (2,2), (2,2), (0,0)), 'constant', constant_values=0)
X_test = np.pad(X_test, ((0,0), (2,2), (2,2), (0,0)), 'constant', constant_values=0)

print("X_train shape:", X_train.shape)
print("Y_train shape:", Y_train.shape)
print("X_validate shape:", X_validate.shape)
print("Y_validate shape:", Y_validate.shape)
print("X_test shape:", X_test.shape)
print("Y_test shape:", Y_test.shape)



X = tf.placeholder(tf.float32, (None, 32, 32, 1))
Y = tf.placeholder(tf.int32, (None))

output_y = tf.one_hot(Y, 1)


W1 = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean=0, stddev=.1))
W2 = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean=0, stddev=.1))

B1 = tf.Variable(tf.zeros(6))
B2 = tf.Variable(tf.zeros(16))
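
# quick sanity check of the feature-map sizes these shapes imply
# (VALID padding, 2x2 pooling): 32 -> conv1 28 -> pool1 14 -> conv2 10
# -> pool2 5, flattened to 5 * 5 * 16 = 400 units before the dense layers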


def feed_forward(X):
    # conv layer 1: 5x5 filters, VALID padding
    Z1 = tf.nn.bias_add(tf.nn.conv2d(X, W1, strides=[1,1,1,1], padding='VALID'), B1)
    A1 = tf.nn.relu(Z1)
    print(Z1.shape)
    print(A1.shape)

    # average pooling 2x2
    P1 = tf.nn.avg_pool(A1, ksize=[1,2,2,1], strides=[1,2,2,1], padding='VALID')
    print(P1.shape)

    # conv layer 2
    Z2 = tf.nn.bias_add(tf.nn.conv2d(P1, W2, strides=[1,1,1,1], padding='VALID'), B2)
    A2 = tf.nn.relu(Z2)
    print(Z2.shape)
    print(A2.shape)

    P2 = tf.nn.avg_pool(A2, ksize=[1,2,2,1], strides=[1,2,2,1], padding='VALID')
    print(P2.shape)

    # flatten and fully connected layers
    F = tf.contrib.layers.flatten(P2)
    print(F.shape)

    FC1 = tf.contrib.layers.fully_connected(F, 120, activation_fn=tf.nn.relu)
    FC2 = tf.contrib.layers.fully_connected(FC1, 84, activation_fn=tf.nn.relu)
    out = tf.contrib.layers.fully_connected(FC2, 10, activation_fn=tf.nn.relu)
    print(FC1.shape)
    print(FC2.shape)
    print(out.shape)

    return out





model_op = feed_forward(X)

learning_rate = 0.001
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=model_op, labels=output_y)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_operation = optimizer.minimize(loss_operation)



BATCH_SIZE = 128
predicted_op = tf.argmax(model_op, 1)
real_op = tf.argmax(output_y, 1)

correct_op = tf.equal(predicted_op, real_op)
accuracy_op = tf.reduce_mean(tf.cast(correct_op, tf.float32))



def eval(x_data, y_data):

    num_example = len(x_data)
    total_accuracy = 0

    sess = tf.get_default_session()
    for index in range(0, num_example, BATCH_SIZE):

        batch_x, batch_y = x_data[index:index+BATCH_SIZE], y_data[index:index+BATCH_SIZE]
        accuracy = sess.run(accuracy_op, feed_dict={X: batch_x, Y: batch_y})

        total_accuracy += (accuracy * len(batch_x))

    return total_accuracy / num_example



from sklearn.utils import shuffle

print("Training...")
saver = tf.train.Saver()

init = tf.global_variables_initializer()

with tf.Session() as sess:

    sess.run(init)

    EPOCHS = 10
    NUM_SAMPLE = len(X_train)
    costs = []

    for epoch in range(EPOCHS):
        X_train, Y_train = shuffle(X_train, Y_train)

        epoch_cost = 0
        for index in range(0, NUM_SAMPLE, BATCH_SIZE):

            batch_x, batch_y = X_train[index:index+BATCH_SIZE], Y_train[index:index+BATCH_SIZE]
            temp_cost = sess.run(loss_operation, feed_dict={X: batch_x, Y: batch_y})
            epoch_cost += temp_cost

        costs.append(epoch_cost)

        print("Cost in epoch %i: %f" % (epoch, costs[-1]))
        if epoch % 5 == 0:
            validation_acc = eval(X_validate, Y_validate)
            print("Validation accuracy in epoch %i: %f" % (epoch, validation_acc))
            print()


    saver.save(sess, '/lenet')
    print("model saved !!!")

thank you in advance
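
Given the symptoms described above, here is a minimal sketch (my guess at the intent, not code from the post) of two changes that would let the network actually learn, assuming the usual 10 MNIST classes:

# the one-hot depth has to match the number of classes (10 digits), not 1
output_y = tf.one_hot(Y, 10)

# and inside the batch loop, the optimizer has to run together with the loss
# so that the weights are actually updated
_, temp_cost = sess.run([training_operation, loss_operation],
                        feed_dict={X: batch_x, Y: batch_y})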

Is it right to call the President of the United States a liar, as was suggested to Kellyanne Conway on CNN?

It depends on whether it is true that Trump tells lies. When Trump was first elected, NPR went into detail on what exactly a "lie" is. Making a false statement may or may not be a lie. A lie is when a person purposely makes a false statement to deceive someone and obtain a secondary gain. That is a lie. But if someone repeats false information without knowing it is wrong, and without doing so for secondary gain, that is not a lie; it is just misinformation.
For example, when Obama said that he had campaigned in all 57 states, that was a false statement, but it was not a lie, because Obama was simply mistaken about how many states the US actually has.
Cheers.
