vpn – Toggle to disable background data

On public WLANs I use my personal VPN server. But before my VPN server can establish the secure tunnel, Gmail and WhatsApp may already have synchronized over the untrusted network.

Is there a way to do the following?

  • Allow background data in certain WiFi networks (e.g. my home / work network)
  • Prevent background data on all other WLANs unless the VPN tunnel is active
  • Prevent background data in the cellular network unless expressly permitted (to reduce data usage).

google sheets – I want to insert already filtered data into a new table

Is it possible to reference data that has been filtered in Google Sheets from one sheet in another?

I use the filter button to sort on a master sheet. I then use this formula in a new sheet:
=IMPORTRANGE("urlofthemastersheet", "sheetname!A1:A")

This formula references and displays everything from column A of the master sheet in the new sheet. After I filter the data in the master sheet, the new sheet still shows the entire column A. How can I set up the new sheet so that it only shows what was filtered in the master sheet?

The purpose of the new sheet is to highlight our logo, contact information, and lines so that it looks more professional when we need to share it with customers.
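IMPORTRANGE always returns the raw source range, regardless of any filter view applied in the master sheet, so one common workaround is to restate the filter criteria in the formula by wrapping IMPORTRANGE in QUERY. This is only a sketch: the column references and the 'Active' condition are placeholders, not the asker's actual criteria.

```
=QUERY(IMPORTRANGE("urlofthemastersheet", "sheetname!A1:C"),
       "select Col1 where Col2 = 'Active'", 0)
```

This works when the filtering can be expressed as a condition; a filter applied by hand with the filter button is not propagated through IMPORTRANGE.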

magento2 – Google Analytics shopping behavior analysis missing data

I created an e-commerce shop and set up the analytics for it.
I placed the tracking code in the Magento 2 backend and started checking whether the data is being sent to GA.
Enhanced Ecommerce is already activated and collects order data.
The problem is the Shopping Behavior Analysis report, where data is missing in some steps.

I have attached a screenshot of the report.

Greetings

python – I have this JSON file and only want to print or access the owner_id or prices data, but I get all of the data in the JSON file

[
    {
        "id": 1,
        "name": "Cargador",
        "code": "12",
        "description": "rerer",
        "unit": "12",
        "visible_to": "3",
        "owner_id": {
            "id": 11321795,
            "name": "SAUL BALDERRAMA PEREZ",
            "email": "1630275@upv.edu.mx",
            "value": 11321795
        },
        "add_time": "2020-01-13 19:18:31",
        "update_time": "2020-01-17 22:52:45",
        "prices": [
            {
                "id": 1,
                "product_id": 1,
                "price": 1000,
                "currency": "MXN"
            }
        ]
    },
    {
        "id": 2,
        "name": "Caja",
        "code": "123",
        "owner_id": {
            "id": 11321795,
            "name": "SAUL BALDERRAMA PEREZ",
            "email": "1630275@upv.edu.mx",
            "value": 11321795
        },
        "add_time": "2020-01-13 20:52:46",
        "update_time": "2020-01-13 20:52:46",
        "prices": [
            {
                "id": 2,
                "product_id": 2,
                "price": 100,
                "currency": "MXN"
            }
        ]
    }
]

I just want to access the prices or owner_id data, but instead I get all the data in the file.

Example:

"prices": [
    {
        "id": 2,
        "product_id": 2,
        "price": 100,
        "currency": "MXN",
        "cost": 0,
        "overhead": null
    }
]

"prices": [
    {
        "id": 1,
        "product_id": 1,
        "price": 1000,
        "currency": "MXN",
        "cost": 0,
        "overhead": null
    }
]

python 3.x – DataFrame: how do I change the orientation or alignment when displaying the data table?


I have a dictionary and present its values in a DataFrame; I want to change the orientation or alignment of the result.

for example:

If the resulting example looks like this:

A b
Snack lunch
Snack snack

And what I want is for it to look like this:

A b
Snack lunch
Snack snack

Thank you for your time.
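As an illustration (not necessarily the asker's data), the orientation of a DataFrame built from a dictionary can be controlled with the `orient` parameter of `pd.DataFrame.from_dict`, or flipped afterwards with `.T`:

```python
import pandas as pd

# Sample dictionary mirroring the column labels in the question.
data = {"A": ["Snack", "Snack"], "b": ["lunch", "snack"]}

# Default: dictionary keys become column headers.
df_columns = pd.DataFrame.from_dict(data, orient="columns")

# Alternative: dictionary keys become the row index instead.
df_rows = pd.DataFrame.from_dict(data, orient="index")

# Transposing flips rows and columns after the fact.
assert df_columns.T.shape == df_rows.shape

print(df_columns)
print(df_rows)
```

For purely cosmetic alignment of the printed table (rather than its orientation), `df.to_string(justify=...)` and the `pd.set_option("display....")` options are the usual knobs.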

MySQL database does not load table data

My CentOS 7 server was shut down unexpectedly. For some reason one of the MySQL databases now shows all tables as empty, but when I look at the files in /var/lib/mysql/<database> I can see that the files (.frm, .ibd and the others) are still there, and when I open them with a text editor I can read some of the data in them.

What's going on here? Why doesn't one of my databases read or load the files?

MySQL replication failed with error 1236 "position > file size" while the relay bin file is not growing

My slave machine was reset, and now SHOW SLAVE STATUS\G reports the error:

Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Client requested master to start replication from position > file size'

The slave settings are displayed as:

Master_Log_File: bin.003767
Read_Master_Log_Pos: 751854814
Relay_Log_File: mysql-relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: bin.003767
Slave_IO_Running: No
Slave_SQL_Running: Yes

However, the only relay bin file on the slave is mysql-relay-bin.000001, at 1 KB, while the master's bin.003767 is over 700 megabytes. No relay files are being updated or copied to the slave.

Is there any way to fix this without a complete data resynchronization from scratch? That would take days.

Does anyone know what this is about?

Much appreciated!

numpy – Python machine learning on the MNIST data set

I am a beginner at machine learning and have struggled with this for a few days; I cannot understand why my neural network has problems classifying the MNIST data set. I checked my math and used gradient checking, but I can't seem to find the problem.

import pickle as pc
import numpy as np
import matplotlib.pyplot as mb
class MNIST:
#fix: gradient checking maybe not working, maybe backprop not working, symetrical updating, check if copying correctly


    def processImg(self):

        '''
        #slower than pickle file
        inTrset = np.loadtxt("mnist_train.csv", delimiter = ",");
        inTestSet = np.loadtxt("mnist_test.csv", delimiter = ",");
        fullX = np.asfarray(inTrset[:, 1:])
        fullY = np.asfarray(inTrset[:, :1])
        '''

        with open("binaryMNIST.pkl", "br") as fh:
            data = pc.load(fh)


        img_dim = 28;
        features = 784;
        m = 60000
        test_m = 10000;

        fullX = (np.asfarray(data[0]))
        bias = np.ones((60000, 1))
        fullX = np.hstack((bias, fullX))

        fullY = np.asfarray(data[1])

        testX = (np.asfarray(data[2]))
        bias2 = np.ones((10000, 1))
        testX = np.hstack((bias2, testX))

        testY = np.asfarray(data[3])

        fullY = fullY.astype(int)
        testY = testY.astype(int)

        iden = np.identity(10, dtype = np.int)
        oneHot = np.zeros((m, 10), dtype = np.int)
        oneHot_T = np.zeros((test_m, 10), dtype = np.int)

        #creates m number of one, zeros vector indicating the class
        for i in range(test_m):
            oneHot_T[i] = iden[testY[i], :]

        for i in range(m):
            oneHot[i] = iden[fullY[i], :]

        trainX = fullX[:40000, :]
        trainY = oneHot[:40000, :]

        valX = np.asfarray(fullX[40000:, :])
        valY = np.asfarray(oneHot[40000:, :])


        self.trainX = trainX
        self.trainY = trainY
        self.valX = valX
        self.valY = valY
        self.testX = testX
        self.oneHot_T = oneHot_T


    def setThetas(self):
        #784 features
        #5 nodes per layer (not including bias)
        #(nodes in previous layer, nodes in next layer)
        #theta1(785, 5) theta2(6, 5) theta3(6, 10)

        #after finishing, do big 3d matrix of theta and vectorize backprop

        params = np.random.rand(4015)
        self.params = params



    def fbProp(self, theta1, theta2, theta3):

        #after calculating a w/sig(), add bias
        m = np.shape(self.trainY)[0]
        z1 = np.array(np.dot(self.trainX, theta1), dtype = np.float64)

        a1 = self.sig(z1)
        bias = np.ones((40000, 1))
        a1 = np.hstack((bias, a1))
        z2 = np.dot(a1, theta2)
        a2 = self.sig(z2)
        a2 = np.hstack((bias, a2))
        z3 = np.dot(a2, theta3)
        hyp = self.sig(z3)

        g3 = 0
        g2 = 0
        g1 = 0

        for i in range(m):
            dOut = hyp[i, :] - self.trainY[i, :]
            d2 = np.dot(np.transpose(dOut), np.transpose(theta3))
            d2 = d2[1:] * self.sigG(z2[i, :])
            d1 = np.dot(d2, np.transpose(theta2))
            d1 = d1[1:] * self.sigG(z1[i, :])

            g3 = g3 + np.dot(np.transpose(np.array(a2[i, :], ndmin = 2)), np.array(dOut, ndmin = 2))
            g2 = g2 + np.dot(np.transpose(np.array(a1[i, :], ndmin = 2)), np.array(d1, ndmin = 2))
            g1 = g1 + np.dot(np.transpose(np.array(self.trainX[i, :], ndmin = 2)), np.array(d1, ndmin = 2))

        self.theta1G = (1/m) * g1
        self.theta2G = (1/m) * g2
        self.theta3G = (1/m) * g3


    def gradDescent(self):

        params = np.array(self.params)
        theta1 = params[0:3925]
        theta1 = np.resize(theta1, (785, 5))
        theta2 = params[3925:3955]
        theta2 = np.resize(theta2, (6, 5))
        theta3 = params[3955:4015]
        theta3 = np.resize(theta3, (6, 10))

        for i in range(self.steps):
            J = self.error(theta1, theta2, theta3, self.trainX, self.trainY)
            print("Iteration: ", i+1, " | error: ", J)
            self.fbProp(theta1, theta2, theta3)
            theta1 = theta1 - (self.alpha * self.theta1G)
            theta2 = theta2 - (self.alpha * self.theta2G)
            theta3 = theta3 - (self.alpha * self.theta3G)



        #On test set
        correct = self.test(theta1, theta2, theta3)
        print(correct/100, "%")


    def error(self, params, X, y):
        theta1 = params[0:3925]
        theta1 = np.resize(theta1, (785, 5))
        theta2 = params[3925:3955]
        theta2 = np.resize(theta2, (6, 5))
        theta3 = params[3955:4015]
        theta3 = np.resize(theta3, (6, 10))


        bias = np.ones((np.shape(y)[0], 1))
        a1 = self.sig(np.dot(X, theta1))
        a1 = np.hstack((bias, a1))
        a2 = self.sig(np.dot(a1, theta2))
        a2 = np.hstack((bias, a2))
        hyp = self.sig(np.dot(a2, theta3))

        #10 classes
        pt1 = ((-np.log(hyp) * y) - (np.log(1-hyp) * (1-y))).sum()
        J = 1/(40000) * pt1.sum()

        return J


    def error(self, theta1, theta2, theta3, X, y):
        bias = np.ones((np.shape(y)[0], 1))
        a1 = self.sig(np.dot(X, theta1))
        a1 = np.hstack((bias, a1))
        a2 = self.sig(np.dot(a1, theta2))
        a2 = np.hstack((bias, a2))
        hyp = self.sig(np.dot(a2, theta3))
        print(hyp[0, :])

        #10 classes
        pt1 = ((np.log(hyp) * y) + (np.log(1-hyp) * (1-y))).sum()
        J = - (1/(40000)) * pt1.sum()

        return J



    #def validate(self):

    def test(self, theta1, theta2, theta3):
        X = self.testX
        y = self.oneHot_T
        bias = np.ones((np.shape(y)[0], 1))
        a1 = self.sig(np.dot(X, (theta1)))
        a1 = np.hstack((bias, a1))
        a2 = self.sig(np.dot(a1, (theta2)))
        a2 = np.hstack((bias, a2))
        hyp = self.sig(np.dot(a2, (theta3)))

        correct = 0
        ans = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

        for i in range(np.shape(y)[0]):
            #fix backprop and forward prop then this
            guess = np.argmax(hyp[i, :])
            match = np.argmax(y[i, :])
            print("guess: ", guess, "| ans: ", match)
            if guess == match:
                correct = correct + 1;

        return correct



    def gradientCheck(self):
        params = np.array(self.params)
        theta1 = params[0:3925]
        theta1 = np.resize(theta1, (785, 5))
        theta2 = params[3925:3955]
        theta2 = np.resize(theta2, (6, 5))
        theta3 = params[3955:4015]
        theta3 = np.resize(theta3, (6, 10))
        self.fbProp(theta1, theta2, theta3)

        grad = self.theta1G.ravel()
        grad = np.append(grad, self.theta2G.ravel())
        grad = np.append(grad, self.theta3G.ravel())


        print("got grads")
        epsilon = 0.00001

        params2 = np.array(self.params)
        check = np.zeros(np.shape(params))
        for i in range(3965, np.size(params)):
            temp = params[i]
            params[i] = params[i] + epsilon
            params2[i] = params2[i] - epsilon
            check[i] = (self.error(params, self.trainX, self.trainY) - self.error(params2, self.trainX, self.trainY)) / (2 * epsilon)
            params[i] = temp
            params2[i] = temp
            print(grad[i], " ", check[i])



    def sigG(self, z):
        return (self.sig(z) * (1-self.sig(z)))


    def sig(self, z):
        return 1/(1+(np.exp(-z)))


    def printPictures(self):
        #number of training examples to iterate over

        for i in range(3):
            img = self.trainX[i, 1:].reshape((28,28))
            mb.title('Digit = {}'.format(np.argmax(self.trainY[i,:])))
            mb.imshow(img, cmap = 'gray_r')
            mb.show()



    def __init__(self, steps, alpha, nodes, h_layers):
        self.steps = steps
        self.alpha = alpha
        self.nodes = nodes
        self.h_layers = h_layers



obj = MNIST(100, 0.1, 5, 1);
obj.processImg();
obj.setThetas();
obj.gradDescent()

#obj.gradientCheck()
#obj.printPictures()

javascript – table data is updated with every key press

I receive products and display them in a table by scanning the product code. But when I scan the second item, the existing record is removed from the table. I used jQuery and Ajax to fetch the data. I want to keep the existing rows in the table as I add new ones. Please suggest a way to fix this.

This is the table where I want to add products

Code to fetch and display the data:

 //product_list is the tbody id.
 //Insert Product in Item List by scanning product code
    var timer = null;
    $('#pro_code').keydown(function(){
        // timer lives outside the handler so clearTimeout can actually cancel it
        clearTimeout(timer);
        timer = setTimeout(getItem, 1000)
    });

    function getItem() {
        var pro_code = $('#pro_code').val();
        //alert(pro_code);
        $.ajax({
            url: DOMAIN+"/controller/SalesController.php",
            method: "POST",
            data: {product_code:pro_code},
            success: function (data) {
                //alert(data);
                // note: html() replaces the entire tbody contents on every call
                $('#product_list').html(data);
                //$("#pro_code").val("");
            }
        });
    }