

computer networks – The size of frames generated in transferring files

An application sends a $7 \times 10^{20}$-byte file.

TCP layer breaks the file into a number of TCP segments. Each segment carries 1400 bytes of file data and 24 bytes of headers.

Then, a segment is encapsulated in an IP packet that has a header of 30 bytes.

Lastly, each packet is encapsulated inside a frame whose payload has a maximum size of 2312 bytes, plus 44 bytes of header and trailer.

I am trying to calculate the number of frames generated, but I am not sure whether the size of each frame being transmitted should be 1400 + 24 + 30 + 44 or 2312 + 44. Any help is appreciated. Thank you.
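Not part of the original question, but here is one way to reason about it: each TCP segment becomes one IP packet of 1400 + 24 + 30 = 1454 bytes, which fits within the 2312-byte maximum frame payload, so each frame carries exactly one packet and its size on the wire is 1400 + 24 + 30 + 44 = 1498 bytes; the 2312 figure is only an upper bound on the payload. A quick sanity check of that arithmetic (a sketch, with variable names of my own choosing):

```python
# Sketch of the frame-count arithmetic under the "one packet per frame" reading.
FILE_BYTES        = 7 * 10**20   # size of the file handed to TCP
TCP_DATA          = 1400         # file bytes carried per TCP segment
TCP_HEADER        = 24           # TCP header bytes per segment
IP_HEADER         = 30           # IP header bytes per packet
FRAME_OVERHEAD    = 44           # frame header + trailer bytes
FRAME_PAYLOAD_MAX = 2312         # maximum frame payload in bytes

packet_size = TCP_DATA + TCP_HEADER + IP_HEADER       # 1454 bytes
assert packet_size <= FRAME_PAYLOAD_MAX               # so one packet fits in one frame

frame_size = packet_size + FRAME_OVERHEAD             # 1498 bytes per frame on the wire
num_frames = (FILE_BYTES + TCP_DATA - 1) // TCP_DATA  # ceiling division: one frame per segment

print(frame_size)   # 1498
print(num_frames)   # 500000000000000000, i.e. 5 * 10**17
```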

computer networks – Multiplexing problem

Suppose a shared medium M offers to hosts $A_1, A_2, \ldots, A_N$ in round-robin fashion an opportunity to transmit one packet; hosts that have nothing to send immediately relinquish M. How does this differ from statistical multiplexing? How does the network utilization of this scheme compare with statistical multiplexing?

I don’t understand what the differences between these two schemes are, since both send on a packet-by-packet basis.
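Not from the original post, but a toy simulation may make the comparison concrete. In the round-robin scheme the medium is offered to hosts in a fixed circular order and an idle host gives up its turn at once, while under statistical multiplexing the medium simply serves whatever packets are queued, with no per-host turns. If giving up a turn costs no time, both schemes keep M busy whenever any host has something to send, so utilization is essentially the same; the difference is the order of service. A rough sketch under those assumptions (the Bernoulli traffic model and all parameters are mine):

```python
import random

N, T, P_ARRIVAL = 8, 100_000, 0.10   # hosts, time slots, per-slot arrival probability

def simulate(round_robin: bool, seed: int = 0) -> float:
    """Return the fraction of slots in which the medium M carried a packet."""
    rng = random.Random(seed)        # same seed -> identical arrival pattern for both schemes
    queues = [0] * N                 # packets waiting at each host
    pointer = 0                      # next host to be offered M (round-robin only)
    busy = 0
    for _ in range(T):
        for i in range(N):           # new arrivals in this slot
            if rng.random() < P_ARRIVAL:
                queues[i] += 1
        if round_robin:
            # Offer M to hosts in circular order; idle hosts relinquish at zero cost.
            for k in range(N):
                h = (pointer + k) % N
                if queues[h]:
                    queues[h] -= 1
                    busy += 1
                    pointer = (h + 1) % N
                    break
        else:
            # Statistical multiplexing: transmit any queued packet (here, longest queue first).
            if any(queues):
                h = max(range(N), key=lambda i: queues[i])
                queues[h] -= 1
                busy += 1
    return busy / T

print("round-robin utilization:     ", simulate(True))
print("statistical mux utilization: ", simulate(False))
# With identical arrivals and cost-free relinquishing, both print the same number:
# M is busy exactly when some host has a packet queued.
```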

neural networks – Calculating error in the Input to a convolutional layer

Say I have an incredibly simple input to a convolutional layer: (In1), a 1×1 input matrix.

I have two filters applied to this input: (F1) and (F2).

They give the results: (F1 * In1) and (F2 * In1)

Then let (F1 * In1) = (Out1) and (F2 * In1) = (Out2)

I then calculate ∂C/∂In1 for each filter: (∂C/∂Out1 * ∂Out1/∂In1) and (∂C/∂Out2 * ∂Out2/∂In1)

This leaves me with the two matrices above for ∂C/∂In1. Do I sum or average these two matrices to get the final value for ∂C/∂In1?

(Also, I understand that this is an incredibly simple example in terms of convolution; I’m just more concerned with the summing or averaging. Thanks for any help!)
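Not part of the question, but for what it’s worth: since In1 feeds both Out1 and Out2, the multivariable chain rule sums the two contributions, ∂C/∂In1 = ∂C/∂Out1 · ∂Out1/∂In1 + ∂C/∂Out2 · ∂Out2/∂In1, rather than averaging them. A tiny sanity check with automatic differentiation (a sketch; the numbers are arbitrary and scalars stand in for the 1×1 matrices):

```python
import torch

x = torch.tensor(2.0, requires_grad=True)   # In1, standing in for the 1x1 input
f1, f2 = 3.0, 5.0                            # the two filters F1 and F2

out1 = f1 * x                                # Out1 = F1 * In1
out2 = f2 * x                                # Out2 = F2 * In1
cost = out1 ** 2 + out2 ** 2                 # some cost C built from both outputs
cost.backward()

# Gradients along each path, written out by hand:
path1 = 2 * out1.item() * f1                 # dC/dOut1 * dOut1/dIn1
path2 = 2 * out2.item() * f2                 # dC/dOut2 * dOut2/dIn1

print(x.grad.item())        # 136.0
print(path1 + path2)        # 136.0 -> autograd sums the two contributions
print((path1 + path2) / 2)  # 68.0  -> averaging would give the wrong gradient
```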

neural networks – Looking for references for real-world scenarios of data-poisoning attack on labels while doing supervised learning

Consider the following mathematical model of training a neural net: suppose $f_{w} : \mathbb{R}^n \rightarrow \mathbb{R}$ is a neural net whose weights are $w$. Suppose that during training the adversary samples $x \sim \mathcal{D}$ from some distribution $\mathcal{D}$ on $\mathbb{R}^n$ and sends in training data of the form $(x, \theta(x) + f_{w^*}(x))$, i.e. the adversary corrupts the true labels generated by $f_{w^*}$ (for some fixed $w^*$) by adding a real number to them.

Now suppose we want an algorithm which uses such corrupted training data and tries to get as close to $w^*$ as possible, i.e. despite the data being corrupted in the above way, the algorithm tries to minimize (over $w$) the “original risk” $\mathbb{E}_{x \sim \mathcal{D}} \left( \frac{1}{2} \left( f_w(x) - f_{w^*}(x) \right)^2 \right)$ as best as possible.

  • Is there a real life deep-learning application which comes close to the above framework or can motivate the above algorithmic aim?
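I do not have a canonical reference either, but here is a minimal sketch of the data-generation process described above, in case it helps make the setup concrete (the teacher network standing in for $f_{w^*}$, the choice of $\mathcal{D}$, and the corruption $\theta$ are all placeholder choices of mine):

```python
import torch

torch.manual_seed(0)
n = 10                                   # input dimension
teacher = torch.nn.Linear(n, 1)          # placeholder for the fixed teacher f_{w*}

def theta(x: torch.Tensor) -> torch.Tensor:
    """Adversarial label shift; here just a fixed offset, purely as a placeholder."""
    return 0.5 * torch.ones(x.shape[0], 1)

def poisoned_batch(batch_size: int):
    x = torch.randn(batch_size, n)       # x ~ D (standard normal, for example)
    with torch.no_grad():
        clean = teacher(x)               # true labels f_{w*}(x)
    return x, clean + theta(x)           # adversary adds theta(x) to each label

x, y = poisoned_batch(4)
print(x.shape, y.shape)                  # torch.Size([4, 10]) torch.Size([4, 1])
```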

Bridging two VPN networks

I have a Raspberry Pi in my home network. The home network has a public IP, so I can reach the Raspberry Pi with a NAT rule in my router and thus create a VPN connection where the Raspberry Pi is the WireGuard server and my phone is the client. No problems doing that.

I also set up a connection between the Raspberry Pi and a VPN provider’s server using the OpenVPN protocol, so the Raspberry Pi is the client that successfully connects to the VPN server somewhere in the world.

Now I want to "join" the WireGuard tunnel to the OpenVPN tunnel. My ultimate goal is to have a private connection from one of my devices to my home network (where other services run) and from there to the VPN provider’s server.

I’m a bit confused about how to do it. Should I just set up iptables rules to NAT between the two networks, or create routes? Or should I perform some other operation?

I have only one physical interface and my config is the following:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether dc:a6:32:be:3f:4f brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.142/24 brd 192.168.1.255 scope global dynamic eth0
       valid_lft 54463sec preferred_lft 54463sec
    inet6 2001:b07:5d33:80a:dea6:32ff:febe:3f4f/64 scope global mngtmpaddr noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::dea6:32ff:febe:3f4f/64 scope link
       valid_lft forever preferred_lft forever
3: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether dc:a6:32:be:3f:50 brd ff:ff:ff:ff:ff:ff
4: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none
    inet 10.6.0.1/24 scope global wg0
       valid_lft forever preferred_lft forever
6: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
    link/none
    inet 10.203.8.54/24 brd 10.203.8.255 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 fe80::3db:1b4c:f152:ac9c/64 scope link stable-privacy
       valid_lft forever preferred_lft forever
12: tun2: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
    link/none
    inet 10.203.17.206/24 brd 10.203.17.255 scope global tun2
       valid_lft forever preferred_lft forever
    inet6 fe80::dbfb:9700:bb06:c263/64 scope link stable-privacy
       valid_lft forever preferred_lft forever
13: tun1: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
    link/none
    inet 10.203.56.142/24 brd 10.203.56.255 scope global tun1
       valid_lft forever preferred_lft forever
    inet6 fe80::91f:9469:616a:8360/64 scope link stable-privacy
       valid_lft forever preferred_lft forever

neural networks – Not enough information was provided to fully specify the output of the NetEncoder

I am using a UNET combined with a loss function based on a VGG16 network. The input of the UNET has an image NetEncoder attached to it, and the output has an image NetDecoder attached to it. The loss function has the same image NetEncoder attached to its two inputs.

I can set up and initialize the network without problems. When the NetTrain function is called, I get the following warning:

NetEncoder::decnfs: Not enough information was provided to fully specify the output of the NetEncoder.

However, the training of the network starts anyway and proceeds as it should. I could not find any information about what the warning means.

Below are two images showing the network and the loss function used, with information about the NetEncoders attached to the inputs. The training samples used are monochrome images of 256×256 pixels.

I can’t figure out which information is missing to fully specify the output of the NetEncoder. I am running this with MMA 12.2.

[Image: the network, with NetEncoder details for its input]

[Image: the loss function, with NetEncoder details for its two inputs]