machine learning – Does Linear Discriminant Analysis perform dimensionality reduction before classification?

I’m trying to understand what LDA does exactly when used as a classifier. I understand how the dimensionality reduction works, and I understand that the classification task is carried out by applying Bayes’ theorem, but I still can’t figure out whether LDA performs both operations when used as a classification algorithm.

Is it correct to say that LDA as a classifier performs dimensionality reduction on its own and then applies Bayes’ theorem for classification?

If it makes any difference, I’ve used LDA in Python from the sklearn library.
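To make the two uses concrete, here is a minimal sketch of what I mean (using scikit-learn with a toy dataset; the dataset choice is just for illustration):

```python
# Minimal sketch, assuming scikit-learn and a toy dataset (names are illustrative).
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

# Used as a classifier: fit() models each class and predict() applies Bayes' rule
# on the original features; no explicit projection step is called here.
clf = LinearDiscriminantAnalysis()
clf.fit(X, y)
print(clf.predict(X[:5]))

# Used as a dimensionality-reduction transformer: transform() projects the data
# onto at most (n_classes - 1) discriminant directions.
X_reduced = LinearDiscriminantAnalysis(n_components=2).fit(X, y).transform(X)
print(X_reduced.shape)  # (150, 2)
```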

legal – How to go about learning cyber security if possessing such software (hacking software) is highly and explicitly illegal in my country and most others?

A question for “ethical hackers” or cyber security professionals. I am very interested in the world of cyber security and all aspects of it, genuinely and from a highly ethical and moral perspective. I have purchased self-teaching online courses for learning cyber security, which is also loosely dubbed “ethical hacking”.

At this point we should not debate my intentions for such activities, and I ask you to take them as genuinely ethical. I do understand that many high-level organizations, such as the NSA and intelligence agencies in many countries, monitor these activities heavily, so I’m sure any such research would be monitored one way or another; enough said on that.

My question is: how is it possible to learn in this direction if my country and many others forbid even the possession of “hacking” software, regardless of intent? Such software seems to be one of the main tools for learning about vulnerabilities and how to defend against them, both for one’s own security and for the security of others, as well as for my own software development.

While I understand that such laws are aimed at nefarious intentions, what about progress in this direction, and what about people who want to enter the field or simply learn for ethical reasons? According to the Criminal Code of Canada there is no grey area; see the first link below.

As an FYI, here is how I intended to go about learning “hacking” ethically: by using my computer to break into my old brick of a laptop and learning from there. I wouldn’t use such software to hack anyone else unethically; I’m just not interested.

References:

Understanding Canadian cybersecurity laws: Interpersonal privacy and cybercrime — Criminal Code of Canada (Article 4)

Is Ethical Hacking Legal? 3 Surprising Situations When It’s Not

machine learning – What algorithm do SVMs use to minimize their objective function?

Support Vector Machines turn linear classification tasks in machine learning into linear optimization problems:

$$ \text{minimize } J(\theta,\theta_0) = \frac{1}{n} \sum_{i=1}^{n} \text{HingeLoss}(\theta,\theta_0) + \frac{\lambda}{2} \|\theta\|^2 $$

My question is: what linear programming algorithm runs in the background to minimize the objective function $J$? Is it Simplex?
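To make the objective concrete, here is a rough subgradient-descent sketch of minimizing $J$ (just an illustration of one way this objective can be minimized, not a claim about what any particular library actually runs; the data and hyperparameters are made up):

```python
# Illustrative subgradient descent on J(theta, theta_0); a sketch only, not
# necessarily the solver any particular SVM library uses.
import numpy as np

def hinge_subgradient_step(theta, theta_0, X, y, lam, lr):
    """One subgradient step on (1/n) * sum HingeLoss + (lambda/2) * ||theta||^2.

    X: (n, d) features, y: (n,) labels in {-1, +1}.
    """
    n = X.shape[0]
    margins = y * (X @ theta + theta_0)
    active = margins < 1                      # points with non-zero hinge loss
    grad_theta = lam * theta - (y[active, None] * X[active]).sum(axis=0) / n
    grad_theta_0 = -y[active].sum() / n
    return theta - lr * grad_theta, theta_0 - lr * grad_theta_0

# Toy usage with made-up, linearly separable data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
theta, theta_0 = np.zeros(2), 0.0
for _ in range(500):
    theta, theta_0 = hinge_subgradient_step(theta, theta_0, X, y, lam=0.1, lr=0.01)
print(theta, theta_0)
```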

machine learning – False detections in YOLOv5

I trained a single-class model with YOLOv5. The dataset, which contains 400 images, is split into train, val and test sets. Here is a notebook of the train code.
The training gets a good result. However, when I build a real-time object detection application with the exported model, the result is not satisfactory: it can recognize the object as expected, but there are many false detections. The detection result:
[image: detection result]

In the result there is only one Pepsi, so why are the timestamp and the other small areas recognized as Pepsi?

Can someone tell me (a novice) the reason for this result?
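For reference, one common source of small spurious boxes is a low confidence threshold at inference time. Here is a sketch of how that threshold can be adjusted (assuming the model is loaded via torch.hub; the weight path, image name and threshold values are placeholders, not taken from the question):

```python
# Sketch assuming a YOLOv5 model loaded via torch.hub; 'best.pt', 'frame.jpg'
# and the threshold values are illustrative placeholders.
import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')
model.conf = 0.5   # discard detections below 50% confidence
model.iou = 0.45   # NMS IoU threshold

results = model('frame.jpg')  # a single frame from the real-time stream
results.print()
```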

machine learning – Which model to apply to panel data with many rows but only 6-8 rows per unique ID?

I am new to panel data like this, where I have multiple observations for the same ID in different quarters, and I am not sure what kind of machine learning algorithm I can apply.

I have data from Q1 2018 until Q4 2020.

I have 2,000,000 rows, 200,000 unique IDs and 20 columns.

For each ID I have only 6-8 past quarterly values; the maximum per ID is 8 quarters, and for some IDs I have only 6 quarters because a few quarterly values are not available for that ID.

Below is the basic idea of what my data set looks like:

Quarter – respective business quarter for that year

Target – the sales volume as a ratio

I am trying to predict the Target column for the Q1 2021 quarter.

I have 8-10 different numeric columns, plus state, quarter and ID as categorical columns.

I would appreciate it if someone could suggest what kind of modelling could be performed.

[image: sample rows of the dataset]
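For illustration, here is a rough baseline sketch of one kind of modelling that fits this shape of data (the file name, column names and lag construction are assumptions about the schema, not the actual data):

```python
# Sketch of a lag-feature baseline for quarterly panel data; the column names
# ('id', 'quarter_idx', 'target', 'state') are assumptions about the schema.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv('panel.csv')                  # one row per (id, quarter)
df = df.sort_values(['id', 'quarter_idx'])

# Lag features: the previous 1-2 quarters of the target for the same id.
for lag in (1, 2):
    df[f'target_lag{lag}'] = df.groupby('id')['target'].shift(lag)
df = df.dropna(subset=['target_lag1', 'target_lag2'])

df['state'] = df['state'].astype('category').cat.codes   # simple label encoding

# Hold out the most recent quarter as a validation set.
last_q = df['quarter_idx'].max()
train, valid = df[df['quarter_idx'] < last_q], df[df['quarter_idx'] == last_q]

features = ['state', 'target_lag1', 'target_lag2']        # plus other numeric columns
model = GradientBoostingRegressor()
model.fit(train[features], train['target'])
print(model.score(valid[features], valid['target']))      # R^2 on the held-out quarter
```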

deep learning – Should the loss value be divided by `seq_length`?

I’m new to deep learning and I’m looking at train.py here: https://github.com/huanghao-code/VisRNN_ICLR_2016_Text/blob/master/train.py.

I ran this code and the loss values are very high (attached). I think the loss should be divided by seq_length (which is defined as 70 in the config file). I mean, adding this code at line 61:
loss = loss / seq_length

Am I right?

```
(1,   10) loss: 239.628
(1,   20) loss: 199.850
(1,   30) loss: 179.427
(1,   40) loss: 167.309
(1,   50) loss: 161.275
(1,   60) loss: 153.391
(1,   70) loss: 150.944
(1,   80) loss: 148.133
(1,   90) loss: 144.675
(1,  100) loss: 139.971
(1,  110) loss: 143.109
(1,  120) loss: 139.113

  2%|▏         | 1/50 (01:05<53:49, 65.90s/it)Training for 2 epochs...
(2,   10) loss: 137.888
(2,   20) loss: 130.642
(2,   30) loss: 135.233
(2,   40) loss: 131.931
(2,   50) loss: 131.866
(2,   60) loss: 128.675
(2,   70) loss: 130.911
(2,   80) loss: 130.391
(2,   90) loss: 127.622
(2,  100) loss: 122.895
(2,  110) loss: 130.114
(2,  120) loss: 127.582

....


 96%|█████████▌| 48/50 (52:11<02:07, 63.99s/it)Training for 49 epochs...
(49,   10) loss: 100.158
(49,   20) loss: 96.960
(49,   30) loss: 97.161
(49,   40) loss: 95.056
(49,   50) loss: 96.596
(49,   60) loss: 97.120
(49,   70) loss: 100.471
(49,   80) loss: 101.352
(49,   90) loss: 98.160
(49,  100) loss: 94.098
(49,  120) loss: 102.465

 98%|█████████▊| 49/50 (53:15<01:03, 63.94s/it)Training for 50 epochs...
(50,   10) loss: 100.296
(50,   20) loss: 96.931
(50,   30) loss: 97.210
(50,   40) loss: 94.970
(50,   50) loss: 96.643
(50,   60) loss: 96.761
(50,   70) loss: 100.220
(50,   80) loss: 101.168
(50,   90) loss: 97.711
(50,  100) loss: 93.862
(50,  110) loss: 102.333
(50,  120) loss: 102.228
```
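For reference, a small sketch of the difference between a summed and a per-step loss (assuming a character-level setup like the linked repo; the shapes and names are illustrative):

```python
# Illustration of summed vs. per-step loss for a sequence model; assumes a
# character-level setup like the linked repo, with illustrative names.
import torch
import torch.nn.functional as F

seq_length, vocab_size = 70, 65
logits = torch.randn(seq_length, vocab_size)          # model outputs, one per step
targets = torch.randint(0, vocab_size, (seq_length,)) # true next characters

summed = F.cross_entropy(logits, targets, reduction='sum')   # grows with seq_length
per_step = summed / seq_length                                # comparable across lengths
print(summed.item(), per_step.item())
# per_step is equivalent to F.cross_entropy(..., reduction='mean') here.
```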

Reference request: Learning runtime analysis (Complete material)

I am curious about learning runtime analysis beyond what MIT OpenCourseWare provides (the 6.006 course with Erik Demaine) and what CLRS offers, which is a nice explanation of asymptotic notations and the master theorem (plus its proof). The book I have (which is a guide to CLRS) also covers a little bit of the Akra-Bazzi method.
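For context, this is the Akra-Bazzi result I mean, stated in its simplest form (without the perturbation terms): for recurrences

$$ T(x) = g(x) + \sum_{i=1}^{k} a_i \, T(b_i x), \qquad a_i > 0, \quad 0 < b_i < 1, $$

the solution is

$$ T(x) \in \Theta\!\left( x^{p} \left( 1 + \int_{1}^{x} \frac{g(u)}{u^{p+1}} \, du \right) \right), \qquad \text{where } p \text{ solves } \sum_{i=1}^{k} a_i b_i^{p} = 1. $$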

I would like to learn more about this topic, all the way down to its core, including more methods (other than the iteration method, the master theorem and the other methods CLRS provides) for computing the runtime; the resource should also contain some known exercises (or even bizarre ones).

If you have a good resource (preferably a free, open PDF book), that would be amazing!

The resources I have now read/watched:

  • CLRS (second edition); I know there is a third edition, but they did not add anything about runtime analysis

  • A guide to CLRS (it is not an open book, I have a hard copy), which contains a really small paragraph about Akra-Bazzi

  • MIT OpenCourseWare 6.006 Intro to Algorithms (Erik Demaine & Srini Devadas) YouTube playlist (the part about runtime + other videos I found interesting)

  • MIT OpenCourseWare 6.006 Intro to Algorithms class notes and questions

I tried looking this up on the internet, but the topic is like a ghost: I cannot find a PDF book about it in a complete version (covering all there is to know about runtime analysis, just as there are complete books on number theory or calculus, for example).

(1) By complete I don’t mean 100%, of course; just a wide-ranging book on this specific topic, with deeper explanations.

Thank you for reading, and suggesting!

neural networks – Looking for references on real-world scenarios of data-poisoning attacks on labels in supervised learning

Consider the following mathematical model of training a neural net: Suppose $f_{w} : \mathbb{R}^n \rightarrow \mathbb{R}$ is a neural net whose weights are $w$. Suppose during training the adversary is sampling $x \sim \mathcal{D}$ from some distribution $\mathcal{D}$ on $\mathbb{R}^n$ and sending in training data of the form $(x, \theta(x) + f_{w^*}(x))$, i.e. the adversary is corrupting the true labels generated by $f_{w^*}$ (for some fixed $w^*$) by adding a real number to them.

Now suppose we want an algorithm which uses such corrupted training data and tries to get as close to $w^*$ as possible, i.e. despite getting data corrupted in the above way, the algorithm tries to minimize (over $w$) the “original risk” $\mathbb{E}_{x \sim \mathcal{D}} \left( \frac{1}{2} \left( f_w(x) - f_{w^*}(x) \right)^2 \right)$ as best as possible.

  • Is there a real-life deep-learning application which comes close to the above framework or can motivate the above algorithmic aim?
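To make the setup concrete, here is a small toy simulation of the framework above (everything in it, including the network architecture, the corruption playing the role of $\theta(x)$, and the naive training loop, is illustrative rather than taken from any reference):

```python
# Toy simulation of training on labels corrupted as y = theta(x) + f_{w*}(x);
# the network size, corruption and optimizer are all illustrative choices.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d = 2000, 10

def make_net():
    return nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))

f_star = make_net()                      # plays the role of f_{w*}
for p in f_star.parameters():
    p.requires_grad_(False)

X = torch.randn(n, d)
corruption = 0.5 * torch.sign(X[:, :1])  # adversarial shift theta(x)
y = f_star(X) + corruption               # corrupted training labels

f_w = make_net()
opt = torch.optim.Adam(f_w.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = 0.5 * (f_w(X) - y).pow(2).mean()   # naive ERM on corrupted labels
    loss.backward()
    opt.step()

# "Original risk": how far f_w is from f_{w*} on the clean labels.
with torch.no_grad():
    risk = 0.5 * (f_w(X) - f_star(X)).pow(2).mean()
print(risk.item())
```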