digital signature – Why are LMS and XMSS not candidates in the Post-Quantum Cryptography Standardization process?

Why are the Leighton-Micali Signature Scheme (LMS) and the eXtended Merkle Signature Scheme (XMSS) not candidates in the NIST Post-Quantum Cryptography Standardization process?
Both are mentioned in the final draft of the Recommendation for Stateful Hash-Based Signature Schemes.

I was expecting both algorithms to be candidates in the standardization process as well, but it seems they were never even submitted. Can anyone explain why? If they are not considered candidates for a new standard, why does the Recommendation for Stateful Hash-Based Signature Schemes exist, and why does it mention exactly these two algorithms?

Is the recommendation just a temporary standard until the standardization process is finished?

Classification – What is the purpose of standardization in machine learning?

I'm just beginning to learn about k-nearest neighbors and have trouble understanding why standardization is needed. As I read through, I came across a passage that says:

When the independent variables in the training data are measured in different units, it is important to standardize the variables before calculating distance. For example, if one variable is based on height in cm and the other is based on weight in kg, then height will have a greater impact on the distance calculation.

Since k-nearest neighbors is just a comparison of distances, it matters when some of the variables have values with a much larger range, because those variables end up dominating the distance.

What exactly does standardization do to the values? One of the formulas given is Xs = (X - mean) / (max - min). Where does such a formula come from, and what does it really do? Hopefully someone can offer me a simplified explanation, or point me to a site or book that explains this in simple terms for beginners.
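To make the effect of the quoted formula concrete, here is a small sketch with made-up numbers (not from the original passage) that applies Xs = (X - mean) / (max - min) column by column and compares Euclidean distances before and after scaling:

    import numpy as np

    # Hypothetical data (for illustration only): height in cm, weight in kg.
    X = np.array([
        [160.0, 60.0],   # person A
        [190.0, 62.0],   # person B
        [162.0, 75.0],   # person C
    ])

    def euclidean(a, b):
        return np.linalg.norm(a - b)

    # Raw distances: the cm column produces much larger numeric differences
    # than the kg column, so it dominates the comparison.
    print("raw:    A-B =", euclidean(X[0], X[1]), " A-C =", euclidean(X[0], X[2]))

    # The quoted formula, Xs = (X - mean) / (max - min), applied per column:
    Xs = (X - X.mean(axis=0)) / (X.max(axis=0) - X.min(axis=0))

    # Scaled distances: both columns now span a comparable range, so height
    # and weight contribute to the distance on a more equal footing.
    print("scaled: A-B =", euclidean(Xs[0], Xs[1]), " A-C =", euclidean(Xs[0], Xs[2]))

In the raw data, the 30 cm height difference between A and B swamps the few-kilogram weight differences; after scaling, each feature is measured relative to its own mean and range, so neither unit dominates by accident.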

Predictability – Why is there no comprehensive standardization body monitoring ISAs, bitcodes, code representation, etc., as is the case with Unicode?

There are a variety of known bit-code formats that are each suitable for their specific task:

  • LLVM IR:

    This format is based on an XML-like binary data stream model that can be used as a common compiler target for a variety of architectures and languages.

  • JVM bytecode:

    Primarily the compilation target for Java; increasingly reused as a target by a wider ecosystem of languages.

  • Khronos SPIR-V:

    Shader target language, graphics hardware abstraction.

  • WebAssembly:

    Developed as a compilation target to better integrate existing code and ecosystems into the browser context.

  • Specialized (e.g. Lua and Python bytecodes):

    Interpreters generally execute some form of bytecode at runtime, whether it is exposed explicitly or not.

The hardware / ISA ecosystem is equally fragmented, in particular between ARM, i386/x86, x86_64, Microchip/Atmel, and now RISC-V.
There is also fragmentation in the computer graphics industry, which has outpaced CPU growth in many ways lately.
Hardware-acceleration API design is primarily split between Khronos and Microsoft, with Apple and Google now joining in. Even with the two leading vendors, backward compatibility has proven to be a difficult problem, resulting in increased fragmentation and overhead.

As far as I can see, these problems will only worsen in the future, given the current state of traditional Moore's Law, the increasing use of accelerators and FPGAs, the proliferation of the Internet of Things, the variety of storage and caching schemes, hyperscale computing, and so forth.

In the past, fragmentation was foreseen and avoided in the case of character-set localization. For this reason, the Unicode Consortium was founded independently of any single company or organization. The Unicode Consortium has been able to standardize well over 100,000 characters, and its adoption is virtually universal.

Theoretically, I see no reason why this would not be possible in the area of bit-codes / Turing-complete code representations.

https://www.quora.com/unanswered/Why-is-the-extensive-Standards-Organisation-for-Instruction-Set-Architecture-ISA-and-Bitcodes-as-the-is-the–Case-by-Unicode

In response to the community:

  • @Gilles Unicode acts as a superset of its predecessor ASCII. It is realized through the UTF-8, UTF-16 and UTF-32 encoding standards. Modern microprocessors are no strangers to extended opcode encodings in their instruction-decode pipelines. Every computer can emulate larger types than its baseline, and support at the architectural level is not uncommon.

Privacy – Anonymity: standardization or randomization?

Completely eliminating fingerprinting is not possible for a variety of technical reasons. In short, much of the data a fingerprint captures is required for the correct operation of the site, so you cannot withhold it from websites, or it can be recovered through side channels whose closure would require a major performance sacrifice. The original version of this answer contained an extensive list of specific reasons, but … it ran to more than two pages, so I removed it before posting. TL;DR: fingerprints are always there.

However, the damage can generally be minimized by standardization:

  • Trim the user-agent (UA) string to include only the browser and its major version, so less data about the operating system and hardware is leaked (see the sketch after this list).
  • Remove certain features that websites should not need but can use to extract additional information.
  • Encourage users to browse only in full-screen or half-screen mode, standardizing their window sizes to a small range of values that is still convenient.
  • Help users be more paranoid about cookies and demand stricter, more granular permissions before setting them.
  • Have the browser behave by default as if extensions were enabled, so that websites cannot detect whether functionality is absent or merely disabled.
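As an illustration of the first point above, here is a hypothetical sketch (not part of the original answer) of reducing a full user-agent string to just the browser name and major version before it is exposed:

    import re

    # Hypothetical helper: collapse a detailed user-agent string to
    # "<browser>/<major version>", dropping OS and hardware details that
    # would otherwise feed the fingerprint.
    def standardize_user_agent(ua: str) -> str:
        # Look for well-known browser tokens; order matters because Chrome's
        # UA also contains "Safari", and Edge's also contains "Chrome".
        for name, token in [("Edge", "Edg"), ("Chrome", "Chrome"),
                            ("Firefox", "Firefox"), ("Safari", "Version")]:
            m = re.search(rf"{token}/(\d+)", ua)
            if m:
                return f"{name}/{m.group(1)}"
        return "Browser/0"  # generic fallback bucket

    ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
          "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36")
    print(standardize_user_agent(ua))  # -> "Chrome/120"

Every user who ends up in the same bucket (for example Chrome/120) becomes indistinguishable along that dimension, which is exactly what standardization is after.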

Randomization could also be applied to these, but it carries the added risks that the randomization patterns themselves become fingerprintable, that they add no real value, and that they are harder to reason about. The temporary benefit of confusing fingerprinters would probably not be worth the harm of eventually yielding much more accurate fingerprints (the randomization currently cannot identify anyone, but since it is integrated into the browser's code, it eventually could).

In practice, standardization ultimately does a better job of reducing fingerprintability.

statistics – Why can't we use x/average for standardization instead of z-scores?

While reviewing data standardization and the theory behind z-scores, I had the following intuition.
For example, suppose you have the scores of people who took two different tests:

Test A (mean = 70%, std. dev. = 6%)
+---------------+-------+---------+---------+
| Participant # | Score | z-score | x / avg |
+---------------+-------+---------+---------+
| 1             | 60    | -1.66   | 0.85    |
| 2             | 65    | -0.83   | 0.92    |
| 3             | 80    | 1.66    | 1.14    |
| 4             | 90    | 3.33    | 1.28    |
| 5             | 40    | -5.00   | 0.57    |
| ...           |       |         |         |
+---------------+-------+---------+---------+

Test B (mean = 75%, std. dev. = 7%)
+---------------+-------+---------+---------+
| Participant # | Score | z-score | x / avg |
+---------------+-------+---------+---------+
| 1             | 60    | -2.14   | 0.80    |
| 2             | 70    | -0.71   | 0.93    |
| 3             | 80    | 0.71    | 1.06    |
| 4             | 90    | 2.14    | 1.20    |
| ...           |       |         |         |
+---------------+-------+---------+---------+

We can see that Participant #3 in Test A has a higher z-score than Participant #3 in Test B, i.e. he did relatively better than his counterpart.

I cannot find any information about a measure called x/avg, but my intuition is that it could serve as a proxy for standardized data.

I'm presumably wrong, since it is mentioned nowhere, but why?
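For reference, here is a small sketch (just restating the question's two measures in code, not an answer) that computes both values for Participant #3's score of 80 on each test:

    # Both measures from the question, using the means and std. devs. stated above.
    def z_score(x, mean, std):
        """How many standard deviations x lies away from the mean."""
        return (x - mean) / std

    def x_over_avg(x, mean):
        """The ratio from the question: the score divided by the test's mean."""
        return x / mean

    # Participant #3 scored 80 on both tests.
    print(z_score(80, 70, 6), x_over_avg(80, 70))   # Test A: ~1.67 and ~1.14
    print(z_score(80, 75, 7), x_over_avg(80, 75))   # Test B: ~0.71 and ~1.07

The only structural difference is what each formula divides by: the z-score divides the deviation from the mean by the standard deviation, while x/avg divides the raw score by the mean and never looks at the spread of the scores.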

Programming Practices – Is there some sort of standardization for error reporting?

I am looking for some kind of standardization, similar to how POSIX provides compatibility and familiarity across different command-line interfaces, but for error reporting. I am looking for:

  • Formatting rules
  • Standardization of logging practices
  • Rules for choosing error codes
  • Good practices for how much and what to include in the error report shown to the user
  • Deciding when to provide additional metadata about the state of the software, in case the user wants to inspect or report it
  • Support for multiple languages (i18n)

etc…