Transitional Systems – Intuition behind Moore's Law

I recently read a Quora answer from a reputable author who said that Moore's Law has remained valid even when people expected reality to deviate from it, and who explained why it nevertheless would not last much longer.

I want to understand how Gordon Moore came up with this law. He had only seen a few years of data, so how was that sufficient to make such a statement and claim that it would remain valid for a long time?
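Part of what made the claim so striking is how quickly a fixed doubling period compounds. A small sketch of the arithmetic (the starting count of 64 components and the 2-year period here are illustrative assumptions, not Moore's actual data):

```python
# Compound growth under a fixed doubling period (illustrative numbers only).
def transistors(n0, years, doubling_period=2):
    """Projected component count after `years`, doubling every `doubling_period` years."""
    return n0 * 2 ** (years / doubling_period)

# Starting from 64 components, a 2-year doubling gives a 1024x increase in 20 years:
print(transistors(64, 20))        # 65536.0
print(transistors(64, 20) / 64)   # 1024.0
```

So even a short run of data, if the doubling period holds, extrapolates to enormous factors within two decades, which is exactly what makes the longevity of the trend surprising.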

co.combinatorics – From Steiner systems to geometric lattices to matroids

I'm looking for a specific matroid. I've found a source that claims to discuss these matroids, but it then talks only about geometric lattices. Even the geometric lattice that seems to be the right one is described in this paper only as

… the lattice associated with the Steiner system $S(3,6,22)$,

It may be clear to some how to translate between all these different constructions, but I find it hard to find a source that explains (even in a nutshell) how these concepts are related.

I assume that the matroid of a geometric lattice $\mathcal L$ is defined on the set of atoms of $\mathcal L$, and that independence of atoms $a_1, \dots, a_n \in \mathcal L$ means that the supremum $a_1 \vee \cdots \vee a_n$ has rank $n$,
but that's just a guess.
In addition, the lattice coming from $S(3,6,22)$ is said to have rank 3, but not much more is said about that.

Can someone tell me how to obtain the matroid from $S(3,6,22)$?

There are actually two articles that I'm talking about.
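For reference, here is the construction I have pieced together so far, though I am not certain it is the intended one: the blocks of a Steiner system $S(t,k,n)$ are taken as the hyperplanes of a paving matroid of rank $t+1$ on the $n$ points. For $S(3,6,22)$ this would give the rank function

```latex
% Blocks of S(t,k,n) as hyperplanes of a rank-(t+1) paving matroid,
% specialized to S(3,6,22), whose blocks B have 6 points each:
\operatorname{rk}(A) =
\begin{cases}
|A| & \text{if } |A| \leq 3, \\
3 & \text{if } |A| \geq 4 \text{ and } A \subseteq B \text{ for some block } B, \\
4 & \text{otherwise,}
\end{cases}
\qquad A \subseteq E = \{1, \dots, 22\}.
```

Note that this would yield rank 4 rather than the rank 3 quoted above, which is part of what confuses me; possibly the quoted source means a truncation or uses a different convention.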


Operating Systems – Why is the kernel stored in a virtual memory area?

I have read that:

The kernel virtual address space (KVA) is the virtual space in which all Linux kernel threads reside

Why does an operating system need to use virtual addressing for itself? Why is the physical memory not being used directly?

The reason for this question is my curiosity about how Linux handles physical memory.

I know that the MMU is responsible for maintaining the virtual-to-physical address mapping, and that this is enforced by the hardware itself, but the kernel itself certainly has access to physical memory, right?

I tried to write a "kernel module" to get access to physical memory, but apparently even the kernel itself runs in a virtual address space.

Is there a way to access physical memory, short of writing my own kernel from scratch?
There is no reason to try this other than fun and learning.
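This does not give raw physical access, but as a Linux-specific illustration of the virtual-to-physical mapping the kernel maintains for a process, /proc/self/pagemap can be read from user space. A sketch (since Linux 4.2 the physical frame number is zeroed out for unprivileged readers, but the flag bits remain visible):

```python
import os
import struct

# One 64-bit entry per virtual page of this process (Linux-specific interface).
PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")

def pagemap_entry(vaddr):
    """Read the 64-bit /proc/self/pagemap entry for the page containing vaddr."""
    with open("/proc/self/pagemap", "rb") as f:
        f.seek((vaddr // PAGE_SIZE) * 8)  # entry index = virtual page number
        (entry,) = struct.unpack("<Q", f.read(8))
    return entry

buf = bytearray(b"x") * PAGE_SIZE      # allocate and touch a page
entry = pagemap_entry(id(buf))         # id() is CPython's object address
print("page present:", bool(entry >> 63 & 1))
# Bits 0-54 hold the physical frame number, readable only with CAP_SYS_ADMIN.
```

For actual raw access, /dev/mem exists but is typically restricted by CONFIG_STRICT_DEVMEM on modern distributions.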

File Systems – How do I stop the system from searching /usr/local for binaries, libraries and includes?

I find it hard to compile things using only the dependencies in the original system directories, because different versions of libraries and tools are installed in /usr/local.

g++ always includes header files from /usr/local/include but links against libraries in /usr/lib, which causes a lot of confusion.

My quick fix is to tell Linux (Ubuntu) not to look in /usr/local and to rely only on locations outside that directory: the places where packages are installed by default.

The sad part is that I can't just delete what's in /usr/local, because some applications use the libraries there.

How can this be done?
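One approach I have experimented with is to disable g++'s built-in header search entirely with -nostdinc and re-add every default directory except /usr/local/include. A sketch as a Makefile fragment; the version numbers and target triplet below are hypothetical and must be copied from the real search list, which `g++ -E -x c++ /dev/null -v` prints:

```make
# Sketch: rebuild the default include search path without /usr/local/include.
# Replace the versioned paths with the ones your own g++ reports.
NO_LOCAL_INC := -nostdinc \
                -isystem /usr/include/c++/11 \
                -isystem /usr/include/x86_64-linux-gnu/c++/11 \
                -isystem /usr/lib/gcc/x86_64-linux-gnu/11/include \
                -isystem /usr/include/x86_64-linux-gnu \
                -isystem /usr/include

CXXFLAGS += $(NO_LOCAL_INC)
LDFLAGS  += -L/usr/lib/x86_64-linux-gnu   # link against the distro libraries explicitly
```

On the linker side, `ld --verbose | grep SEARCH_DIR` shows which directories (including /usr/local/lib) the default linker script searches, so you can verify what is actually being picked up.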

Web Application – Securing multiple systems accessing the same data

I have run into a hurdle regarding security when managing server access rights.

At the moment I'm running a community platform on which communities can create subservers. Community A can, for instance, allow certain users to moderate, change settings, invite users, read logs, etc. on its own subserver, but not on others.

My current system has a global user; this user's permissions are structured as follows:

 "id" :"their unique id",
 "username" : "username",
 "globalRole" : "user",
 "permissions": [
      "resource" : "guilds_id_here",
      "permissions" : [
           "resource" : "guild.logs",
           "read" : true,
           "write" : false


A user's access is resource-based: whenever he tries to change, read, or otherwise act on something through my API or socket, I check that he has access to the resource in question.

Managing permissions through the API is pretty easy for me: I intercept the request, grab the resource, check whether the user may perform the intended action (for example, reading a log or inviting a user), and reject the API call before it reaches the controller if not.

The main problem I have now is maintaining multiple access paths. I have both a REST API and a WebSocket that can reach the same types of data, depending on where the guild is accessed from.

Now the authorization system has become much more complicated, and it is no longer so easy to intercept and block a request from the REST API. I effectively have two authorization systems, which I think is wrong and violates the DRY principle.

I'd like to know whether there are industry standards for access from multiple entry points. Should I create a resource manager that always requires credentials and the target resource, with a system user for internal access, or is there a simpler standard for tight control over who can do what, based on the permissions they hold for a specific resource?

The ultimate goal is to check permissions against an object and properly filter out the data that the requesting user is authorized to see.
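One common way to avoid duplicating checks is to centralize them in a single transport-agnostic function that both the REST layer and the WebSocket layer call before touching a resource. A minimal sketch (the nested structure mirrors the permissions document above; all names are illustrative, not a specific framework's API):

```python
# Minimal sketch of a transport-agnostic permission check shared by
# REST and WebSocket handlers. Names are illustrative only.

def find_resource_perms(user, guild_id, resource):
    """Return the permission entry for `resource` inside `guild_id`, or None."""
    for guild in user["permissions"]:
        if guild["resource"] == guild_id:
            for perm in guild["permissions"]:
                if perm["resource"] == resource:
                    return perm
    return None

def can(user, guild_id, resource, action):
    """Single authorization entry point; `action` is e.g. "read" or "write"."""
    perm = find_resource_perms(user, guild_id, resource)
    return bool(perm and perm.get(action, False))

user = {
    "id": "u1",
    "username": "alice",
    "globalRole": "user",
    "permissions": [
        {"resource": "guild-42",
         "permissions": [{"resource": "guild.logs", "read": True, "write": False}]},
    ],
}

print(can(user, "guild-42", "guild.logs", "read"))   # True
print(can(user, "guild-42", "guild.logs", "write"))  # False
```

The REST middleware and the WebSocket message handler then both reduce to one call to `can(...)`, so the policy lives in exactly one place.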

ds.dynamical systems – On invariant cones of the Katok map

I'm studying the Katok map and similar examples of nonuniformly hyperbolic surface diffeomorphisms. An important part of the analysis of these diffeomorphisms is the invariance of a continuous family of stable and unstable cones, but I have trouble understanding part of the argument. (I apologize for not tying this question to one particular paper, but I have seen this reasoning often enough to believe that I am missing something fundamental. The core of the relevant construction is summarized below.)

The Katok map is derived from the hyperbolic automorphism $\left(\begin{smallmatrix} 2 & 1 \\ 1 & 1 \end{smallmatrix}\right)$ of the torus $\mathbb{T}^2$, which in a neighborhood of the origin has coordinates $(s_1, s_2)$ in which the automorphism is the time-1 map of the flow generated by the following vector field:
$$\dot s_1 = s_1 \log \lambda, \quad \dot s_2 = -s_2 \log \lambda,$$

where $\lambda > 1$ is an eigenvalue of the above matrix. To construct the Katok map, we slow down the map in this neighborhood using a function $\psi \colon [0,1] \to [0,1]$ satisfying $\psi(0) = 0$, $\psi'(u) > 0$ for all $u \in (0,1)$, $\psi(u) = 1$ for all $u \geq r_0$ (for some $r_0 > 0$), and $\int_0^1 \psi(u)^{-1} \, du < \infty$. We define the map $G$ to be the above automorphism outside of $D_{r_0} \subset \mathbb{T}^2$, and inside this disc the time-1 map of the flow generated by the vector field $V_\psi$ defined by:
$$\dot s_1 = s_1 \psi(s_1^2 + s_2^2) \log \lambda, \quad \dot s_2 = -s_2 \psi(s_1^2 + s_2^2) \log \lambda.$$

(Strictly speaking, we then conjugate by a homeomorphism to make this map preserve Lebesgue measure, but it suffices to consider the map we have just constructed.)

We let $(\xi_1, \xi_2)$ be the standard coordinates on $T_x \mathbb{T}^2$ for $x \in \mathbb{T}^2$, and define the unstable and stable cones respectively as
$$\begin{aligned}
K^+_x &:= \left\{ (\xi_1, \xi_2) \in T_x \mathbb{T}^2 : |\xi_1| \geq |\xi_2| \right\}, \\
K^-_x &:= \left\{ (\xi_1, \xi_2) \in T_x \mathbb{T}^2 : |\xi_1| \leq |\xi_2| \right\}.
\end{aligned}$$

The goal is to prove that $dG_x(K^+_x) \subset K^+_{G(x)}$ and $dG_x^{-1}(K^-_x) \subset K^-_{G^{-1}(x)}$. The argument for $K^+_x$ is as follows (paraphrased from several sources); the argument for $K^-_x$ is similar:

The linear part of the vector field $V_\psi$ is
$$\begin{aligned}
\frac{d\xi_1}{dt} &= (\log \lambda) \left( \xi_1 \left( \psi + 2 s_1^2 \psi' \right) + 2 s_1 s_2 \, \xi_2 \, \psi' \right), \\
\frac{d\xi_2}{dt} &= -(\log \lambda) \left( 2 s_1 s_2 \, \xi_1 \, \psi' + \xi_2 \left( \psi + 2 s_2^2 \psi' \right) \right).
\end{aligned}$$
The equation for the tangent $\alpha = \xi_2 / \xi_1$ is
$$\frac{d\alpha}{dt} = -2 (\log \lambda) \left( \alpha \left( \psi + \left( s_1^2 + s_2^2 \right) \psi' \right) + s_1 s_2 \left( \alpha^2 + 1 \right) \psi' \right).$$
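For what it's worth, I can reproduce this equation by applying the quotient rule to $\alpha = \xi_2/\xi_1$ (my own computation, using the two $d\xi_i/dt$ equations above):

```latex
\begin{aligned}
\frac{d\alpha}{dt}
  &= \frac{\dot\xi_2 \, \xi_1 - \xi_2 \, \dot\xi_1}{\xi_1^2}
   = \frac{\dot\xi_2}{\xi_1} - \alpha \, \frac{\dot\xi_1}{\xi_1} \\
  &= -(\log \lambda)\left( 2 s_1 s_2 \psi' + \alpha \left( \psi + 2 s_2^2 \psi' \right) \right)
     - \alpha (\log \lambda)\left( \psi + 2 s_1^2 \psi' + 2 s_1 s_2 \, \alpha \, \psi' \right) \\
  &= -2 (\log \lambda)\left( \alpha \left( \psi + \left( s_1^2 + s_2^2 \right) \psi' \right)
     + s_1 s_2 \left( \alpha^2 + 1 \right) \psi' \right).
\end{aligned}
```

So the algebra itself checks out; my confusion is about the interpretation, as described below.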

Substituting $\alpha = 1$ and $\alpha = -1$ into this equation gives us
$$\begin{aligned}
\left. \frac{d\alpha}{dt} \right|_{\alpha = 1} &= -2 (\log \lambda) \left( \psi + \left( s_1 + s_2 \right)^2 \psi' \right) \leq 0, \\
\left. \frac{d\alpha}{dt} \right|_{\alpha = -1} &= 2 (\log \lambda) \left( \psi + \left( s_1 - s_2 \right)^2 \psi' \right) \geq 0.
\end{aligned}$$

These inequalities are strict everywhere except at the origin, and this proves the cone invariance.

This argument confuses me. In the equations for $d\xi_i/dt$, is $(\xi_1(t), \xi_2(t))$ supposed to be the vector field $V_\psi$ evaluated at a point along a specific trajectory? If so, then $\xi_i = \dot s_i$, and $\alpha$ would be the slope of the tangent vector at that point. The calculations work out in this case, but that seems too restrictive to be useful. Is that what they mean by "tangent"? How does this prove the invariance of $K^+_x$ under $dG$?

Terminology – Does the Hack Computer of The Elements of Computing Systems Use the Von Neumann Architecture?

I read "The Elements of Computing Systems" (subtitle "Building a Modern Computer from First Principles – Nand to Tetris Companion") by Noam Nisan and Shimon Schocken.

Chapter 4 deals with machine language, and in particular with the machine language used on the Hack computer platform. Section 4.2.1 says this about Hack:

The Hack computer is a von Neumann platform. It is a 16-bit machine consisting of one CPU, two separate memory modules serving as instruction memory and data memory, and two memory-mapped I/O devices: a screen and a keyboard.

The CPU can only execute programs that reside in the instruction memory. The instruction store is a read-only device into which programs are loaded with exogenous means.

Is this distinction between instruction memory and data memory really a von Neumann architecture? In my understanding of the difference between von Neumann and Harvard architectures, this description sounds much more like a Harvard architecture.