filesystems – Why has Windows used NTFS for 20+ years, while many different filesystems have trended in the Linux community over the same time?

I’m a first-year MS CS student, and my data structures class has inspired me to research file systems and their implementations. I recall using ext2, then ReiserFS, then ext3, then ext4, and now btrfs seems to be the new thing. I understand (more or less) what changed between each of these and their relative improvements, but what I don’t understand is how NTFS has stayed relevant over roughly the same period (it looks like the last major version of NTFS shipped with Windows XP).

Was NTFS simply that well spec’d and designed from the beginning, or has Windows been working around NTFS deficiencies in the interest of not having to rewrite core parts of Windows from scratch? If that is the case, why are Linux distros so much more flexible in changing filesystems (the user can even select a different FS at install time)?

cryptography – Why public key systems involve private keys

Public key cryptography means that the entire communication between both parties is public, including the setup. Contrast this with the case of two parties $A,B$ meeting in secret, agreeing on some keyword, and using this keyword to encrypt future communications.

Clearly, if $A,B$ decide on the encryption scheme in public, something has to be kept private (otherwise anyone could decipher the messages just like the parties involved). This is the private key, so the flow is something along the following lines: $A$ and $B$ publicly discuss and share some information with each other and the world, then they each do something in private and send each other encrypted messages. Witnesses to the public exchange alone can’t recover what is being said.

The child’s version of such a scheme which I like is the following. Suppose $A$ and $B$ want to agree on some secret color, known only to them, yet the entire exchange must be public. Under the assumption that mixing colors is easy, but recovering the components of a given mixture is hard, they could do the following: $A$ and $B$ each choose a secret (private-key) color, denoted $a$ and $b$. Then $A$ sends $B$ a common base color $c$ (the public key) together with the mixture $(a,c)$. $B$ creates the mixture $(b,c)$ and sends it to $A$, and also adds his secret $b$ to the received $(a,c)$ to obtain $(a,b,c)$, which he keeps to himself. Finally, $A$ adds $a$ to $(b,c)$ and is now also in possession of the secret mixture $(a,b,c)$, known to $A$ and $B$ but unknown to anyone who solely witnessed the interaction between them.
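
This color story is the classic illustration of Diffie–Hellman key exchange. Here is a minimal Python sketch of the arithmetic version – my own illustration, with toy, insecure parameters; real deployments use much larger primes or elliptic curves:

import secrets

p = 4294967291   # toy public prime; together with g it plays the role of the public color c
g = 5            # public generator

a = secrets.randbelow(p - 2) + 1   # A's private color a
b = secrets.randbelow(p - 2) + 1   # B's private color b

A_mix = pow(g, a, p)   # the mixture (a, c): safe to publish
B_mix = pow(g, b, p)   # the mixture (b, c): safe to publish

# Each party adds its own private color to the other's mixture.
secret_A = pow(B_mix, a, p)
secret_B = pow(A_mix, b, p)
assert secret_A == secret_B   # both now hold the secret mixture (a, b, c)

Mixing corresponds to modular exponentiation: easy to compute, while recovering $a$ from $g^a \bmod p$ (the discrete logarithm) is believed to be hard – exactly the “unmixing is hard” assumption above.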

How do I properly sync data between two Windows 10 systems on the same LAN using Robocopy, a .bat file, and Task Scheduler?

I’ve read a number of posts regarding the use of robocopy to sync data between two Windows systems. I tried various configurations, and the settings I currently have in place are what seemed to work best for most users.

System A runs Windows 10 Home, and its desktop is shared to a Microsoft user account w/ full privileges.
System B runs Windows 10 Pro, and its desktop is shared to the same Microsoft user account w/ full privileges.

The .bat files were stored on each system’s respective desktop and scheduled to run every three minutes.

System_A sync.bat:
cd C:\Users\username\Desktop
robocopy C:\Users\username\Desktop\directory_to_sync "\\System_B\Desktop\directory_to_sync" /E /MIR /mt /z

System_B sync.bat:
cd C:\Users\username\OneDrive\Desktop
robocopy C:\Users\username\OneDrive\Desktop\directory_to_sync "\\System_A\Desktop\directory_to_sync" /E /MIR /mt /z

Using System_A’s sync.bat as an example, I set the task to run w/ highest privileges, and I configured it for Windows 10, since it defaulted to Vista/Server 2008. I triggered it to run at task creation/modification, repeating every three minutes indefinitely, stopping only if the task were to run longer than three hours. I set it active as of a time earlier this morning, and I checked the option to synchronize across time zones.

The Actions tab is where most of the posts I’d read made changes, with varying degrees of success.
My configuration is as follows:

Action: Start a program
Program/script: cmd
Add arguments (optional): /c sync.bat (Note: The /c was auto-added by Windows for whatever reason.)
Start in (optional): C:\Users\username\Desktop

The job history reports that it completes w/ an operational code of 2, but nothing is synced. I’m out of ideas, so any help would be greatly appreciated. Thank you.

concurrency – Why does taking advantage of locality matter in multithreaded systems?

As we all know, when a given thread/process accesses a memory address it does not have cached, execution will (for the most part) stall until that data is fetched from memory. What I don’t understand is why, in multithreaded systems, we can’t save ourselves the headache of data-oriented design. Why can’t the processor/OS simply do work on a different thread until the data arrives?

I couldn’t find a good post on this exact question, and it may just be obvious to others. I only know so much about the pipeline and such, so there could be a very obvious reason that I simply don’t know.

operating systems – Renaming a file in a linear list-based directory

Consider a linear list-based directory implementation in a file system. Each directory is a list of nodes, where each node contains the file name along with the file metadata, such as the list of pointers to the data blocks. Consider a given directory foo.

Which of the following operations will necessarily require a full scan of foo for successful completion?

A. Creation of a new file in foo

B. Deletion of an existing file from foo

C. Renaming of an existing file in foo

D. Opening of an existing file in foo

I understand how the creation of a new file necessarily requires a full scan, but I’m confused about how renaming a file necessarily requires one. Please discuss how renaming is done in the directory structure.
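
For concreteness, here is a minimal Python sketch – my own illustration, with made-up names – of a rename in a linear-list directory. The point is that even after the entry to rename has been found, the scan must continue to the end of the list to make sure the new name does not collide with an existing entry, so a full scan is always required:

class DirEntry:
    def __init__(self, name, metadata):
        self.name = name
        self.metadata = metadata   # e.g., list of pointers to data blocks

class LinearDirectory:
    def __init__(self):
        self.entries = []          # linear list of directory nodes

    def rename(self, old_name, new_name):
        target = None
        for entry in self.entries:             # must visit every node
            if entry.name == new_name:
                raise FileExistsError(new_name)    # new name already taken
            if entry.name == old_name:
                target = entry                 # found, but keep scanning
        if target is None:
            raise FileNotFoundError(old_name)
        target.name = new_name     # metadata stays attached to the node

Creation needs the same full collision scan; deletion and opening, by contrast, can stop as soon as the matching entry is found.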

usability testing – How do you prototype systems that are normally connected to Active Directory or other complex external systems?

I am working on a product that has a quite typical setup for enterprise software: it is usually connected to the organization’s Active Directory, authenticates its users against it, and fetches their group membership information from it. Permissions within the product are assigned to the groups that come from AD. For tiny installations and in test scenarios it is possible to add local users and groups, but in production usage it is almost always integrated with Active Directory.

We are planning some pretty significant changes to how permission settings can be made, and the mockups for the changes tested well when local users & groups were used. We would now like to see if the interface works well in a more realistic scenario, where the product is connected to AD and we have thousands of users and groups.

I was wondering whether you have any experience or insight on how to do user tests in such a situation. Creating and maintaining a fake, internet-facing AD installation seems like overkill for this purpose and could cause problems during the test as well; besides, it would be impossible to connect a real AD to the wireframe we want to test. Creating a mock AD user management interface would also take tons of time and would probably still be quite far from how that UI normally works.

Do you have any experience with this, or more generally with doing wireframe tests of systems that are normally connected to large, complex external systems in production?

Are there any applications of a halting-decidable system to studying security problems in distributed systems?

There’s a bounded system S (for which halting is decidable) that can be used to model both reliable and unreliable asynchronous communication. Are there any applications of this system to security problems in distributed systems?

root systems – Symmetry in complex semisimple Lie algebras – help understanding a definition

I got stuck on the definition of “symmetry” in the chapter on Lie algebras, which is needed later for root systems. In the lecture notes they used the following definition:
Let $\alpha \in V \setminus \{0\}$. A symmetry with vector $\alpha$ is an element $s \in GL(V)$ with $$s(v) = v - \alpha^*(v)\alpha$$
for all $v \in V$, where the linear form $\alpha^*$ satisfies $\alpha^*(\alpha) = 2$.
Now Serre’s book “Complex Semisimple Lie Algebras” gives another definition:
Let $\alpha \in V \setminus \{0\}$. One defines a symmetry with vector $\alpha$ to be any automorphism $s$ of $V$ satisfying the following two conditions:
(i) $s(\alpha) = -\alpha$

(ii) The set $H$ of elements of $V$ fixed by $s$ is a hyperplane of $V$.

I now don’t see the relation between these two definitions. Especially the first condition in the second definition confuses me a lot. Many thanks for some help.
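
For what it’s worth, here is a quick check – my own working, not from either source – that a symmetry in the first sense satisfies both conditions of the second. Taking $v = \alpha$ gives
$$s(\alpha) = \alpha - \alpha^*(\alpha)\alpha = \alpha - 2\alpha = -\alpha,$$
which is (i), while the fixed set is
$$\{v \in V : s(v) = v\} = \{v \in V : \alpha^*(v) = 0\} = \ker \alpha^*,$$
the kernel of a nonzero linear form and hence a hyperplane, which is (ii).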

ag.algebraic geometry – Why are critical points important for dynamical systems?

I have just started reading a little about (arithmetic) dynamics, and it seems like critical points are very important – for instance, rational maps whose critical points have finite forward orbits (PCF maps?) seem to be an important object of study. For instance, the algebraic numbers $c$ such that $0$ has a finite forward orbit under $z \to z^2 + c$ seem to be quite important.
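
As a concrete toy, here is a small Python sketch – my own illustration – that tests whether the critical orbit of $0$ under $z \to z^2 + c$ is finite by looking for a repeated value (exact for the small integer and Gaussian-integer examples below; floating point could mislead for general $c$):

def critical_orbit_is_finite(c, max_steps=1000):
    seen = set()
    z = 0
    for _ in range(max_steps):
        if abs(z) > 2 and abs(z) >= abs(c):
            return False           # standard escape bound: the orbit tends to infinity
        if z in seen:
            return True            # the orbit revisits a point, hence is finite
        seen.add(z)
        z = z * z + c
    return False                   # inconclusive within max_steps

print(critical_orbit_is_finite(-1))   # True: 0 -> -1 -> 0 -> ... (the "basilica")
print(critical_orbit_is_finite(1j))   # True: 0 -> i -> i-1 -> -i -> i-1 -> ...
print(critical_orbit_is_finite(1))    # False: 0 -> 1 -> 2 -> 5 -> 26 -> ... escapes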

It’s a little hard for me to understand why critical points should be important. I have heard, for instance, that PCF maps are associated to (infinite) Galois groups with finitely many ramified places – this is very interesting, and I would appreciate a precise reference. I have also heard that they are in some sense vaguely analogous to special CM points on modular varieties, and any elaboration along these lines would also be very welcome.

Given the interest in PCF maps, I am sure there must be other reasons, and I would be very happy to hear about any of them.

legal – Which security standard or compliance framework bans admin privileges across all company systems?

Someone in my company wants to be granted admin privileges on every single application, infrastructure and network component, just because he is a senior IT manager. I don’t have a security background, but it sounds like bad practice. My team is not comfortable with it, because we are responsible for our application and we don’t want someone external to the team to be able to change the system and impact our users. Even if we can set controls on the admin account, it is not very reassuring.

My team has expressed this concern to the top-level managers. Those people, who have non-technical backgrounds, ask us whether we can provide precise articles from state laws, security standards or compliance requirements that explicitly say that no one should get admin access everywhere.

Where should I look? Could you please point me to some lines/paragraphs about this in security standards, laws, compliance documents, or auditors’ materials?