hard drive – HDD, Windows 10 Home 64-bit: which file system should I create?

I want to buy an HDD and an external hard drive enclosure, and I will connect it to the computer via USB. I will use this HDD for system backups, backups of my current sound work files, and my sound work archive. I run Windows 10 Home 64-bit. When I buy the HDD, will it be unformatted? Which file system should I create?
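For reference, a small Python sketch (it assumes the third-party psutil package is installed; a completely unpartitioned disk will not appear in this list at all) that shows what file system each currently mounted drive reports:

# List every mounted partition and the file system it currently uses, so you
# can see whether a freshly attached USB drive is already formatted.
# Assumes the third-party "psutil" package is installed.
import psutil

for part in psutil.disk_partitions(all=False):
    # On Windows, device looks like "E:\\" and fstype like "NTFS" or "exFAT";
    # an empty fstype means the volume is mounted but its format is unknown.
    print(f"{part.device:10} mounted at {part.mountpoint:10} "
          f"file system: {part.fstype or '(none reported)'}")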

hard drive – Differential diagnosis: source of password error with external HD

I have a LaCie Rugged external HD that has had issues with the password for its disk-level encryption, resulting from being formatted in macOS Recovery for use with Time Machine. On a number of occasions, when I needed to enter the password for various housekeeping purposes, it would suddenly stop being accepted on a given day. Sometimes I was able to unlock the disk by booting into the startup recovery mode and using Disk Utility to unlock it and change the password, but on a few occasions I had to give up and reformat the disk to get it working normally.

I have no idea why I persisted in using this disk, but nonetheless…

This has happened again. Since a few months went by between the last time I hooked up this drive and my most recent attempt, I have been second-guessing myself and wondering whether I changed the password some evening after taking sleeping medication or the like and then forgot it. That would be extremely unlike me, though, and in light of this drive's history, the former explanation seems to fit better.

My current situation:

I have an entire Time Machine section on this drive, as well as a user-managed folder for storing large or unused files/docs, and I am confronted by the shuddering password query box each time I try to access it, whether during a normal desktop session or in recovery mode.

But when I tried booting my Mac from an alternate startup drive (holding the Option key at power-on) and selected this drive, it accepted my known password, ran through a boot cycle, and then ended up in recovery mode. If I had indeed forgotten the password for an encrypted Time Machine backup on an encrypted HD (with FileVault enabled for the account), I would truly be out of options and would have to give up or try to brute-force the password from a list of candidate strings. But can this be taken as a reason to think that there is instead some kind of problem with the disk or its firmware, and that my password is still valid?

Given this, how can I safely access the files on the LaCie? The successful boot into recovery mode was unorthodox and didn't seem to work completely: I went on to try Restore From Time Machine, and it let me view the list of backups by date, but when I selected one, nothing was visible in any of the folders… Of course, if the system was using that very HD as the startup disk, maybe it makes sense that it didn't work.

I need recommendations and some differential diagnostics from those with a higher level of expertise; I am already beyond my competency and am just a capable dilettante attempting viable solutions as I learn of them. Please forgive my rambling and self-indulgent prose record of the circumstances, but I wanted to be thorough.

Thank you very much for taking the time/energy to review, rebuke (I’m sure), and remand.

How to store a PostgreSQL database on an external hard drive and work from it?

I have installed PostgreSQL 13.1 on two computers, a laptop and desktop.

I created the database using a tablespace on a portable SSD drive, under F:\MyProjectDB\PG_13_20200720120350, so I expect all the database files to be in there somewhere.

I want to work on the database from the SSD drive, which works fine on the laptop that created the database, but when I go to my desktop I can't figure out how to attach (SQL Server-style) the existing database/tablespace and continue working on it.

Is there a way to work like this with PostgreSQL?
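For context, the only portable route I currently know is a dump on one machine and a restore on the other. A rough Python sketch of that workaround is below (placeholder database name and paths; it assumes the PostgreSQL 13 client tools are on PATH on both machines), but I would prefer something closer to SQL Server's attach:

# Sketch of the pg_dump / pg_restore workaround -- this is NOT an "attach";
# all names and paths are placeholders, adjust to your own setup.
import subprocess

DB_NAME = "myprojectdb"              # hypothetical database name
DUMP_FILE = "F:/myprojectdb.dump"    # dump written to the portable drive

# On the laptop: dump the database in PostgreSQL's custom format.
subprocess.run(["pg_dump", "-Fc", "-U", "postgres", "-f", DUMP_FILE, DB_NAME],
               check=True)

# On the desktop: create an empty database, then restore into it.
subprocess.run(["createdb", "-U", "postgres", DB_NAME], check=True)
subprocess.run(["pg_restore", "-U", "postgres", "-d", DB_NAME, DUMP_FILE],
               check=True)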

quantum computing – How would a hard fork for a new key-pair/address generation algorithm be implemented practically?

If an upcoming hard fork entails a switch to a new method for key-pair/address generation, how exactly would it be carried out? How would all the old addresses be changed to ones that are compatible with the new algorithm, and how exactly would the transfer of ownership be conducted?

For the blockchain to keep the same state after the fork, there would have to be a new address for each old (incompatible) address that holds the same amount of BTC and has the same owner.

One way I can think of is adding support for both the old and new key-pair/address generation methods temporarily, then prompting all owners of addresses generated with the old method to send their BTC to new addresses generated with the new method, and finally dropping support for the old method. With ~30 million unique BTC addresses as of today, and a block size that limits throughput to roughly 2-7 transactions per second, this transition would need between about 50 and 173 days to complete, during which the blockchain would be effectively incapacitated and miner fees would be higher than usual. I suppose it could also trap, in the old incompatible addresses, any BTC that owners fail to transfer to a new address in time.
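For what it's worth, the 50-173 day figure is just the address count divided by the throughput; a quick Python check of that arithmetic (both inputs are the rough estimates quoted above):

# Back-of-the-envelope estimate for the migration window quoted above.
ADDRESSES = 30_000_000          # rough count of unique BTC addresses
TPS_LOW, TPS_HIGH = 2, 7        # rough network throughput, transactions/second
SECONDS_PER_DAY = 86_400

days_best = ADDRESSES / (TPS_HIGH * SECONDS_PER_DAY)   # ~49.6 days
days_worst = ADDRESSES / (TPS_LOW * SECONDS_PER_DAY)   # ~173.6 days
print(f"Transition would take roughly {days_best:.0f} to {days_worst:.0f} days")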

This type of hard fork is very likely going to need to happen soon, as quantum computers advance. I want to know if the BTC community is preparing at all for the event.

hard drive – Encrypting Disk stuck on “Status: Checking”

I have a Mac mini with 2 disks, and I wanted to encrypt the second (non-boot) disk. I used Finder's right-click > Encrypt to start the encryption process. However, it seems to be stuck.

diskutil cs list outputs:

CoreStorage logical volume groups (1 found)
|
+-- Logical Volume Group 51...
    =========================================================
    Name:         Macintosh HD
    Status:       Offline
    Size:         0 B (0 B)
    Free Space:   -none-
    |
    +-< Physical Volume 8F...
        ----------------------------------------------------
        Index:    0
        Disk:     disk1s2
        Status:   Checking
        Size:     998999998464 B (999.0 GB)

and diskutil cs info 51... outputs:

Core Storage Properties:
   Role:                       Physical Volume (PV)
   UUID:                       8F...
   Parent LVG UUID:            51...
   Device Identifier:          disk1s2
   PV Size:                    998999998464 B
   PV Status:                  Checking

It has been stuck on “Checking” for quite a while, and Activity Monitor shows no processes using CPU or disk, leading me to believe the process has hung. How can I move forward and get my disk out of this situation?

Edit:
Checking the Console, I keep seeing the message debugPortal: kCoreStorageGetGroupSparseState invalid lvg uuid pop up every second.

Edit 2:
The disk has now disappeared from the Finder and Disk Utility; however, it still appears in diskutil and diskutil cs, though no diskutil cs commands work on it anymore: they just say the UUID is invalid.

photos.app – How to move some photos to an external hard drive

I want to move some of my older photos from the library to an external hard drive, as my disk is getting full. I do not want to export, delete, and re-import the photos, because I want to keep face/keyword information. I also do not want to move the full library to the external hard drive, as I like to have my more recent pictures stored locally.

Note that I still want to keep the pictures within the Photos library. I just want to move their physical location.

algorithms – $m \times n$ row-wise sorted matrix and a specific time complexity, very hard interview question

This is a very hard question, as noted on several sites found via Google, and it was first introduced as an Amazon interview question.

We have an $m \times n$ matrix in which all rows are sorted in ascending order and all elements are distinct. We want to find the $k$-th smallest element in this matrix.

Q) There is an $O(m(\log m + \log n))$ algorithm for doing this!

I have seen lots of posts, especially here, but as Yuval Filmus mentioned in the comments, there is a difference for large values of $k$.

We are familiar with the median-of-medians method, which throws away half of the elements using the median, but the very challenging question here is how this time complexity can be reached.
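For concreteness, here is a simple Python baseline: the standard min-heap merge over the $m$ row heads, which finds the $k$-th smallest in $O(m + k \log m)$ time. This is not the $O(m(\log m + \log n))$ algorithm asked about; it only makes the problem statement concrete.

# Baseline only: k-th smallest in a row-wise sorted matrix via a min-heap
# over the row heads. Runs in O(m + k log m), NOT the asked-for bound.
import heapq

def kth_smallest(matrix, k):
    m = len(matrix)
    # Heap entries: (value, row index, column index), seeded with each row's head.
    heap = [(matrix[i][0], i, 0) for i in range(m)]
    heapq.heapify(heap)
    for _ in range(k - 1):
        value, i, j = heapq.heappop(heap)
        if j + 1 < len(matrix[i]):
            heapq.heappush(heap, (matrix[i][j + 1], i, j + 1))
    return heap[0][0]

# Example: rows sorted ascending, all elements distinct.
matrix = [[1, 5, 9],
          [2, 6, 10],
          [3, 7, 11]]
print(kth_smallest(matrix, 4))  # -> 5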

hard drive – How to determine wasted space by large chunk size on macOS RAID array?

The chunk size of your RAID array does not determine how much space on disk a single file uses. Therefore no space is actually wasted due to having a larger chunk size than optimal.

The amount of space wasted is instead determined by the file system block size, which is independent of the RAID array chunk size. On macOS, you're typically looking at APFS, which uses 4096-byte blocks, or HFS+, which uses 512-byte sectors that are typically grouped into allocation blocks of 4096 bytes (unless the RAID volume is larger than 16 TB, in which case the allocation block is larger).

You can determine your allocation block size by running this command in the Terminal (change the device node to match your disk setup):

diskutil info /dev/disk2s1
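Once you know the allocation block size, the slack space for any given file is simply the round-up to the next whole block (ignoring niceties such as APFS inlining very small files in metadata). A quick Python sketch, with example file sizes:

# Slack ("wasted") space per file is set by the file system allocation block
# size, not the RAID chunk size: each file occupies a whole number of blocks.
import math

BLOCK_SIZE = 4096                      # typical APFS/HFS+ allocation block size
file_sizes = [100, 5_000, 1_000_000]   # example file sizes in bytes

for size in file_sizes:
    allocated = math.ceil(size / BLOCK_SIZE) * BLOCK_SIZE
    print(f"{size:>9} bytes -> {allocated:>9} bytes on disk "
          f"({allocated - size} bytes slack)")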

Unfortunately, a lot of myths and wrong information have circulated regarding RAID chunk sizes, as choosing the right size has been seen as a kind of dark art. It is essentially hard to choose the optimal chunk size from a long list of options without actually benchmarking with the real data and the operations performed on it.

However, in your case you actually have the type of setup you want. If you have many small files, you actually want a big chunk size on your RAID. If you have fewer, large files, you want a small chunk size on your RAID.

Unfortunately, some have heard the opposite advice. That comes from the fact that on a single disk you want the opposite: for storing a few large files you want big blocks, and for storing many smaller files you want small blocks. This is because with large files you want to minimize the number of block operations per second to optimize throughput, whereas for smaller files you want to optimize for latency by having smaller blocks and thus more operations per second.

However, on a RAID system with many disks, things are of course different. When dealing with large files, you want to distribute the workload evenly over many drives to optimize performance. This means relatively small chunks, so that you can get many drives working for you at once, each handling its own small chunk. On the other hand, when you are dealing with small files, you want most operations to be completed by a single drive, so you get the lowest possible latency. This means a large chunk size, to ensure that your data is contained in a single chunk that can be processed by a single disk.
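To make that trade-off concrete, here is a tiny Python sketch of how many member drives a single aligned read touches for a given chunk size (a hypothetical 4-drive RAID 0-style stripe set with no parity; the I/O sizes are just examples):

# Simplified striping model: a read of io_size bytes, aligned to the stripe,
# touches one drive per chunk it spans, capped at the number of drives.
import math

NUM_DRIVES = 4                              # hypothetical 4-drive stripe set

def drives_touched(io_size, chunk_size):
    return min(NUM_DRIVES, math.ceil(io_size / chunk_size))

for chunk in (16 * 1024, 1024 * 1024):      # 16 KiB vs 1 MiB chunks
    small = drives_touched(64 * 1024, chunk)          # 64 KiB read (small file)
    large = drives_touched(64 * 1024 * 1024, chunk)   # 64 MiB read (large file)
    print(f"chunk {chunk // 1024:>5} KiB: small read uses {small} drive(s), "
          f"large read uses {large} drive(s)")

With the big chunk, the small read stays on a single drive (lowest latency), while the large sequential read still spans all four drives.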