partitioning – Ubuntu 20.04 headless disk allocation, only 10GB for /home/user

I’m using a pre-installed and pre-configured Ubuntu 20.04 image from my provider. The server comes with two 1 TB disks in RAID 1. I noticed that once I set up a new user, the allocated space was only 10 GB. It seems like most of the RAID space is not allocated. How can I allocate the rest of the 1 TB and make it the default space for new users? Also, do I need to delete and recreate the already-created user so the new free space is properly available to it?
Thank you!

Model: ATA HGST HUS722T1TAL (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name     Flags
 1      1049kB  3146kB  2097kB                  primary  bios_grub
 2      3146kB  30.0GB  30.0GB  ext3            primary  raid
 3      30.0GB  40.0GB  10.0GB  linux-swap(v1)  primary  swap
 4      40.0GB  1000GB  960GB                   primary  raid


Model: ATA HGST HUS722T1TAL (scsi)
Disk /dev/sdb: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name     Flags
 1      1049kB  3146kB  2097kB                  primary  bios_grub
 2      3146kB  30.0GB  30.0GB  ext3            primary  raid
 3      30.0GB  40.0GB  10.0GB  linux-swap(v1)  primary  swap
 4      40.0GB  1000GB  960GB                   primary  raid


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/vg00-var: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system  Flags
 1      0.00B  10.7GB  10.7GB  ext4


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/vg00-usr: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system  Flags
 1      0.00B  10.7GB  10.7GB  ext4


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/vg00-home: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system  Flags
 1      0.00B  10.7GB  10.7GB  ext4


Error: /dev/md4: unrecognised disk label
Model: Linux Software RAID Array (md)
Disk /dev/md4: 960GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

Model: Linux Software RAID Array (md)
Disk /dev/md2: 30.0GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system  Flags
 1      0.00B  30.0GB  30.0GB  ext3

NAME            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0             7:0    0  99.2M  1 loop  /snap/core/10958
loop1             7:1    0  68.7M  1 loop  /snap/lxd/19823
loop2             7:2    0  55.5M  1 loop  /snap/core18/1997
loop3             7:3    0  89.1M  1 loop  /snap/core/8268
loop4             7:4    0  54.9M  1 loop  /snap/lxd/12631
sda               8:0    0 931.5G  0 disk
├─sda1            8:1    0     2M  0 part
├─sda2            8:2    0    28G  0 part
│ └─md2           9:2    0    28G  0 raid1 /
├─sda3            8:3    0   9.3G  0 part  [SWAP]
└─sda4            8:4    0 894.3G  0 part
  └─md4           9:4    0 894.3G  0 raid1
    ├─vg00-usr  253:0    0    10G  0 lvm   /usr
    ├─vg00-var  253:1    0    10G  0 lvm   /var
    └─vg00-home 253:2    0    10G  0 lvm   /home
sdb               8:16   0 931.5G  0 disk
├─sdb1            8:17   0     2M  0 part
├─sdb2            8:18   0    28G  0 part
│ └─md2           9:2    0    28G  0 raid1 /
├─sdb3            8:19   0   9.3G  0 part  [SWAP]
└─sdb4            8:20   0 894.3G  0 part
  └─md4           9:4    0 894.3G  0 raid1
    ├─vg00-usr  253:0    0    10G  0 lvm   /usr
    ├─vg00-var  253:1    0    10G  0 lvm   /var
    └─vg00-home 253:2    0    10G  0 lvm   /home
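
For anyone with the same layout: the lsblk output above shows that vg00 has only about 30 GB of its 894 GB allocated across three 10 GB logical volumes, so the likely fix is to grow the home LV rather than repartition. A minimal sketch, assuming the vg00/home name and ext4 filesystem shown above (check the free extents first, and back up before resizing):

# Confirm the volume group really has unallocated extents (VFree should be large).
sudo vgs vg00
# Grow /home by e.g. 500 GB; -r resizes the ext4 filesystem in the same step.
sudo lvextend -r -L +500G /dev/vg00/home
# Verify the new size.
df -h /home

If this is right, existing users keep their home directories; the filesystem under /home simply grows, so there should be no need to delete and recreate the user.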

How does PCIe lane allocation work?

Say I have a CPU and motherboard that support 16 lanes, but I try to plug in a 10G network card that wants x8 and two GPUs that each want x16. That would total 40 lanes, but I only have 16. How exactly would the CPU distribute the lanes? Would it prioritize certain devices? Do the devices tell the CPU their minimum number of lanes during their startup negotiation? Could someone give an in-depth breakdown of how exactly that negotiation goes down and what happens when more lanes are needed than are available?
Thanks
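
While waiting for a full breakdown, you can at least observe the outcome of link negotiation on a Linux box: lspci reports both the width a device is capable of (LnkCap) and the width that was actually negotiated (LnkSta). A quick sketch (the output lines below are illustrative, not from a real machine):

sudo lspci -vv | grep -E 'LnkCap:|LnkSta:'
#   LnkCap: Port #0, Speed 8GT/s, Width x16 ...
#   LnkSta: Speed 8GT/s, Width x8 ...

A device whose LnkSta width is smaller than its LnkCap width has been trained down to a narrower link, which is exactly what happens when slots share a limited lane budget.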

data structures – Memory allocation of priority queues

An associative dictionary is any data structure which maps keys to values. Binary search trees, hash tables, and B-trees are all examples of associative dictionaries. It is incorrect to say that an associative dictionary must be a binary search tree.

Similarly, a priority queue is any data structure which allows insertion in any order and removal in priority order. A binary heap is an example of a priority queue, but it is incorrect to say that a priority queue must be a binary heap.

There are lots of other data structures which implement priority queues, including n-ary heaps (for n greater than 2), binomial heaps, Fibonacci heaps, Brodal queues, van Emde Boas trees, and many more besides.

And that’s leaving aside the issue that a binary heap doesn’t strictly need to be stored in contiguous memory. Any storage scheme can be used as long as it can be indexed like an array and supports $O(1)$ access time.
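
To make the “indexed like an array” point concrete: a binary heap only needs the usual index arithmetic to find parents and children, with the root at index $0$:
$$
\mathrm{parent}(i) = \lfloor (i-1)/2 \rfloor, \qquad \mathrm{left}(i) = 2i+1, \qquad \mathrm{right}(i) = 2i+2.
$$
Any backing store that evaluates these lookups in $O(1)$ (a plain array, a deque, even an integer-keyed hash table) will do; contiguous memory is an implementation convenience, not part of the definition.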

microsoft excel – Use google sheets for resource allocation

I am trying to figure out how to use Google Sheets to allocate resources to tasks automatically, based on multiple variables (like qualifications and preferences).
Let’s say Task 1 takes place in week 31, mornings only, and requires skills A and B. Task 2 takes place in week 32, afternoons only, and requires skills B and C.
Then I have Resource 1, who is available only in the mornings and in selected weeks, and has skills A, B, and C. Resource 2, meanwhile, is available the whole day and has only skills B and C.
Now I’d need Resource 1 to be automatically allocated to Task 1 and Resource 2 to Task 2. And of course I’d need to avoid a situation where one resource gets allocated to two tasks in the same week.

Is this something that is even possible? I know I could use some advanced project management solution, but the team is already using Google Sheets and this is a once-a-year scenario, so acquiring an entirely separate solution is not really reasonable. If this were somehow not possible in Google Sheets but only in Excel, that would still be very useful.

Thank you for any help!

c++ – How does boost sort handle additional memory allocation

According to the documentation of boost::sort (https://www.boost.org/doc/libs/1_75_0/libs/sort/doc/html/index.html), all algorithms use “additional memory”.

I couldn’t find any information in the documentation on how this memory is allocated.

Does anyone know whether it is heap-allocated during each sort operation, whether it is stack space, or whether boost::sort uses some static internal memory buffer (like std::stable_sort)?

If heap-allocated, is it possible to pass a pre-allocated buffer to the sort operations in order to gain control over memory allocations?
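
When the documentation is silent, the headers themselves are the ground truth. A quick way to look, assuming the Boost headers are installed under /usr/include/boost (adjust the path for your install):

# Search the sort implementation for allocation sites.
grep -rnE 'std::vector|new\b|malloc' /usr/include/boost/sort/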

bash – convert-im6.q16: memory allocation failed for converting from gif to bmp

I am trying to convert a file from .gif to .bmp using this command:

convert -coalesce pixels8.gif out.bmp

While doing this, I get this error:

convert-im6.q16: memory allocation failed `pixels8.gif' @ error/gif.c/ReadGIFImage/1303.
convert-im6.q16: no images defined `out.bmp' @ error/convert.c/ConvertImageCommand/3258.

I am not sure what the problem is. I took this command from https://github.com/glouw/paperview. My system has 8 GB of RAM.

pixels8.gif
Type: ‘image/gif’
Size: 4.6 MB (4,642,849 bytes)
Resolution: 1920×1080
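
A likely culprit is ImageMagick’s resource policy rather than actual RAM exhaustion: -coalesce expands every frame of the GIF to the full 1920×1080 canvas, which can blow past the default per-process limits. A sketch of how to check and raise them (the limit values are arbitrary examples):

# Show the limits ImageMagick is currently enforcing.
identify -list resource
# Retry with higher limits for this invocation only.
convert -limit memory 1GiB -limit map 2GiB -coalesce pixels8.gif out.bmp
# If it still fails, a hard cap may be set in /etc/ImageMagick-6/policy.xml.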

Gamma Correction and Shadow Detail bit allocation

I’m having slight difficulty understanding how gamma correction increases detail in the shadows (where our eyes are more sensitive). Once the bits have been reallocated to the shadows by applying an inverse gamma (gamma correction) in the camera, wouldn’t all that detail just be lost when the monitor applies a gamma to counter the inverse gamma, bringing the image luminance back to a linear function? Or are the code values saved after gamma correction, and only the brightness brought back down?

I’ll use an example I took from the video “Diving into dynamic range” from Filmmaker IQ on youtube.

From what I understood, in his example he uses an 8-stop camera (the triangles represent the stops) with an 8-bit depth. (I’m not exactly sure about this; I got a bit confused here. Please correct me if I’m wrong.)

[Screenshots from the video: the camera’s OETF gamma curve and the resulting allocation of code values across the stops]

Basically, once the gamma of the screen is applied to the above OETF gamma curve and the response goes back to the linear one, why wouldn’t we lose all the detail in the shadows again?

Here’s the link to the video just in case: https://www.youtube.com/watch?v=2sshGdMgJxQ
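
To make the asker’s second guess concrete (yes, the code values are stored after gamma encoding, so quantization happens in the encoded domain): assuming 8-bit codes and $\gamma = 2.2$, a deep shadow at linear luminance $L = 0.01$ encodes as
$$
V = L^{1/2.2} \approx 0.123 \quad\Rightarrow\quad \text{code} \approx 255 \times 0.123 \approx 31,
$$
whereas linear encoding would assign it code $\approx 255 \times 0.01 \approx 3$. Roughly ten times as many code values sit below that luminance after gamma encoding. The display’s gamma maps the stored codes back to linear light, but the quantization already happened on $V$, so the extra shadow codes are preserved.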

Explain the proof of the allocation problem

I don’t like this proof so much. Here is a different one.

We can assume without loss of generality that $A_1 \leq \cdots \leq A_N$. Consider an optimal solution $\{i_1, \ldots, i_\ell\}$, where $i_1 < \cdots < i_\ell$. In particular,
$$
A_{i_1} + \cdots + A_{i_\ell} \leq B.
$$

Intuitively, it is clear that $A_1 + \cdots + A_\ell \leq B$ (we will prove this formally in a moment), and so the greedy algorithm will also choose at least $\ell$ houses.

Now let us prove that $A_1 + \cdots + A_\ell \leq A_{i_1} + \cdots + A_{i_\ell}$.

I claim that $i_j \geq j$. Indeed, since the indices are strictly increasing integers and $i_1 \geq 1$,
$$
i_j \geq 1 + i_{j-1} \geq 2 + i_{j-2} \geq \cdots \geq (j-1) + i_1 \geq j.
$$

This implies that $A_{i_j} \geq A_j$, and so $A_{i_1} + \cdots + A_{i_\ell} \geq A_1 + \cdots + A_\ell$.
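
A concrete instance, for intuition: with $A = (2, 3, 5, 9)$ and $B = 10$, the greedy algorithm picks the three cheapest houses ($2 + 3 + 5 = 10 \leq B$), and no solution with four houses exists since $2 + 3 + 5 + 9 > B$, so greedy is optimal here.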

filesystems – Linked allocation in operating systems: why not use a doubly linked list?

Singly and doubly linked lists have the same (terrible) linear access time. Using doubly linked lists would not help performance; the only benefit you’d get is that reading a file backwards becomes more efficient.

No modern file system uses linked allocation anymore. Most use ranges of consecutive blocks, so-called extents. Extents are usually more space-efficient and support random access; neither singly nor doubly linked lists support random access. In practice these days, the metadata for a file (with all its extents) is read in at once, which is fast. Reading all the nodes of a linked list would mean reading several blocks one after the other, which is obviously slow.
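
As a concrete illustration of extents, you can ask a Linux filesystem how a file is mapped; filefrag (from e2fsprogs) prints one line per extent, so even a large file typically shows only a handful of entries:

# Create a 64 MB test file, then list the extents backing it.
dd if=/dev/zero of=testfile bs=1M count=64
filefrag -v testfile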