linux – Multi-user development environment on cloud

For context:

We’re planning to migrate to a cloud-VM-based development environment for our engineering team. The idea is to develop and build everything on these remote VMs, so that nothing needs to be installed on our personal MacBooks.

Here is the initial approach we’re planning:

  • The dev environment will consist of a pool of X cloud VMs shared among our engineering team.
  • These VMs can only be accessed from our VPN.
  • Every VM in the pool will have an NFS volume mounted containing every developer’s home directory, so devs can access their data if they log in to a different VM next time.
  • No sudo access for devs, and read-only access to other developers’ home/working directories.
  • Limited CPU and RAM quota for every developer.
  • Everything needed (Python, Go, Node, etc.) will be installed on these VMs.
  • We’re planning to use this VSCode extension for remote development.
  • The web app can be accessed in the browser through port forwarding.
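For the per-developer CPU/RAM quota point, one mechanism worth knowing about is a systemd user-slice drop-in. This is only a sketch of one option, not something the plan above specifies; the limits and file path are illustrative:

```ini
# /etc/systemd/system/user-.slice.d/50-dev-limits.conf
# Limits applied to every logged-in user's slice on the VM.
[Slice]
# At most two CPU cores' worth of time per user
CPUQuota=200%
# Hard per-user memory cap
MemoryMax=8G
# Guard against runaway fork bombs
TasksMax=512
```

After `systemctl daemon-reload`, the limits apply to new login sessions; `systemd-cgtop` shows per-slice usage.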

Are there any good resources (blogs, papers, etc.) or alternatives that would help me understand how established orgs are doing this? This seems like a solved problem already, so there must be good research available.

Any pointers related to this are highly appreciated.


linux – OpenSSH update prompt that I need to bypass

I am writing a script that installs and configures everything I need on a server automatically, with no user input. The problem is that openssh asks what I want to do with a modified config file. I tried force-confdef and confold, but those don’t seem to apply to the openssh config.

[picture of openssh config prompt]

So I guess the question is: how can I get it to always choose the default?
The marker is already on the right choice (the default); it only needs me to press Enter, but I want to bypass the need for human input.

This is what I thought would solve it (an apt.conf stanza with the two force options mentioned above):


Dpkg::Options {
    "--force-confdef";
    "--force-confold";
}
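For reference, the same dpkg options can also be passed per invocation, combined with the noninteractive debconf frontend so nothing stops to ask questions. A sketch of how the install script might call apt (the package operation shown is illustrative):

```shell
#!/bin/sh
# Run apt non-interactively: take the maintainer's default where one
# exists (--force-confdef), otherwise keep the locally modified
# conffile (--force-confold). DEBIAN_FRONTEND=noninteractive silences
# any debconf prompts.
export DEBIAN_FRONTEND=noninteractive
apt-get -y \
  -o Dpkg::Options::="--force-confdef" \
  -o Dpkg::Options::="--force-confold" \
  upgrade
```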

performance – Why is the execution time in DataGrip longer on MySQL 8 than MySQL 5.7 on a new Linux server?

I am migrating from a 4 year old setup to a brand new high performance server and am experiencing slower performance than on the old machine.

The old setup is an Ubuntu 16.04 server on bare metal (Intel i7, 64 GB RAM) running MySQL 5.7, under medium load with many services running.

The new setup is an AMD Ryzen 9 5950X with 128 GB RAM, no load, and only MySQL 8 running on Ubuntu 20.04. Both are bare-metal machines and I am the only one with access.

The database I am testing against holds a table with 100M rows. The new server holds an exact copy, imported from a mysqldump. I am running the exact same simple query, without any join, against that one table.

The old server returns the result in about 120 ms; the new one in about 200 ms.

10 rows retrieved starting from 1 in 192 ms (execution: 184 ms,
fetching: 8 ms)

I am using DataGrip for the query (connected via SSH). If I connect via an SSH terminal directly and run the query in the mysql console, the result is returned in 0.00s.
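To separate server-side execution time from client and network overhead, it helps to time the same query at each hop. A sketch of the idea (the database, table, and hostname are placeholders, not from the post):

```shell
# 1. On the server itself: pure execution time, no tunnel, no GUI.
time mysql -e "SELECT * FROM mydb.mytable LIMIT 10" > /dev/null

# 2. From the laptop, through an SSH tunnel like the one DataGrip uses:
ssh -N -L 3307:127.0.0.1:3306 newserver &
time mysql -h 127.0.0.1 -P 3307 \
  -e "SELECT * FROM mydb.mytable LIMIT 10" > /dev/null
```

If (1) stays fast and only (2) is slow, the extra time is in the tunnel or client rather than in MySQL 8’s execution.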

How is that even possible? I ran a few tests via mysqltuner but found nothing that might be of help.

Here is part of the my.cnf from the new server:

default_authentication_plugin= mysql_native_password
innodb_buffer_pool_size = 100G
innodb_buffer_pool_instances = 64
innodb_buffer_pool_chunk_size = 134217728
innodb_log_file_size = 12G    
collation_server        = utf8_unicode_ci
character_set_server    = utf8

This is mysqltuner output:

 >>  MySQLTuner 1.7.13 - Major Hayden <>
 >>  Bug reports, feature requests, and downloads at
 >>  Run with '--help' for additional options and output filtering

(--) Skipped version check for MySQLTuner script
Please enter your MySQL administrative login: root
Please enter your MySQL administrative password: (!!) Currently running unsupported MySQL version 8.0.25-0ubuntu0.20.04.1
(OK) Operating on 64-bit architecture
-------- Log file Recommendations ------------------------------------------------------------------
(--) Log file: /mnt/mysql/data/leo.err(0B)
(!!) Log file /mnt/mysql/data/leo.err doesn't exist
(!!) Log file /mnt/mysql/data/leo.err isn't readable.
-------- Storage Engine Statistics -----------------------------------------------------------------
(--) Data in InnoDB tables: 16.8G (Tables: 39)
(OK) Total fragmented tables: 0
-------- Analysis Performance Metrics --------------------------------------------------------------
(--) innodb_stats_on_metadata: OFF
(OK) No stat updates during querying INFORMATION_SCHEMA.
-------- Security Recommendations ------------------------------------------------------------------
(--) Skipped due to unsupported feature for MySQL 8
-------- CVE Security Recommendations --------------------------------------------------------------
-------- Performance Metrics -----------------------------------------------------------------------
(--) Up for: 1m 47s (419 q (3.916 qps), 45 conn, TX: 402K, RX: 35K)
(--) Reads / Writes: 100% / 0%
(--) Binary logging is enabled (GTID MODE: OFF)
(--) Physical Memory     : 125.8G
(--) Max MySQL memory    : 104.2G
(--) Other process memory: 136.7M
(--) Total buffers: 104.0G global + 1.1M per thread (151 max threads)
(--) P_S Max memory usage: 72B
(--) Galera GCache Max memory usage: 0B
(OK) Maximum reached memory usage: 104.0G (82.72% of installed RAM)
(OK) Maximum possible memory usage: 104.2G (82.85% of installed RAM)
(OK) Overall possible memory usage with other process is compatible with memory available
(OK) Slow queries: 0% (0/419)
(OK) Highest usage of available connections: 1% (2/151)
(!!) Aborted connections: 4.44%  (2/45)
(!!) name resolution is active : a reverse name resolution is made for each new connection and can reduce performance
(--) Query cache have been removed in MySQL 8
(OK) Sorts requiring temporary tables: 0% (0 temp sorts / 19 sorts)
(!!) Joins performed without indexes: 18
(OK) Temporary tables created on disk: 0% (0 on disk / 23 total)
(OK) Thread cache hit rate: 95% (2 created / 45 connections)
(OK) Table cache hit rate: 80% (327 open / 408 opened)
(OK) Open file limit used: 0% (2/10K)
(OK) Table locks acquired immediately: 100% (9 immediate / 9 locks)
(OK) Binlog cache memory access: 0% (0 Memory / 0 Total)
-------- Performance schema ------------------------------------------------------------------------
(--) Memory used by P_S: 72B
(--) Sys schema is installed.
-------- ThreadPool Metrics ------------------------------------------------------------------------
(--) ThreadPool stat is disabled.
-------- MyISAM Metrics ----------------------------------------------------------------------------
(!!) Key buffer used: 18.2% (3M used / 16M cache)
(!!) Cannot calculate MyISAM index size - re-run script as root user
-------- InnoDB Metrics ----------------------------------------------------------------------------
(--) InnoDB is enabled.
(--) InnoDB Thread Concurrency: 0
(OK) InnoDB File per table is activated
(OK) InnoDB buffer pool / data size: 104.0G/16.8G
(OK) Ratio InnoDB log file size / InnoDB Buffer pool size: 12.0G * 2/104.0G should be equal 25%
(OK) InnoDB buffer pool instances: 64
(--) Number of InnoDB Buffer Pool Chunk : 832 for 64 Buffer Pool Instance(s)
(OK) Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances
(!!) InnoDB Read buffer efficiency: 13.96% (5858 hits/ 41974 total)
(OK) InnoDB Write log efficiency: 98.44% (630 hits/ 640 total)
(OK) InnoDB log waits: 0.00% (0 waits / 10 writes)
-------- AriaDB Metrics ----------------------------------------------------------------------------
(--) AriaDB is disabled.
-------- TokuDB Metrics ----------------------------------------------------------------------------
(--) TokuDB is disabled.
-------- XtraDB Metrics ----------------------------------------------------------------------------
(--) XtraDB is disabled.
-------- Galera Metrics ----------------------------------------------------------------------------
(--) Galera is disabled.
-------- Replication Metrics -----------------------------------------------------------------------
(--) Galera Synchronous replication: NO
(--) No replication slave(s) for this server.
(--) Binlog format: ROW
(--) XA support enabled: ON
(--) Semi synchronous replication Master: Not Activated
(--) Semi synchronous replication Slave: Not Activated
(--) This is a standalone server
-------- Recommendations ---------------------------------------------------------------------------
General recommendations:
    MySQL was started within the last 24 hours - recommendations may be inaccurate
    Reduce or eliminate unclosed connections and network issues
    Configure your accounts with ip or subnets only, then update your configuration with skip-name-resolve=1
    Adjust your join queries to always utilize indexes
Variables to adjust:
    join_buffer_size (> 256.0K, or always use indexes with JOINs)

What is a recommended approach to narrow down the problem?

linux – How to create a user specifically for opening VeraCrypt

Due to the recent sudo vulnerabilities I want to remove the sudo package from my machine and use root instead (only for related tasks like updates, not all the time). But launching VeraCrypt as root is not a wise choice, so I want to create a user (or maybe use one of the existing users) just for running VeraCrypt. If I remove the sudo package, it also tries to remove the veracrypt package. Is it still possible to open VeraCrypt later (after removing sudo and veracrypt) via a user specified in /etc/sudoers.d?
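For reference, a sudoers drop-in of the kind the question alludes to might look like this (the username and binary path are illustrative; edit with `visudo -f` so a syntax error can’t lock you out):

```
# /etc/sudoers.d/veracrypt
# Allow 'vcuser' to run only veracrypt as root, nothing else.
vcuser ALL=(root) NOPASSWD: /usr/bin/veracrypt
```

Note that files in /etc/sudoers.d are only consulted by sudo itself, so such an entry stops working once the sudo package is removed.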

How to Find Files in Linux

There’s nothing more frustrating than knowing that a file exists on your system but not knowing where it is. It’s like losing your car keys or misplacing your phone.

Sometimes you’ll have contextual clues – for example, system configuration files are usually in /etc. But even there, you’ve got 253 directories and subdirectories and over 1,730 files.

Let’s look at some tools to make finding files easier.

I’ve set up a Debian 10 VPS and done the following:

  • created two users, named frank and mary
  • in /home/mary, I’ve created a directory called diary, placed some files in it (entry1.txt, entry2.txt, etc.), and made it mode 700 so only mary (and root) could see files in that directory.
  • in /home/frank, I created files called “plan1.txt” and “plan2.txt”, both of which are mode 600, so only frank (and root) can see them.

mlocate (“merging locate”) is a package that builds a global database of files that can be queried to find files. locate has a long history in Unix (1982); slocate was an improvement on locate, and mlocate is a further improvement that aims to run faster and not blow away the filesystem cache when it updates.

mlocate also only shows files the querying user can access, not all files. All these programs run as unprivileged users, so you needn’t worry that user A is going to see user B’s file called /home/userb/my-secret-diary.txt.

To install on Debian:

apt-get install mlocate

When installed, the database is empty, so trying to use the locate command produces an error:

root@vnc:~# locate hosts
locate: can not stat () `/var/lib/mlocate/mlocate.db': No such file or directory

Let’s get the database up to date:

# time updatedb

real 0m0.593s
user 0m0.018s
sys 0m0.112s

On an AMD EPYC 7601 with a 25 GB RAID SSD disk that is 95% free, updating the database takes less than a second. For comparison, on 6 TB of spinning-disk RAID-1 on less-than-enterprise-grade storage with an i3 on my home fileserver, updating the database can take several minutes.

Let’s see what the mlocate database looks like:

root@vnc:~# locate -S
Database /var/lib/mlocate/mlocate.db:
3,260 directories
33,859 files
1,433,462 bytes in file names
640,990 bytes used to store database

Now that we’re up to date, I can query. As root:

root@vnc:~# locate hosts

If I wanted to do a case-insensitive locate, I could use locate -i.

Because I’m root, I can see files in mary’s home directory:

# locate entry1.txt

But if I’m frank, I cannot:

# su - frank
$ locate entry1.txt

updatedb runs nightly via /etc/cron.daily/mlocate.

One weakness of locate is that if you create a file during the day, it won’t be in the database until the overnight run of updatedb (or until you run updatedb manually). For real-time queries, we can use find.

find is a very old command that goes back to early Unix. It does a real-time search of filesystems. Unlike locate, you can search by criteria other than just a name.

The general format for find is

find [starting-point...] [expression]
Let’s say I wanted to find /etc/passwd. I would type:

find /etc -name passwd -print

This means

  • “go look in the /etc directory and all its subdirectories”
  • “match files named ‘passwd’”
  • “print out each file you find”

Here are the results:

# find /etc -name passwd -print

-print is optional; if you leave it off, you’ll get the same result.

Of course, I may not know the directory, so I could run this against the root filesystem:

# find / -name passwd -print

-iname is the case-insensitive parallel to -name.

With find, I can also find based on other criteria. Some examples to whet your appetite:

# dd if=/dev/zero of=/root/bigfile bs=1048576 count=512
512+0 records in
512+0 records out
536870912 bytes (537 MB, 512 MiB) copied, 0.490124 s, 1.1 GB/s
# find / -size +100M -print
find: ‘/proc/2381/task/2381/fd/6’: No such file or directory
find: ‘/proc/2381/task/2381/fdinfo/6’: No such file or directory
find: ‘/proc/2381/fd/5’: No such file or directory
find: ‘/proc/2381/fdinfo/5’: No such file or directory

Here I’ve created a 512 MB file and then searched for files bigger than 100 MB (“-size +100M”). The errors occur because I asked find to search from the root, which includes /proc, and during find’s run some processes that existed when it started had finished, so their files no longer existed.

I can also find files based on date:

# mkdir /backup
# touch -t 201008201111 /backup/some_old_backup.tar.gz
# touch /backup/current_backup.tar.gz
# ll /backup
total 8
drwxr-xr-x 2 root root 4096 May 24 18:42 .
drwxr-xr-x 19 root root 4096 May 24 18:40 ..
-rw-r--r-- 1 root root 0 May 24 18:42 current_backup.tar.gz
-rw-r--r-- 1 root root 0 Aug 20 2010 some_old_backup.tar.gz
# find /backup -mtime +30 -print

That find expression (“-mtime +30”) means “older than 30 days”.

There is much more you can do with find – for example, printing is not the only action. There is also a galaxy of tests you can search by: owner, group owner, newer or older than other files, type of file, etc. Consult the find man page to learn all about find.
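As a quick illustration of combining criteria, the type test and a newer-than test can be chained (the directory and file names here are made up for the demo):

```shell
# Create a scratch directory with one backdated and one fresh file.
mkdir -p /tmp/find-demo
touch -t 202001010000 /tmp/find-demo/old.txt   # mtime set to Jan 2020
touch /tmp/find-demo/new.txt                   # mtime is now

# Regular files modified more recently than old.txt:
find /tmp/find-demo -type f -newer /tmp/find-demo/old.txt
# → /tmp/find-demo/new.txt
```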


I’m Andrew, techno polymath and long-time LowEndTalk community Moderator. My technical interests include all things Unix, perl, python, shell scripting, and relational database systems. I enjoy writing technical articles here on LowEndBox to help people get more out of their VPSes.


Have you used or do you currently use Linux?

I just put Manjaro KDE on my laptop. I am loving it so far and will probably also put it on my desktop. :]

partitioning – add linux swap partition without harming existing data on the hard drive

I installed Ubuntu 20.04 from a bootable USB, following the instructions. After installation I noticed I only have a 2 GB swapfile, mounted under the root directory (/). I want a separate Linux swap partition on my hard drive instead. What is the best way to repartition the drive so that I don’t lose any information on my existing drive? 64 GB of swap is optimal for me, since my RAM is also 64 GB.

Let me know if you need any more information.
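For comparison, a swap file can be grown in place without touching the partition table at all, which avoids the data-loss risk the question is worried about. A minimal sketch (the path and size are illustrative, and the commands need root):

```shell
# Create a 64 GiB swap file, lock down its permissions,
# format it as swap, and enable it.
fallocate -l 64G /swapfile2
chmod 600 /swapfile2
mkswap /swapfile2
swapon /swapfile2

# To make it permanent, add a line like this to /etc/fstab:
# /swapfile2 none swap sw 0 0
```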


encryption – Linux LUKS encrypted root: fstrim always trims all empty space

I have a Manjaro system with a LUKS encrypted root filesystem on an NVME SSD. It is set up to decrypt / on boot via the kernel.

$ cat /proc/cmdline
initrd=amd-ucode.img initrd=initramfs-5.11-x86_64.img root=UUID=</dev/mapper/cryptroot UUID> rw cryptdevice=UUID=<SSD partition UUID>:cryptroot:allow-discards

$ cat /etc/fstab
# /dev/mapper/cryptroot
UUID=</dev/mapper/cryptroot UUID>       /               xfs             rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota        0 0

# /dev/nvme0n1p1
UUID=</boot UUID>          /boot           vfat            rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro   0 0

This all works, but I recently realised trim wasn’t enabled, so I added :allow-discards to the kernel command line, as you can see above. Now when I run fstrim -v /, it trims all of the free space every time, even when there hasn’t been much filesystem activity:

$ sudo fstrim -v /
/: 540.8 GiB (580689911808 bytes) trimmed
$ sudo fstrim -v /
/: 540.8 GiB (580660371456 bytes) trimmed

Is this expected behaviour? My other (unencrypted) SSDs typically trim much less on an fstrim. dmsetup table shows the allow_discards option, so trim should be enabled.

john the ripper – Retrieving partially forgotten Linux password: Beginning and end known

I forgot my Linux password. I have access to the shadow file (Fedora 33), and I believe it should be possible to recover the password with John the Ripper: I remember the first 4 characters and the last 5 characters, I just can’t remember what connects them (it must be 3 or 4 characters in the middle).

Can I use John for that? What would be the command? Thanks!
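John’s mask mode (available in the jumbo build) is designed for exactly this kind of partially known password. A hedged sketch, where the remembered fragments `abcd`/`vwxyz` and the hash file name are placeholders, not the real values:

```shell
# ?a matches any printable ASCII character. Try a 3-character filler,
# then a 4-character filler, between the known prefix and suffix.
john --mask='abcd?a?a?avwxyz' shadow-entry.txt
john --mask='abcd?a?a?a?avwxyz' shadow-entry.txt
```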

ubuntu – how to download a file from linux

I have the file bkup-gcp-htdocs.tar.gz in the backups directory

root@api-vm-vm:~# cd backups
root@api-vm-vm:~/backups# ls

In the SSH window I use the Download File option, which asks me for a path. I give it backups/bkup-gcp-htdocs.tar.gz and it tells me the file does not exist…

What path should I give it? Or what other method can I use to download it?