Python – Pymongo: Query data from certain WiredTiger (.wt) files in MongoDB

I am new to PyMongo. I have some WiredTiger (.wt) files in my MongoDB data directory, like the ones below, and I would like to know what data is stored in each of them. All of these files belong to a collection.

Questions: How can I query the data from these files? How do I know what data is stored in each of them?

collection-82-151410804740075054.wt

index-83-151410804740075054.wt

index-84-151410804740075054.wt

index-85-151410804740075054.wt
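You cannot query a .wt file directly: it is WiredTiger's internal storage, one table per collection or index, and only mongod knows how to interpret it. Instead, connect to the running server with PyMongo and ask it which collection or index each file backs: the collStats command reports the backing table in its wiredTiger.uri field. A minimal sketch (the connection string and database name in the usage note are assumptions to adjust):

```python
def wt_filename(uri: str) -> str:
    """Turn a WiredTiger URI such as
    'statistics:table:collection-82-151410804740075054'
    into the on-disk file name 'collection-82-151410804740075054.wt'."""
    return uri.rsplit(":", 1)[-1] + ".wt"

def list_wt_files(db):
    """Yield (name, wt_file) pairs for every collection and index in `db`,
    where `db` is a pymongo Database object."""
    for coll_name in db.list_collection_names():
        # indexDetails=True asks collStats to include per-index statistics
        stats = db.command("collStats", coll_name, indexDetails=True)
        yield coll_name, wt_filename(stats["wiredTiger"]["uri"])
        for index_name, details in stats.get("indexDetails", {}).items():
            yield f"{coll_name} (index: {index_name})", wt_filename(details["uri"])
```

For example, with `client = MongoClient("mongodb://localhost:27017")`, iterating `list_wt_files(client["mydb"])` pairs each collection with its collection-*.wt file and each index with its index-*.wt file. The index files hold index keys, not documents, so the documents themselves can only be queried through the collection.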

Save your files

Hello,

How do you handle backing up your files in difficult times, when no hosting is permanent?
Can you recommend services you have experience with (Mega? Google? Backblaze? Something else)?
We have more than 50 TB of data spread across multiple hosts, but we want a cold backup.

So make some suggestions, guys :) :)

Domain Name System – Unbound does not seem to read certificate files for DNS-over-TLS, receives "permission denied"

I am trying to set up DNS over TLS (DoT) with an unbound resolver, i.e. I am trying to encrypt the connection between the client and unbound. I am NOT trying to encrypt the unbound-to-upstream connection that many guides on the Internet cover.

I have the following in the configuration file, as explained in the man page and also described here:

 server:
   interface: 0.0.0.0@853

   tls-port: 853
   tls-service-key: "/etc/letsencryp/live/DOMAIN/privkey.pem"
   tls-service-pem: "/etc/letsencryp/live/DOMAIN/fullchain.pem"

However, when I try to restart unbound, permission to read the certificate files is denied:

package-helper(778): /var/lib/unbound/root.key has content
package-helper(778): success: the anchor is ok
unbound(813): (1586107523) unbound(813:0) error: error for cert file: /etc/letsencryp/live/DOMAIN/fullchain.pem
unbound(813): (1586107523) unbound(813:0) error: error in SSL_CTX use_certificate_chain_file crypto error:0200100D:system library:fopen:Permission denied
unbound(813): (1586107523) unbound(813:0) error: and additionally crypto error:20074002:BIO routines:file_ctrl:system lib
unbound(813): (1586107523) unbound(813:0) error: and additionally crypto error:140DC002:SSL routines:use_certificate_chain_file:system lib
unbound(813): (1586107523) unbound(813:0) fatal error: could not set up listen SSL_CTX
systemd(1): unbound.service: Main process exited, code=exited, status=1/FAILURE

I tried moving the files out of this directory and experimented with setting either root or unbound as the owner. The only way I could get it to work was to place the files directly in the /etc/unbound/ directory. A symlink in the same place, pointing to the files managed by Let's Encrypt, did not work either. This is not ideal, as I have to copy the certificate files out of the letsencrypt directory every time a certificate is renewed and/or restart the DNS resolver unnecessarily.

I have thoroughly checked that no chroot is configured in the configuration files or default settings, or compiled into the binary. In fact, it is explicitly disabled by default in Debian (bug report).

How can unbound fail to read files that are right there, owned by unbound:unbound, with permissions set to readable?

I am using unbound version 1.9.0-2+deb10u1 on Debian Buster (10), if it matters.
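One classic cause of exactly this symptom is a missing execute (search) bit on a parent directory rather than on the file itself: certbot's live/ and archive/ directories are often 0700 root:root (depending on the certbot version), and if any component of the path denies search access to the unbound user, fopen fails with "Permission denied" even though the .pem file is readable. A small diagnostic sketch that walks a path and reports each component's mode and owner, similar to `namei -l`:

```python
import os
import stat
from pathlib import Path

def path_modes(path: str):
    """Yield (component, octal mode or error, uid, gid) for every
    prefix of `path`, from the root down to the file itself."""
    p = Path(path)
    for prefix in list(p.parents)[::-1] + [p]:
        try:
            st = os.stat(prefix)
        except OSError as exc:
            # e.g. permission denied on a parent directory
            yield str(prefix), f"<{exc.strerror}>", None, None
            continue
        yield str(prefix), oct(stat.S_IMODE(st.st_mode)), st.st_uid, st.st_gid
```

Running path_modes on the fullchain.pem path as the unbound user shows the first component it cannot enter. Common fixes are granting the unbound user search access along the path (for example via a dedicated group on the letsencrypt directories), or copying the renewed files into /etc/unbound from a certbot --deploy-hook so ownership stays under your control.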

Python – Check header files for namespace usage

To ensure that there are no name conflicts in our project (C++), I was asked to write a Python script that checks all of our header files for occurrences of a using namespace ... inside the file. Each occurrence found is appended to a list and then written to a log file. It's a fairly simple script, but I think there could be some tweaks. The script runs whenever someone commits to the repository.

"""
Checks all the header files in our project to ensure there
aren't occurrences of the namespace string.

AUTHOR: Ben Antonellis
DATE: April 4th, 2020
"""

import os

namespace: str = "using namespace"
working_directory: str = os.path.dirname(os.path.realpath(__file__))
occurrences: list = []

for file in os.listdir(working_directory):
    formatted_file = f"{working_directory}/{file}"
    with open(formatted_file, "r") as source_file:
        for line_number, line in enumerate(source_file):
            if namespace in line and file[-3:] != ".py":
                occurrences.append(f"NAMESPACE FOUND: LINE ({line_number + 1}) IN FILE {formatted_file}")

with open("logs/log.txt", "w") as log_file:
    for line in occurrences:
        log_file.write(line + "\n")
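One structural tweak worth considering: os.listdir only scans the script's own directory, while the description says all header files in the project. A sketch of a recursive variant using os.walk, which also filters on header extensions (the extension list is an assumption, adjust it to your project) instead of merely excluding .py files:

```python
import os

NAMESPACE = "using namespace"
HEADER_EXTENSIONS = (".h", ".hpp", ".hxx")  # assumed; adjust to your project

def find_namespace_usage(root: str) -> list:
    """Return a log line for every 'using namespace' found in
    header files anywhere under `root`."""
    occurrences = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(HEADER_EXTENSIONS):
                continue  # skip non-header files instead of opening them
            path = os.path.join(dirpath, name)
            with open(path, "r", errors="replace") as source_file:
                for line_number, line in enumerate(source_file, start=1):
                    if NAMESPACE in line:
                        occurrences.append(
                            f"NAMESPACE FOUND: LINE ({line_number}) IN FILE {path}"
                        )
    return occurrences
```

Returning a list from a function (rather than writing the log as a side effect at module level) also makes the check easy to unit-test in the pre-commit hook.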

bash – The hard drive is full, but I cannot find any large files

I ran a bash script that created a 14 GB file in the /tmp directory. I deleted the file, but the disk is still full and I cannot find any directory or file that large.

My output for df -h


Filesystem      Size  Used Avail Use% Mounted on
udev            474M     0  474M   0% /dev
tmpfs            99M   11M   88M  11% /run
/dev/vda1        25G   25G     0 100% /
tmpfs           491M     0  491M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           491M     0  491M   0% /sys/fs/cgroup
/dev/vda15      105M  3.9M  101M   4% /boot/efi
/dev/loop0       90M   90M     0 100% /snap/core/7917
/dev/loop1       55M   55M     0 100% /snap/lxd/12211
/dev/loop2       94M   94M     0 100% /snap/core/8935
/dev/loop3       68M   68M     0 100% /snap/lxd/14194
tmpfs            99M     0   99M   0% /run/user/0
/dev/loop4       55M   55M     0 100% /snap/core18/1705
/dev/loop5       49M   49M     0 100% /snap/gtk-common-themes/1474
/dev/loop6      153M  153M     0 100% /snap/chromium/1071
tmpfs            99M     0   99M   0% /run/user/1000

My output for du -sh in the directory /

du: cannot access './proc/19935/task/19935/fd/4': No such file or directory
du: cannot access './proc/19935/task/19935/fdinfo/4': No such file or directory
du: cannot access './proc/19935/fd/3': No such file or directory
du: cannot access './proc/19935/fdinfo/3': No such file or directory
4.7G    .

I can't install ncdu or any other tools because the disk is full. And du -sh only accounts for 4.7 GB, while df reports all 25 GB used. Where is the rest of the space?
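The gap between df (25 GB used) and du (4.7 GB visible) is the classic signature of a deleted file that some process still holds open: on Linux the blocks are only freed when the last file descriptor closes, so du can no longer see the file but df still counts it. Restarting the offending process frees the space; `lsof | grep deleted` finds the culprit if you can run it. As an installation-free illustration, a small sketch that scans /proc for open-but-deleted files (run as root to see all processes):

```python
import glob
import os

def deleted_open_files(min_size: int = 0):
    """Yield (pid, target, size) for every file that is deleted
    but still held open by some process."""
    for fd_path in glob.glob("/proc/[0-9]*/fd/*"):
        try:
            target = os.readlink(fd_path)
            if not target.endswith("(deleted)"):
                continue
            size = os.stat(fd_path).st_size
        except OSError:
            continue  # process exited, or permission denied
        if size >= min_size:
            pid = int(fd_path.split("/")[2])
            yield pid, target, size
```

Sorting the results by size quickly shows whether the missing ~20 GB belongs to a single still-running process.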

How can you see which files are in the internal memory and which are on the SD card (formatted as internal memory)?

On Android 9 (Pie), how can I visualize the SD card (formatted as internal) versus internal storage? I need to differentiate them because I chronically run out of storage space and can't download apps, yet I have more than 200 GB on my SD card.

As far as I can tell, the operating system doesn't seem to differentiate between the two physical storage devices. Is it possible without rooting?

Bonus points if this is possible in a nice visual layout like SpaceMonger / WinDirStat / Baobab, whose interfaces let you immediately identify the problem areas.

How to hide recent files in iPadOS and iOS

Natively, iPadOS and iOS do not let you remove or hide files from the Recents section of the Files app.

1. Workaround (any file type)

If you want to prevent files from appearing in the Recents section:

  1. Create a folder on the "On My (device)" tab.
  2. Move any files you want to remove from Recents into the folder you created.
  3. Long-press the folder and select Compress.

This creates a ZIP duplicate of the original folder with all of the files in it.

  4. Delete the original, uncompressed folder.

This does not delete the folder immediately; it is moved to Recently Deleted instead, in case you want to restore your files. (This also removes the folder and its contents from the Recents section, so you can technically view the files from Recently Deleted without Recents showing anything, even after you have restored the content.)

  5. Tap the compressed file to extract a duplicate of the original folder. None of the files in the extracted folder will be listed in the Recents section until they are opened again.
  6. Assuming no files have been added or changed (if files have been added, start again from step 1), delete the uncompressed folder after viewing the content and decompress the ZIP file again. Rinse and repeat as needed.

2. Workaround (images, GIFs, videos only):

This workaround applies specifically to images, GIFs, and videos.

  1. Move all the files into a folder, as in workaround 1.
  2. Select all of the images.
  3. Tap Share and press "Save Image(s)".
  4. Delete the folder with all the pictures/videos.

Again, the files/folders are not deleted immediately, and you can restore them if you wish.

  5. Open the native Photos app.
  6. Select all the pictures you want to hide.
  7. Tap Share and choose the "Hide" option.
  8. Go to the Albums section and open the "Hidden" album.

As a result, the images no longer appear in the Photos app or in any other application. Once hidden, the images are confined to the "Hidden" album.

Python – Get different values from multiple files and write them to a single file

I have 5 * .dat files here: https://filebin.net/pkh86usc4j7ttjkf

Each of them contains several parameters, but we only care about two lines:

xx
......
......

  yy

For every file, xx and yy have different values. For example, xx is 0.1, 0.3, 0.4, ... and yy is -15.95... or -13.45... and so on.

We want to extract xx and yy from the different files and collect them in a single text file. Ideally the text file would contain two columns, like:

0.3    -14.335415618223263
0.4    -17.957315618223263
0.7    -12.554415618223263
1.0    -10.997315618223263

The difficulty is that the values sit between other parameters, so we cannot simply read them out.
How can we write Python code to achieve this?
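A minimal sketch under assumptions, since the exact file format is not shown: it supposes each .dat file is plain text containing lines like "xx = 0.3" and "yy = -14.335415618223263". The labels "xx" and "yy" are placeholders taken from the question; substitute the real parameter names from your files.

```python
import glob
import re

def extract_pair(text, x_label="xx", y_label="yy"):
    """Return (x, y) as floats, or None if either label is missing.
    Accepts 'label = value', 'label: value', or 'label value'."""
    pattern = r"^\s*{}\s*[=:]?\s*(-?\d+(?:\.\d+)?)"
    x = re.search(pattern.format(re.escape(x_label)), text, re.MULTILINE)
    y = re.search(pattern.format(re.escape(y_label)), text, re.MULTILINE)
    return (float(x.group(1)), float(y.group(1))) if x and y else None

def collect(pattern="*.dat", out_path="collected.txt"):
    """Extract (xx, yy) from every matching file and write a
    two-column table sorted by xx."""
    rows = []
    for path in sorted(glob.glob(pattern)):
        with open(path) as fh:
            pair = extract_pair(fh.read())
        if pair is not None:
            rows.append(pair)
    with open(out_path, "w") as out:
        for x, y in sorted(rows):
            out.write(f"{x}    {y}\n")
```

Anchoring the regex to the start of a line keeps it from picking up values belonging to other parameters; if your labels appear on a line separate from their values, the pattern would need to span two lines instead.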