c# – Are Flate compression in PDF and the Deflate algorithm two different things?

I’m trying to write a program that produces PDF files as output. I’ve been studying the PDF format specification, along with specific PDF files whose format I’m trying to mimic. These documents contain the line /FlateDecode, and when I searched for this compression algorithm, the Google results pointed to a built-in .NET algorithm, Deflate.

The thing is, when I try to decode the encoded text from the aforementioned files using Deflate, the C# algorithm returns nothing (I copied the binary data from a hex editor into a new file, cutting off both the leading and trailing newline, 0x0A). Online Deflate decoders also say the text is invalid (there I copied the data from a text editor, so it was text in ANSI encoding), as in not encoded by the same algorithm. This leads me to believe that, despite Google’s best efforts, it turned up a similarly named but not identical compression method.

If this hypothesis is correct, does anybody know of a publicly available, already-implemented Flate encoder, or do I have to write my own based on the PDF file format specification?

If the hypothesis is incorrect and I’m just screwing up something, what am I screwing up?
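For reference, the two names describe the same underlying algorithm: PDF’s /FlateDecode filter stores its data in the zlib container (RFC 1950), which is raw Deflate (RFC 1951) plus a 2-byte header and an Adler-32 trailer, while .NET’s DeflateStream operates on the raw format only, which would be consistent with the failures described above. A minimal Python sketch of the distinction (the sample data is invented):

```python
import zlib

data = b"Hello, PDF stream!"

# zlib format: 2-byte header + raw deflate + 4-byte Adler-32 trailer.
# This is what a /FlateDecode stream in a PDF contains.
stream = zlib.compress(data)
print(stream[:1])  # the first header byte is typically 0x78

# Decoding as zlib works directly:
assert zlib.decompress(stream) == data

# Raw deflate (what .NET's DeflateStream expects) is the same bytes
# minus the 2-byte header and the 4-byte trailer; wbits=-15 tells
# zlib to expect the raw format:
raw = stream[2:-4]
assert zlib.decompress(raw, -15) == data
```

So feeding a /FlateDecode stream to a raw-Deflate decoder fails on the 2-byte zlib header; skipping those two bytes (or using a zlib-aware decoder) is the usual fix.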


linux – unzip multiple files and folders with different compression formats and passwords

Some time ago I asked this question and posted my bash script to decompress multiple files, in different compression formats, with different passwords:

PASS="passfoo passbar passfoobar"
LIST=$(ls -1 *.{zip,7z,7z.001,rar})
for password in $PASS; do
  for i in $LIST; do
    echo "$password"
    7z x -y -p"$password" "$i" -aoa
    if [ "$?" -eq 0 ]; then
      break
    fi
  done
done

And I selected this answer as correct:

#!/usr/bin/env bash

shopt -s extglob nullglob

passw=( passfoo passbar passfoobar )

for f in *.@(zip|7z|7z.001|rar); do
  for p in "${passw[@]}"; do
    if 7z x -y -p"$p" "$f" -aoa; then
      break
    fi
  done
done

But my script still contained errors with multipart .rar files. So I asked another question, and received an answer that fixed the problem, leaving the script as follows:

#!/usr/bin/env bash
shopt -s extglob nullglob nocaseglob
passw=( passfoo passbar passfoobar )
for f in *.@(rar|zip|zip.001|7z|7z.001); do
  [[ "$f" =~ \.part[[:digit:]]+\.rar$ ]] && ! [[ "$f" =~ \.part0*1\.rar$ ]] && continue
  for p in "${passw[@]}"; do
    if 7z x -y -p"$p" "$f" -aoa; then
      break
    fi
  done
done

But my script still has problems. For example:

  1. When the script tries to unzip a compressed file that has an error preventing extraction, the script runs indefinitely

  2. When the script unzips an archive that contains a folder with various files, it extracts the files inside the folder (OK), but it then also tries to "unzip" even the files that have no compression extension (e.g. *.url), gets "ERROR CRC wrong password", and keeps running, testing every password in the script against files that were never compressed

Can anyone help?

sql server – Database Page Compression and ColumnStore Archive – is data compressed when in buffer pool and when sent to client?


complexity theory – How is compression ratio related to tree structure in Huffman?

Consider a Huffman tree T for a text file. Will the compression ratio be higher if the tree is more balanced?

What is the relation between the tree structure and the compression ratio?

I tried thinking about it this way: a perfectly balanced Huffman tree is the same as a uniform (fixed-length) encoding, and it arises when the frequencies of all letters in the text file are equal, as opposed to when the frequencies differ. But then it is hard to say which case gives the better compression ratio: a balanced tree or a skewed one?
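One concrete way to test the intuition is to build the Huffman codes and compare average code lengths. In the Python sketch below (frequency tables invented for illustration), uniform frequencies over 8 symbols produce a balanced tree averaging exactly 3 bits/symbol, no better than fixed-length coding, while a skewed distribution over 4 symbols produces a skewed tree whose average beats the 2-bit fixed-length code:

```python
import heapq

def huffman_code_lengths(freqs):
    """Return {symbol: code length} for a Huffman tree built from freqs."""
    # Heap items: (weight, unique tiebreaker id, {symbol: depth-so-far}).
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees pushes every symbol in them one level deeper.
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, next_id, merged))
        next_id += 1
    return heap[0][2]

def avg_bits(freqs):
    """Average Huffman code length, weighted by symbol frequency."""
    total = sum(freqs.values())
    lengths = huffman_code_lengths(freqs)
    return sum(freqs[s] * lengths[s] for s in freqs) / total

uniform = {c: 1 for c in "abcdefgh"}           # balanced tree
skewed = {"a": 100, "b": 20, "c": 5, "d": 1}   # skewed tree
print(avg_bits(uniform))  # 3.0, same as a 3-bit fixed-length code
print(avg_bits(skewed))   # below 2.0, beating the 2-bit fixed-length code
```

The takeaway: a skewed tree is not worse; it is exactly what Huffman produces when frequencies are skewed, and in that case it compresses better than the fixed-length baseline, whereas a balanced tree gains nothing over fixed-length coding.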

applications – How to change the compression and format settings of images taken in 8k+ 64MP on my Samsung Galaxy S20+ 5G

The Camera App natively supports 64MP and I have read that my phone (Samsung Galaxy S20+ 5G) may be able to go even higher.

I’m taking lots of pictures in 64MP and it’s great, but I’ve noticed an issue.

The images appear to be compressed in the lossy JPEG format, with visible compression artifacts.

I’d much prefer that they be stored losslessly when taken; later I can deal with how I want to compress/handle them.

I have not yet found an app or setting that can do this.

Most of the apps and settings I’ve used to get a RAW image generally work, but none of them have preserved the ability to take pictures at 64 MP; instead they apply to 4K, which is not what I’m especially interested in.

I’ve tried the Camera App’s Pro Mode, Open Camera, CameraMX, and Lightroom. I even tried a special build of Google’s Camera App for my phone that I got from XDA Developers. CameraMX just crashes at start.

sql server – Test to see when table is candidate for compression (row or page)

Stephen’s answer pretty much sums up the information you’re looking for; I’m just adding my two cents, hopefully without being redundant.

Backups shouldn’t take any longer because of page or row compression. Worst case, they take about the same time, and they can possibly be faster, because part of the normal backup process is compressing the data. I would presume (I haven’t tested it yet) that already-compressed data measurably speeds up that step of the backup process, since the work has been done ahead of time.

With compression, you’re essentially trading more CPU resource consumption for improved disk and memory performance. Data is persisted on disk and moved to and from memory compressed. As Stephen pointed out, it isn’t until the data is delivered to the consuming application that it’s decompressed, which is when you may see higher CPU usage. Or, as you guessed, when new data needs to be written to the database, it is compressed first.
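The CPU-for-space trade can be felt with any general-purpose compressor. The sketch below uses Python’s zlib on an invented, repetitive "page" of row data (SQL Server’s row/page compression is a different, lighter-weight prefix/dictionary scheme, but the principle is the same):

```python
import time
import zlib

# An invented, highly repetitive "page" of rows, the kind of data
# that compresses well in practice.
page = b"CustomerName,Region,Status\n" + b"ACME Corp,North,Active\n" * 300

t0 = time.perf_counter()
packed = zlib.compress(page, 6)          # writes pay CPU to shrink the data
cpu_cost = time.perf_counter() - t0

print(f"raw: {len(page)} bytes, compressed: {len(packed)} bytes")
print(f"compression took {cpu_cost * 1e6:.0f} microseconds of CPU time")

# Reads pay the reverse cost: decompress before handing rows onward.
assert zlib.decompress(packed) == page
```

The smaller on-disk and in-memory footprint is bought with those microseconds of CPU on every write and read, which is why monitoring CPU after enabling compression matters.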

You should monitor your CPU throughout the day, especially during the common and heavy tasks your environment’s workload typically endures. If you find your CPU is constantly pegged near 100% (and it previously wasn’t), you may want to investigate whether your compression settings are the main contributor, and try loosening the settings on some of your most frequently used or larger tables/objects.

I always run sp_estimate_data_compression_savings (as Stephen mentioned) before changing compression; it’s pretty helpful in telling you how beneficial it may be to compress an object one way or another versus not compressing at all. You can use it to check whether you are unnecessarily compressing some tables for no benefit in return.

Though generally I’ve found modern CPUs handle it well, even under high-paced workloads (in my experience working with semi-big data), and I have seen many opportunities to utilize otherwise unused CPU resources. On that note, I like one of Brent Ozar’s mantras: unused server resources are (to a degree) a waste, since you’re paying for something you’re not using.

plugins – WordPress built-in compression of images? How well does it compare to Photoshop or Shortpixel?

I think the question is pretty straightforward.

I have around 5000 images I want to upload. I need to compress them somehow, but how?!

  1. Photoshop compression
  2. The built-in compression of new WordPress versions

Does Photoshop do anything better? The only thing I can see in Photoshop is basically the compression percentage for JPEG. It also offers a preview of the image, but I don’t really care about or need that for these types of images; I would do it in a batch either way.

So really, what is the difference, and which would be more beneficial: Photoshop compression or Shortpixel compression?

What is the best method to backup Windows 10 system partition with the highest compression level?

So I have made up my mind to back up my current Windows 10 installation. I have made many tweaks to the current system, disabled many unnecessary services, and installed all the programs that need to be installed. I have run many checks, tested it again and again, and determined it to be stable and fully functional. Now seems like a good time to make a full partition backup lest I somehow break it again; System Restore doesn’t cut it anymore.

Which brings up the question of how best to back up a system partition with a huge number of files and all those access control lists. I want to do it all right, while making the backup file as small as possible.

Currently, on my Samsung 870 QVO SSD, 603 GiB is available out of 931 GiB total, so about 328 GiB is used. I have a Seagate Exos 7E8 HDD, and I am going to put this backup into a 3 TiB NTFS partition on it, but I don’t want the backup to take up 328 GiB of space; I want the backup file to have as high a compression ratio as possible, to save the maximum space.

Right now I am using another Windows 10 system that I installed on another HDD, and I have tried to run this command:

dism /capture-image /imagefile:D:\backups\Windows10.wim /capturedir:F:\ /name:Windows10 /compress:max

But it just keeps giving me this error:

Deployment Image Servicing and Management tool
Version: 10.0.19041.572

Error: 32

The process cannot access the file because it is being used by another process.

The DISM log file can be found at C:\Windows\Logs\DISM\dism.log

Do I have to run the above-mentioned command inside a Windows Preinstallation Environment? I don’t really have a spare USB drive right now.
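Running the capture from WinPE is one way around files being in use. Another, offered here only as a hedged suggestion, is the free and open-source wimlib: its wimlib-imagex tool can take a VSS snapshot before capturing (so open files don’t abort the capture), preserves NTFS ACLs, and offers LZMS compression, which generally compresses tighter than DISM’s /compress:max. The paths below are examples only:

```shell
wimlib-imagex capture F:\ E:\backups\Windows10.wim "Windows10" --snapshot --compress=LZMS
```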

Aside from DISM, the only other thing I know of that can get the job done is Norton Ghost, but that isn’t free, has become history by now, and I can’t find a copy of it anyway (nor do I want to).

So what other free options do I have that can back up a whole Windows 10 installation partition, preserving ACLs, with a very high compression ratio?

archiving – Dictionary Size and Compression Method

I have some knowledge of archiving, since I have tried it many times in a basic way: right-click, Add to archive, press OK. But this is my first time modifying the settings.

I want to store the folder in an archive without compressing it, retaining its original size and everything. However, if I select the Store option, there is still a dictionary size. What is that?
A Store compression method and a choice of dictionary sizes

I tried it myself to see the results:

This is Store with 64 KB

This is Store with 4096 KB

I am a little bit confused about the sizes. Is there anything here that leads to a loss of data in the long run?

That is why I decided to archive the folder without compressing it, so that there is no loss of data and I can upload it to the internet efficiently. However, there is this dictionary size, which I am uncertain about.
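For what it’s worth, the dictionary size is the sliding-window buffer a compressor such as LZMA searches for repeated data; with the Store method nothing is compressed, so the setting has no effect, and neither choice loses data, since both methods are lossless. A small Python sketch (payload invented) using the ZIP format to show Store versus Deflate, with both round-tripping byte-for-byte:

```python
import io
import zipfile

payload = b"example data " * 1000  # invented, repetitive sample data

def archive_size(method):
    """Size of a one-file ZIP archive written with the given method."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", method) as zf:
        zf.writestr("data.bin", payload)
    return buf.tell()

stored = archive_size(zipfile.ZIP_STORED)      # no compression: ~payload size
deflated = archive_size(zipfile.ZIP_DEFLATED)  # compressed, still lossless
print(f"stored: {stored} bytes, deflated: {deflated} bytes")

# Both methods round-trip to identical bytes, so no data is lost either way:
for method in (zipfile.ZIP_STORED, zipfile.ZIP_DEFLATED):
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", method) as zf:
        zf.writestr("data.bin", payload)
    assert zipfile.ZipFile(buf).read("data.bin") == payload
```

The difference between the two is only size and CPU time, never fidelity; a dictionary-size setting shown next to Store is simply ignored.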