Some folks will know I have a strong focus on performance and efficiency. If I can figure out how to do a task better and faster than the standard way, I will work towards doing that. One task I always focus on is data backup and restoration speed: the faster I can back up and restore file data, the better. Part of the backup and restoration process is compression and decompression speed, which essentially comes down to the type of compression algorithms and tools you use and the system resources you have available, i.e. CPU speed, number of CPU cores/threads, and memory.

In the past I have done comparison benchmarks for the various compression algorithms and tools I normally use. This time I have added two new compression algorithms: Facebook's Zstandard (zstd), a realtime compression algorithm which is said to be way faster than gzip/zlib with comparable compression ratios, and Google's Brotli, which has better compression ratios. The tools tested were:

- zstd & pzstd v1.3.0 - Facebook developed realtime compression algorithm.
- brotli v1.0.0 - Google developed Brotli compression algorithm.
- pigz v2.3.4 - multi-threaded version of gzip.
- pbzip2 v1.1.12 - multi-threaded version of bzip2.
- lbzip2 v2.5 - multi-threaded version of bzip2.
- lzip v1.19 - based on the LZMA compression algorithm.
- plzip v1.6 - multi-threaded version of lzip.
- pxz v5.2.2 - multi-threaded version of xz.

The test data set was taken from the Silesia Compression Corpus zip file here, turned into a tar archive for the compression tests. The test server ran the Centmin Mod 123.09beta01 LEMP stack (Nginx 1.13.4, MariaDB 10.1.26 MySQL, plus CSF Firewall). Below are the comparison results for the compression tests, with links to the raw data as well.

Compression levels 1-9 were tested, even though some compression algorithms allow higher levels, e.g. pigz has a specific level 11 for Zopfli compression, and zstd/pzstd has levels up to 19-22, where it can match xz/pxz in terms of compression ratio.

Compress and decompress times are in seconds. Compress and decompress CPU % is the percentage of CPU utilisation, where 100% = 1 CPU thread and 800% = 8 CPU threads. Compression ratio is the ratio of original size to compressed size, so the larger the compression ratio, the better the compression and the smaller the resulting compressed file. The best compression ratio goes to xz/pxz, followed by lzip/plzip and then the various bzip2 implementations.
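To make those metrics concrete, here is a minimal Go sketch that measures compression time and compression ratio for a single file. It uses the standard library's compress/gzip rather than the external tools above (a real harness would drive those via os/exec), and the input file name is a placeholder:

```go
package main

import (
	"compress/gzip"
	"fmt"
	"io"
	"os"
	"time"
)

func main() {
	// Placeholder input; the benchmarks above used a tar archive of
	// the Silesia corpus instead.
	const name = "silesia.tar"

	in, err := os.Open(name)
	if err != nil {
		panic(err)
	}
	defer in.Close()
	orig, err := in.Stat()
	if err != nil {
		panic(err)
	}

	out, err := os.Create(name + ".gz")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	// Time the compression pass at level 9, the highest level tested.
	start := time.Now()
	zw, err := gzip.NewWriterLevel(out, gzip.BestCompression)
	if err != nil {
		panic(err)
	}
	if _, err := io.Copy(zw, in); err != nil {
		panic(err)
	}
	if err := zw.Close(); err != nil {
		panic(err)
	}
	elapsed := time.Since(start)

	comp, err := out.Stat()
	if err != nil {
		panic(err)
	}
	// Compression ratio as defined above: original size / compressed size.
	ratio := float64(orig.Size()) / float64(comp.Size())
	fmt.Printf("compress time: %.2fs, ratio: %.3f\n", elapsed.Seconds(), ratio)
}
```

A sketch like this only reports time and ratio; the CPU % numbers above came from monitoring the tools while they ran.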
Well, I downloaded Go, tried the instructions from the README, and nothing happened. And it is foreign to me. But, regardless, you have no Makefile nor any build scripts that would allow a user to compile it. Without that, how do you expect anyone to make any changes to the project, which is quite big? There are 79 .go files, and not all of them belong to the same "subproject". No one is going to investigate which file is what and create a build script for such a big project for the sake of a little change they would like to introduce and test.

All that is needed is a go.mod file; the rest is handled by the go command. As I wrote, this is natural for a Go developer. Rust doesn't have build scripts either: it has the cargo command and the Cargo.toml file.

As I said, I don't know Go, so how does that relate to that? I'm far from understanding it, especially in a foreign language (I mean Go). If I'm not mistaken, this is the table: fastpos_table.c. I also made a test and converted lz to lzma and lzma to lz by contriving a header and copying the LZMA stream, without making a footer. And the result surprised me: xz decompressed the LZMA stream from lz, and lzip decompressed the LZMA stream from xz. If those probabilities are indeed different, then the decompressor would fail at some point, wouldn't it? Attached converter script: lz-to-lzma.zip

You cannot directly do lzip with my LZMA package; you would need to do something comparable to your lz-to-LZMA converter, because my LZMA reader and writer require the LZMA header. So I need to export a reader and writer for the raw LZMA stream without the header.

fastpos_table is not a probability table. I do the calculation for the match distance in line 78 of distcodec.go, not using a fastpos table. I use a nlz32 function, which computes the number of leading zero bits of a 32-bit unsigned integer; the implementation for nlz32 is in bitops.go. The comment in fastpos.h references the bsr (bit scan reverse) machine instruction, and what I am doing is comparable to that approach. Actually, I did the implementation simply from the specification; I only looked at the reference implementation that was part of the specification. In my update I will use the LeadingZeros32 function from the math/bits package of the Go standard library; the package didn't exist when I wrote the code. The function call will be translated by the compiler directly into a machine instruction, which will be faster than any table lookup.
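To illustrate the arithmetic being described, here is a sketch of the standard LZMA distance-slot calculation written on top of math/bits.LeadingZeros32. It is a reconstruction of the technique, not the actual distcodec.go code:

```go
package main

import (
	"fmt"
	"math/bits"
)

// distSlot computes the LZMA distance slot for a zero-based match
// distance arithmetically, in the spirit of the bsr-based approach,
// instead of looking it up in a fastpos table.
func distSlot(dist uint32) uint32 {
	if dist < 4 {
		return dist // slots 0-3 map one-to-one
	}
	// Index of the highest set bit; the compiler turns LeadingZeros32
	// into a single machine instruction on amd64.
	n := uint32(31 - bits.LeadingZeros32(dist))
	// A slot >= 4 encodes the top bit position plus the next bit down.
	return (n << 1) | ((dist >> (n - 1)) & 1)
}

func main() {
	for _, d := range []uint32{0, 3, 4, 5, 6, 11, 12, 1 << 20} {
		fmt.Printf("dist %7d -> slot %2d\n", d, distSlot(d))
	}
}
```

Each slot from 4 upward covers a power-of-two range split in half, which is why only the two leading bits of the distance matter here.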
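Returning to the lz-to-lzma experiment above: the attached script is not reproduced here, but a hypothetical Go version of the lz-to-LZMA direction could look like the following. It assumes a single-member lzip file and lzip's fixed lc=3, lp=0, pb=2 properties, drops the 20-byte lzip trailer, and prepends a 13-byte LZMA-alone header:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"os"
)

// lzToLZMA rewrites a single-member lzip file as an LZMA-alone file by
// swapping the container header around the raw LZMA stream. This is an
// illustration of the experiment, not the attached lz-to-lzma.zip script.
func lzToLZMA(in []byte) ([]byte, error) {
	// lzip member layout: 6-byte header ("LZIP", version, coded
	// dictionary size), LZMA stream, 20-byte trailer (CRC32 of the
	// data, data size, member size).
	if len(in) < 27 || string(in[0:4]) != "LZIP" {
		return nil, fmt.Errorf("not a lzip member")
	}
	// Bits 4-0 give log2 of the base dictionary size; bits 7-5 give
	// the number of 1/16 "wedges" to subtract from it.
	ds := in[5]
	dictSize := uint32(1) << (ds & 0x1F)
	dictSize -= (dictSize / 16) * uint32(ds>>5)

	hdr := make([]byte, 13)
	hdr[0] = 93 // (pb*5+lp)*9+lc = (2*5+0)*9+3, lzip's fixed properties
	binary.LittleEndian.PutUint32(hdr[1:5], dictSize)
	for i := 5; i < 13; i++ {
		hdr[i] = 0xFF // uncompressed size unknown; rely on the end marker
	}
	return append(hdr, in[6:len(in)-20]...), nil
}

func main() {
	in, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	out, err := lzToLZMA(in)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile(os.Args[1]+".lzma", out, 0644); err != nil {
		panic(err)
	}
}
```

The reverse direction would contrive the 6-byte lzip header the same way; in the experiment above even the trailer was skipped and both decoders still accepted the raw stream.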