rex files
Same reason a zip file can crunch 10 Word files down to half their size: it's just compression.
Reducing size by half is fairly easy for most compression algorithms, and with such a low compression ratio there is no need to lose quality.
To reduce by 10 times or more (e.g. compressing a bitmap to a JPEG, or a WAV file to an MP3) is only possible if you start discarding bits of data to allow the compression to be more effective.
Here's a (fairly crap) analogy; the following string is 30 characters long:
aaaaabbbbbcccccbccccaaaaababaa
Here is the same 30 characters, but compressed:
5a,5b,5c,b,4c,5a,b,a,b,2a
That's now down to 25 characters, but it holds the same information without loss. Here it is again, but with some tweaking of the compression to lose bits of info in order to compress further:
5a,5b,10c,10a
Now down to 13 characters, but we've lost 3 bits of the data (the last 3 "b"s).
Imagine that those 3 "b"s represented a sound, or a pixel colour, that sounded or looked so close to the ones next to it that you could lose it without noticing too much. That's all a lossy compression algorithm does.
Of course, the more you compress, the more you lose and the worse it gets, but the above should show that you can compress with a guarantee of no quality loss if you want to.
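Here's that run-length idea as a quick Python sketch (my own illustration, not anything zip or REX actually uses). The first two functions are the lossless round trip; the third is the lossy tweak that swallows the lone "b"s:

```python
def rle_encode(s: str) -> str:
    """Encode a string as comma-separated runs, e.g. 'aaab' -> '3a,b'."""
    runs = []
    i = 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        count = j - i
        runs.append(f"{count}{s[i]}" if count > 1 else s[i])
        i = j
    return ",".join(runs)


def rle_decode(encoded: str) -> str:
    """Reverse rle_encode exactly -- this is what makes it lossless."""
    out = []
    for run in encoded.split(","):
        if len(run) > 1:
            out.append(run[-1] * int(run[:-1]))  # e.g. '5a' -> 'aaaaa'
        else:
            out.append(run)
    return "".join(out)


def rle_lossy(s: str, min_run: int = 2) -> str:
    """Lossy variant: runs shorter than min_run get absorbed into the run
    before them, so the odd characters out are simply discarded."""
    runs = []  # list of [count, char]
    i = 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        count, ch = j - i, s[i]
        if count < min_run and runs:
            runs[-1][0] += count       # absorb the short run: data lost here
        elif runs and runs[-1][1] == ch:
            runs[-1][0] += count       # merge with a matching neighbour
        else:
            runs.append([count, ch])
        i = j
    return ",".join(f"{c}{ch}" if c > 1 else ch for c, ch in runs)


original = "aaaaabbbbbcccccbccccaaaaababaa"
packed = rle_encode(original)
print(packed)                           # 5a,5b,5c,b,4c,5a,b,a,b,2a
assert rle_decode(packed) == original   # perfect reconstruction: lossless
print(rle_lossy(original))              # 5a,5b,10c,10a -- the 3 "b"s are gone
```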
A compressed file works together with a compression/decompression program. Together the two encode and decode data streams. The compressed file represents instructions for reconstructing the original data stream. The program provides the algorithms and data-handling routines to create the compressed file and to perform the reconstruction.
As long as the original information can be reconstructed perfectly from the compressed version, there is no loss. There's a limit to how much you can compress without losing any information. If you're willing to sacrifice some of your original data, you can squeeze tighter and get a smaller compressed file.
In the case of JPEG graphics, you can easily see the result of data loss: artifacts -- the squarish, blocky things -- appear when compression becomes excessively lossy. When you compress music too much, your MP3 (or whatever) will have audible artifacts in the form of distorted sound and weird background ringing.
When using lossy compression, the trick is to reach a comfortable balance between file size and reconstructed data quality. It's up to the individual compression user to decide where that balance lies, based on his or her needs. If you must have high fidelity sound, you'll have to compress less. If bandwidth is the paramount concern, you must accept lower quality. To some extent, improved compression algorithms can help -- but they really just push the borderline further toward high quality at small file sizes. You'll still have to make the same decision, but the results will be better.
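To make that concrete, here's a small sketch using the Pillow imaging library (my pick for illustration; nothing in this thread depends on it) that saves the same image at several JPEG quality settings, so you can watch the file shrink as the artifacts grow:

```python
import io

from PIL import Image  # Pillow: pip install Pillow

# Build a simple 256x256 RGB gradient as a stand-in test image.
img = Image.new("RGB", (256, 256))
img.putdata([(x, y, (x + y) // 2) for y in range(256) for x in range(256)])

for quality in (95, 50, 10):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)  # lower quality -> more loss
    print(f"quality={quality}: {buf.tell()} bytes")
```

That quality knob is exactly the balance described above: you decide where on the curve to sit.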
I've deliberately omitted two parameters: the time required to compress and decompress files. That's a separate issue that would only add confusion. I've also been a bit cavalier in using "data" and "information" interchangeably. They ain't the same thing! But this is an informal description and I think no harm is done by fudging. :-)
HTH!
The more you compress using a lossy compression algorithm, the more bits you lose and the smaller it gets, but the worse it looks or sounds. You get to choose the quality (e.g. 128kbps, 320kbps) and thus the ratio used for lossy formats like MP3 and JPEG.
Rex files (and FLAC -- search for it, it's interesting) are lossless, so they hit a limit on how small they can go before they can't get any smaller without losing stuff, which they refuse to do. This is typically about half. You usually don't get to choose the ratio: the encoder just does what it can, and you won't be able to get it smaller.
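You can watch that lossless ceiling with any general-purpose compressor. Here's a sketch using Python's zlib as a stand-in (an assumption on my part -- REX and FLAC use audio-specific tricks, but the principle is the same): redundant data shrinks a lot, random data barely budges, and the round trip is always exact:

```python
import os
import zlib

repetitive = b"abcde" * 20_000     # highly redundant, 100,000 bytes
random_ish = os.urandom(100_000)   # effectively incompressible

for label, data in (("repetitive", repetitive), ("random", random_ish)):
    packed = zlib.compress(data, 9)          # 9 = best compression level
    assert zlib.decompress(packed) == data   # lossless: exact round trip
    print(f"{label}: {len(data)} -> {len(packed)} bytes")
```

There's no quality setting to reach for: once the redundancy is gone, a lossless codec simply can't go any smaller.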
Don't feel bad if it's hard to wrap your mind around. Data compression isn't exactly intuitive! It is a complicated subject, and I don't begin to understand some of the more advanced kinds of algorithms and file formats used.
For a real hoot, check out Iterated Systems, Inc.'s FRACTAL compression and the .fif format. Using .fif, you can achieve 50:1 compression ratios with good fidelity... and you can actually zoom IN, apparently revealing details not in the original image! It's a trick, really, a clever hand-wave -- but it works surprisingly well on a wide variety of source images.