Pontus, congratulations on this awesome video!!!
Humble thanks pal(s)!
@radwat1941 Do you have any insight on the Supercompactor? It's supposed to be by Flash. Anyone able to connect me to the coder?
AFAIK the original code was done by the "TH-Double Team", aka Thomas Borris and Thomas Meidinger … it was published by the German RUN Magazine. The program was named "The Final Super-Compressor".
Any contact with them?
I'll try to get in contact with 'em. Cheers, MWS
1) I learned how to use Matcham's Time Cruncher as a teenager in 1988, without internet or help from anyone. Was fascinated. Built a PAL->NTSC switch into the machine to gain maybe 10% speed for such stuff, but using it would crash the machine sometimes lol. Apart from that I remember Matcham's speed packer on the same disk (it's on CSDb). It was very much faster and now I wonder how they achieved that...
2) I also thought that on the Amiga there was only lossless compression, until I recently learned how 4-bit delta sample compression works. One of the rare lossy examples. It has exactly a 50% compression ratio, as each byte becomes a nibble (a small sketch follows after this list).
3) Super episode, thanks!
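Not the actual Amiga routine, just a minimal sketch of the idea behind 4-bit delta sample compression: each sample is replaced by a 4-bit index into a small delta table, and two indices are packed per byte, which is where the exact 50% ratio comes from. The delta table and function names here are made up for illustration.

```python
# Hypothetical delta table; real players use tables tuned to the sample material.
DELTA_TABLE = [-64, -32, -16, -8, -4, -2, -1, 0, 1, 2, 4, 8, 16, 32, 64, 127]

def clamp(v):
    return max(-128, min(127, v))

def encode(samples):
    """Lossy-encode signed 8-bit samples into packed 4-bit delta indices."""
    prev, nibbles = 0, []
    for s in samples:
        # pick the table entry that best approximates the true difference
        idx = min(range(16), key=lambda i: abs(clamp(prev + DELTA_TABLE[i]) - s))
        prev = clamp(prev + DELTA_TABLE[idx])   # track what the decoder will reconstruct
        nibbles.append(idx)
    if len(nibbles) % 2:
        nibbles.append(DELTA_TABLE.index(0))    # pad with a zero delta
    return bytes((nibbles[i] << 4) | nibbles[i + 1] for i in range(0, len(nibbles), 2))

def decode(packed):
    prev, out = 0, []
    for byte in packed:
        for idx in (byte >> 4, byte & 0x0F):
            prev = clamp(prev + DELTA_TABLE[idx])
            out.append(prev)
    return out
```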
Thanks for watching!
So nice that you found Matcham and got him to explain & answer questions. He was most def the cruncher king. Also, he didn't live too far from me. I remember 1001 was so proud of their cruncher, the card cruncher, which they even put onto cartridge and tried to sell to software houses (going to the PCW show in 1985 or 1986). But compared to Matcham's Time Cruncher, it came up short. I remember that "special" versions were made for the biggest groups at the time. So Matcham was respected by the best. I have an HTML file with the code for Time Cruncher, where I made all the jumps, JSRs and branches clickable so they jump to the correct addresses. It's impressively short.
Galleon dived into the source code and came up with his Cruel Cruncher, one that had the 2 MHz function for the C128. It shortened the crunching time by a factor of 2.
Note to self: I should watch the whole video before commenting. 😅 Many of the things I wrote were covered in the video after the "interview". 😛
Ha ha ha :)
There will be more in the interview next week!
Great explanation on picking which value to use for the packer control byte. WAY back in prehistoric times I encountered packers that didn't scan before packing, and either used a hardcoded value or let the user pick which one to use. That would either result in a very inefficient packer, or the user having to go through a lot of trial and error to make sure the packed file wasn't unnecessarily large.
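A minimal sketch of the scan-before-pack idea (not any particular packer's code; the output format and names are made up): count how often each byte value occurs and use the rarest one as the control byte, so escaping it costs as little as possible.

```python
from collections import Counter

def pick_control_byte(data: bytes) -> int:
    """Return the least-used byte value, to minimize how often the
    control byte itself appears in the data and has to be escaped."""
    counts = Counter(data)
    return min(range(256), key=lambda v: counts.get(v, 0))

def rle_pack(data: bytes) -> bytes:
    ctrl = pick_control_byte(data)
    out = bytearray([ctrl])                      # store the chosen control byte first
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        if run >= 4 or data[i] == ctrl:
            out += bytes([ctrl, run, data[i]])   # control byte, count, value
        else:
            out += data[i:i + run]               # short runs of other values stay literal
        i += run
    return bytes(out)
```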
Glad you like it! (And for the rest, this is QED/Triangle - a good friend from ancient times).
Without scanning you could still get lucky, if the value picked happened to be one that a scanner would also have found. Looking at EBC, it was of course scanning, but I seem to remember that when I reassembled it recently, it had errors scanning for values repeating more than 256 times.
@@FairLight1337 I can't fully recall how EBC handled values repeating for more than one page (it was 33 years ago, after all) but seeing as I "remastered" EBC 1.9 to build in KickAssembler back in 2021, it's possible that I could go back and check.
I did the same (the version you did for me at the meeting in Öland).
I am only 5 minutes in, but the easy explanation of lossy and lossless compression is stunning.
I would have needed many more words.
But you got it right, in its simplicity.
Impressed.
I generally feel I am babbling a bit. I'm glad you don't see it like that :)
Very interesting stuff, thanks for sharing.
Glad you like it!
There are a couple of cases of lossy compression being used on C64s and Amigas: when people store animations using PETSCII chars that are close, but not identical, to the 8x8 blocks of pixels, and I'm pretty sure those Blender 3D animations in the winning demo from Xenium 2024 used a limited number of redefined chars to store the animations lossily.
And on the Amiga and Atari ST, ADPCM is used for sample playback, because it can be decompressed on-the-fly fast enough to be viable in demos.
Great video as always! ❤
Vector quantization is probably what you wanted to say. :-) It's used a lot in C64 productions.
@@az09letters92 Aha, I didn’t know it had a name, thank you!
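For what it's worth, here is a hypothetical sketch of that "closest character" idea, a simple form of vector quantization (not the method used in any particular demo): each 8x8 pixel block, stored as 8 row bytes, is replaced by the index of the charset glyph that differs from it in the fewest pixels.

```python
def popcount(x: int) -> int:
    return bin(x).count("1")

def block_distance(block, glyph):
    """Number of differing pixels between two 8-byte bitmaps."""
    return sum(popcount(a ^ b) for a, b in zip(block, glyph))

def quantize_frame(blocks, charset):
    """Map every 8x8 block of a frame to the nearest charset entry.
    blocks and charset are lists of 8-byte sequences; the result is one
    byte (a char index) per block instead of eight -> 8:1, but lossy."""
    return bytes(
        min(range(len(charset)), key=lambda c: block_distance(b, charset[c]))
        for b in blocks
    )
```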
Thanks!
Thanks!
Crunchers are such a fascinating subject matter. I started with the Time Crunch 5.0, spending many a night crunching, but once Galleon of Oneway released Fast Cruel 2.5+, it was game over: it beat everyone in size. I didn't mind trading time for size because I wasn't competing against anyone and I was trying to save as much disk space as possible, since I had no cartridge, size affected loading times and I didn't have any money to buy more floppies when I was starting out. Disk space therefore was always at a premium: the smaller the files, the more floppies could be freed, since every floppy disk was precious. Back in those days, one could only buy floppies outside of the country, so saving as much disk space as possible was no joke.
My first disk was €6 - for one! So space was indeed precious. Thanks for the post!
Don't recall Fast Cruel, last C64 packer I used was probably AB Packer.
Agreed. That was indeed what I used as well before doing everything on the PC.
Great video. Matcham! Back in the 80's, I hand wrote a letter to him asking if he could help me convert C64 Time Cruncher to the Amiga and he replied and provided a complete 68000 source listing for Time Cruncher V4. Excellent product for the time.
He is really a humble and great guy. Truly likable!
Great content, I love it!!!! A lot of your stuff helped me get back into C64 programming. Soon I will release a new game that you helped with through your content.
Humble thanks!
I built and packaged Exomizer for Solaris 10 on i86pc and sparc platforms. Works like a charm, but lacks a good manual page to go along with it. I wonder who would win: Exomizer or Fast Cruel 2.5+.
@AnnatarTheMaia Exomizer, I would imagine. I did some tests using it as both a packer and a cruncher a few years ago and it beat everything I put it against, although it did have longer depack times.
But as it's also packing via a PC, it takes seconds, not all night.
/Pitcher
I love compression (especially on the C64). Made a Huffman encoder just for fun. Looked into the package-merge algorithm as well. RLE of course too. Tried some optimizations (pre-formatting the string etc.; encoding of the byte count etc. can be done in smart ways. I love this kind of stuff 😂)
I do too, as I'm sure you can tell :)
@@FairLight1337 indeed! And even though it is inefficient as hell, it's still fun 😎
:)
For "option 2" packers (ie. control byte + number of occurrences + value), the packer should pick its control-byte based only on occurrences of each of the 256 possible values that do not occur on their own or in a 2-byte sequence, as those are the only occurrences that increase the size of the compressed data.
At least, that's how I did it from EBC V1.6 onward, back in 1991. :)
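If I read that right, a sketch of that selection rule might look like the following (hypothetical code, not EBC itself): runs of three or more get packed no matter which control byte is chosen, so only a value's occurrences in runs of length 1 or 2 matter, and the value with the fewest of those makes the cheapest control byte.

```python
def pick_control_byte(data: bytes) -> int:
    """Count, per value, only the bytes sitting in runs of length 1 or 2;
    runs of 3+ are packed anyway, so only short runs of the chosen control
    byte would need escaping and thus grow the output."""
    short_run_bytes = [0] * 256
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i]:
            run += 1
        if run <= 2:
            short_run_bytes[data[i]] += run
        i += run
    return min(range(256), key=lambda v: short_run_bytes[v])
```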
Yepp of course!
Huffman/arithmetic are best applied _after_ the LZ stage unless you use something like LHa that integrates Huffman in its algorithm.
LZ breaks the byte boundaries and the result is totally random. I'd be surprised if Huffman worked on that.
@@FairLight1337 That's an implementation detail - you can bunch up your bits in batches of 8 and then your literals will all be naturally byte aligned and you give Huffman a chance to find some statistics to chew on.
But then your LZ would not work at its best. Is the total really better?
@@FairLight1337 It should just be a different order of your compressed bits, making them more friendly for Huffman. I believe the bit-grouping approach was popular in the later LZSS and similar variants; it makes outputting literals much faster as it is a plain copy not needing bitshifting.
Ok. I guess I need to see this in practice to see if there are efficiency gains. Without having done any statistics on real data, I can't see how this makes sense, but it could be that I don't understand the brilliance here.
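A rough sketch of the flag-byte layout being described (in the style of classic LZSS variants, not any specific cruncher's format): eight type bits are collected into one flag byte followed by the eight corresponding items, so literal bytes stay byte-aligned and could in principle still be fed through a Huffman stage afterwards.

```python
def emit_group(items):
    """items: up to 8 (is_match, payload) tuples, where payload is raw bytes.
    Returns one flag byte followed by the payloads; literals are copied
    verbatim and stay byte-aligned."""
    assert len(items) <= 8
    flags = 0
    body = bytearray()
    for bit, (is_match, payload) in enumerate(items):
        if is_match:
            flags |= 1 << bit        # set bit -> payload is an offset/length pair
        body += payload
    return bytes([flags]) + bytes(body)

# Example: three literals and one hypothetical (offset, length) match
group = emit_group([
    (False, b"A"), (False, b"B"), (False, b"C"),
    (True, bytes([0x10, 0x03])),     # match: offset 0x10, length 3
])
```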
You should give Dali a spin. It's the ZX0 implementation for the C64. But don't quote me on the tech stuff.
It's used in Bitfire, packs fast, and decompresses quickly as well.
Interesting. What are the drawbacks? Footprint of the depacker?
@@FairLight1337 As far as I understood on CSDb it's way faster than Exomizer, especially when depacking. Tbh I don't know if the pack ratio is around Exomizer's.
You should try it out! :)
The CSDb review looks promising. Thanks for the mention.
@@FairLight1337 There are no drawbacks; ZX0 has the same ratio and depacks at only about 1/4 the speed of a memcpy. The only other choices are LZ4 (for maximum depack speed) or something that goes for max compression like Shrinkler or ukpr.
Dali looks promising. I need to have a look at the size of the depacker as well. Retrofitting games with level crunching, you tend to have very little memory left. So zero page usage and memory footprint of the depacker is key.
First thing I think of when talking compression on the C64 is Turbo 250 by MrZ. If you could arrange an interview with him, then whohoo! :)
That one loads faster. No compression involved. I should have lunch with him in the near future, but I do agree.
@@FairLight1337 Really no compression? I thought that was the only way to get so many games onto a single tape. Well, at least I learned something new today. :)
The programs were of course compressed, at least most of them. But the turbos mean storing the bits as shorter pulses on the tape. A most efficient (but fragile) way of storing data on the tape.
@@FairLight1337 Ok, so the extra compression with Turbo 250 lies in the actual storage method on the tape? That in itself would be an interesting topic for your next YT-video. ;)
Compression is reducing the size of the actual file. Fast loading on tape is putting the bits of the file on the tape at a higher density. It's not compression.
It's like setting the disk to use 40 tracks. It gives more space on the disk, but it doesn't change the files you store there.
I know it's not competitive... I know it would be a comical idea on the C64... but I really, really like BWT compression. It's such a weird algorithm.
BWT, yeah, and there are others too for RLE.
I don't know what that is. Please share.
The Burrows-Wheeler Transform isn't a compression algorithm in itself; it's a transformation that reorders data for better compression, similar to what Huffman encoding does in the LZH format.
@@Kobold666 yes. It sorts symbols by their context, so that symbols with similar contexts come close together. That usually leads to long runs of the same symbol. The surprising thing is that such sorting is reversible.
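A naive sketch of the forward transform, just to make the "sorting by context" point concrete (real implementations use suffix sorting instead of building every rotation):

```python
def bwt(data: bytes, sentinel: int = 0) -> bytes:
    """Sort all rotations of the input (with a unique end marker appended,
    assumed not to occur in the data) and take the last column."""
    s = data + bytes([sentinel])
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return bytes(rot[-1] for rot in rotations)

# Similar contexts end up adjacent after sorting, so the last column
# tends to contain long runs that RLE or Huffman can then exploit:
print(bwt(b"bananabananabanana"))
```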
Ok, still sceptical :)
When decoding the data, is that done on the fly while reading the data from the medium, or is it first loaded into RAM and then decompressed? If so, how can everything be in memory at the same time?
You mean self-extracting or levels? Self-extracting loads the compressed data and decompresses it in memory. Levelcrunch loads from disk and decompresses on the fly, so the compressed data is never loaded into memory - it comes in as a stream and ends up in decompressed form.
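A conceptual sketch of that streaming idea, using a made-up RLE-style format (not any real loader's protocol): the depacker consumes compressed bytes as they come off the drive and only the decompressed result ever sits in memory.

```python
def decrunch_stream(read_byte, write_byte, ctrl=0xFE):
    """read_byte() returns the next compressed byte from the loader
    (or None at end of stream); write_byte(b) stores one decompressed
    byte at the target address."""
    while True:
        b = read_byte()
        if b is None:                # end of stream
            return
        if b == ctrl:
            count = read_byte()
            value = read_byte()
            for _ in range(count):   # expand the run straight into RAM
                write_byte(value)
        else:
            write_byte(b)            # literal byte passes straight through
```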