FairLight TV

  • Published 1 Dec 2024

Comments • 74

  • @radwar1941
    @radwar1941 1 month ago +3

    Pontus, congratulations on this awesome video!!!

    • @FairLight1337
      @FairLight1337  1 month ago

      Humble thanks pal(s)!

    • @FairLight1337
      @FairLight1337  1 month ago

      @radwar1941 Do you have any insight on the Supercompactor? It's supposed to be by Flash. Anyone able to connect me to the coder?

    • @radwar1941
      @radwar1941 1 month ago +1

      AFAIK the original code was done by the „TH-Double Team“ aka „Thomas Borris“ and „Thomas Meidinger“ … it was published by the German „RUN Magazine“. The program was named „The Final Super-Compressor“.

    • @FairLight1337
      @FairLight1337  1 month ago

      Any contact with them?

    • @radwar1941
      @radwar1941 1 month ago

      I'll try to get in contact with 'em. Cheers, MWS

  • @Nightshft42
    @Nightshft42 2 months ago +2

    1) I learned how to use the Matcham time cruncher as a teenager in 1988, without internet or help from anyone. Was fascinated. Built a PAL->NTSC switch into the machine to gain maybe 10% speed for such stuff, but using it would crash the machine sometimes lol. Apart from that I remember the Matcham speed packer on the same disk (it's on CSDb). It was very much faster and now I wonder how they achieved that...
    2) I also thought that on the Amiga there's only lossless compression, until I recently learned how 4-bit delta compression of samples works. One of the rare examples of lossy compression (see the sketch after this comment). It has exactly a 50% compression ratio, as each byte becomes a nibble.
    3) Super episode, thanks!
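
    A minimal sketch of that 4-bit delta idea (my illustration, not from the video; the step size of 8 is an assumption): each 8-bit sample is stored as a quantized 4-bit difference from the previously decoded value, so two samples fit in one byte, which is exactly 50%, and it is lossy because the differences are quantized and clamped.

        # Sketch of lossy 4-bit delta compression for 8-bit samples.
        # Each sample is stored as a signed, scaled 4-bit delta from the
        # previously *decoded* value, so quantization errors don't accumulate.

        def encode_delta4(samples, step=8):
            nibbles, prev = [], 0
            for s in samples:
                d = max(-8, min(7, round((s - prev) / step)))  # quantize + clamp
                nibbles.append(d & 0x0F)
                prev = max(0, min(255, prev + d * step))       # mirror the decoder
            if len(nibbles) % 2:
                nibbles.append(0)
            # pack two 4-bit deltas per byte -> exactly half the input size
            return bytes((a << 4) | b for a, b in zip(nibbles[::2], nibbles[1::2]))

        def decode_delta4(data, step=8):
            out, prev = [], 0
            for byte in data:
                for n in ((byte >> 4) & 0x0F, byte & 0x0F):
                    d = n - 16 if n > 7 else n                 # sign-extend the nibble
                    prev = max(0, min(255, prev + d * step))
                    out.append(prev)
            return out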

    • @FairLight1337
      @FairLight1337  2 months ago

      Thanks for watching!

  • @jmp01a24
    @jmp01a24 2 months ago +1

    So nice that you found Matcham and got him to explain & answer questions. He was most def the cruncher king. Also, he didn't live too far from me. Remember 1001 was so proud of their cruncher, the card cruncher, which they even put onto cartridge and tried to sell to software houses (going to the PCW show in 1985 or 1986). But compared to Matcham's Time Cruncher, it came up short. Remember that "special" versions were made for the biggest groups at the time. So Matcham was respected by the best. I have an HTML file with the code for Time Cruncher, where I made all the jumps, JSRs and branches clickable so they jump to the correct addresses. It's impressively short.
    Galleon dived into the source code and came up with his Cruel Cruncher, one that had the 2MHz function for the C128. It shortened the crunching time by a factor of two.

    • @jmp01a24
      @jmp01a24 2 months ago

      Note to self: I should watch whole videos before commenting. 😅 Many of the things I wrote were in the video after the "interview". 😛

    • @FairLight1337
      @FairLight1337  2 months ago +1

      Ha ha ha :)
      There will be more in the interview next week!

  • @QEDTriangle3532
    @QEDTriangle3532 2 months ago +1

    Great explanation on picking which value to use for the packer control byte. WAY back in prehistoric times I encountered packers that didn't scan before packing, and either used a hardcoded value or let the user pick which one to use. That would either result in a very inefficient packer, or in the user having to go through a lot of trial and error to make sure the packed file wasn't unnecessarily large (a sketch of the scanning approach follows below).
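
    In its simplest form the scan just looks for the least frequent byte value; a minimal sketch (my illustration, not any specific packer's code):

        # Sketch: pick an RLE control byte by scanning the file first.
        # The least frequent byte value is the cheapest choice, since every
        # literal occurrence of the control value must be escaped in the
        # packed stream.

        from collections import Counter

        def least_frequent_byte(data):
            counts = Counter(data)
            # consider all 256 values; an unused value costs nothing to escape
            return min(range(256), key=lambda v: counts.get(v, 0))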

    • @FairLight1337
      @FairLight1337  2 months ago

      Glad you like it! (And for the rest of you, this is QED/Triangle, a good friend from ancient times).
      Without scanning you could still get lucky, if the value picked happened to be one that a scanner would also find. EBC was of course scanning, but I seem to remember that when I reassembled it recently, it had errors when scanning for values repeating more than 256 times.

    • @QEDTriangle3532
      @QEDTriangle3532 2 months ago

      @FairLight1337 I can't fully recall how EBC handled values repeating for more than one page (it was 33 years ago, after all), but seeing as I "remastered" EBC 1.9 to build in KickAssembler back in 2021, it's possible that I could go back and check.

    • @FairLight1337
      @FairLight1337  2 months ago

      I did the same (the version you did for me at the meeting on Öland).

  • @emanrovemanhcan9863
    @emanrovemanhcan9863 2 months ago +2

    I am only 5 minutes in, but the easy explanation of lossy and lossless compression is stunning.
    I would have needed many more words.
    But you got it right, in its simplicity.
    Impressed.

    • @FairLight1337
      @FairLight1337  2 months ago +1

      I generally feel I am babbling a bit. I'm glad you don't see it like that :)

  • @GerbenWijnja
    @GerbenWijnja 2 months ago +2

    Very interesting stuff, thanks for sharing.

    • @FairLight1337
      @FairLight1337  2 months ago

      Glad you like it!

  • @PeranMe
    @PeranMe 2 months ago +2

    There are a couple of cases of lossy compression being used on C64s and Amigas: when people store animations using PETSCII chars that are close, but not identical, to the 8x8 blocks of pixels (sketched after this comment), and I'm pretty sure those Blender 3D animations in the winning demo from Xenium 2024 used a limited number of redefined chars to store the animations lossily.
    And on the Amiga and Atari ST, ADPCM is used for sample playback, because it can be decompressed on the fly fast enough to be viable in demos.
    Great video as always! ❤
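
    That closest-char trick is essentially a tiny vector quantizer; a minimal sketch (my illustration, assuming a charset given as 8 row bytes per glyph):

        # Sketch: lossy PETSCII "compression" as vector quantization.
        # Each 8x8 pixel block (8 bytes, one per row) is replaced by the
        # index of the closest character, measured as the number of
        # differing pixels (Hamming distance over the row bytes).

        def popcount(x):
            return bin(x).count("1")

        def closest_char(block, charset):
            def distance(glyph):
                return sum(popcount(a ^ b) for a, b in zip(block, glyph))
            return min(range(len(charset)), key=lambda i: distance(charset[i]))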

    • @az09letters92
      @az09letters92 2 months ago +1

      Vector quantization is probably what you wanted to say. :-) It's used a lot in C64 productions.

    • @PeranMe
      @PeranMe 2 months ago

      @az09letters92 Aha, I didn't know it had a name, thank you!

    • @FairLight1337
      @FairLight1337  2 months ago

      Thanks!

  • @AnnatarTheMaia
    @AnnatarTheMaia 2 months ago +2

    Crunchers are such a fascinating subject. I started with Time Crunch 5.0, spending many a night crunching, but once Galleon of Oneway released Fast Cruel 2.5+, it was game over: it beat everyone on size. I didn't mind trading time for size because I wasn't competing against anyone and I was trying to save as much disk space as possible: I had no cartridge, size affected loading times, and I didn't have any money to buy more floppies when I was starting out. Disk space was therefore always at a premium: the smaller the files, the more floppies could be freed, and every floppy disk was precious. Back in those days, one could only buy floppies outside of the country, so saving as much disk space as possible was no joke.

    • @FairLight1337
      @FairLight1337  2 months ago +1

      My first disk was €6 - for one! So space was indeed precious. Thanks for the post!

    • @ethicalcompanies
      @ethicalcompanies 2 months ago

      Don't recall Fast Cruel; the last C64 packer I used was probably AB Packer.

    • @FairLight1337
      @FairLight1337  2 months ago

      Agreed. That was indeed what I used as well before doing everything on PC.

  • @geehaf
    @geehaf 28 days ago

    Great video. Matcham! Back in the 80s, I hand-wrote a letter to him asking if he could help me convert the C64 Time Cruncher to the Amiga, and he replied and provided a complete 68000 source listing for Time Cruncher V4. Excellent product for the time.

    • @FairLight1337
      @FairLight1337  28 days ago +1

      He is really a humble and great guy. Truly likable!

  • @marcteufel8348
    @marcteufel8348 2 months ago +2

    Great content, I love it!!!! A lot of your stuff helped me get back into C64 programming. Soon I will release a new game that your content helped with.

  • @AnnatarTheMaia
    @AnnatarTheMaia 2 months ago +4

    I built and packaged Exomizer for Solaris 10 on i86pc and sparc platforms. Works like a charm, but lacks a good manual page to go along with it. I wonder who would win: Exomizer or Fast Cruel 2.5+.

    • @FairLight1337
      @FairLight1337  2 months ago +1

      @AnnatarTheMaia Exomizer, I would imagine. I did some tests using it as both a packer and a cruncher a few years ago and it beat everything I put it against, although it did have longer depack times.
      But as it's also packing via a PC, it takes seconds, not all night.
      /Pitcher

  • @PSL1969
    @PSL1969 2 months ago +2

    I love compression (especially on the C64). Made a Huffman encoder just for fun. Looked into the package-merge algorithm as well. RLE too, of course. Tried some optimizations (pre-formatting the string, encoding of the byte count, etc.; these can be done in smart ways, one of which is sketched below. I love this kind of stuff 😂)
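
    One common smart byte-count trick, as a minimal sketch (my illustration, not PSL1969's encoder; the control byte 0xFE is an assumption): only encode runs of 3 or more, and bias the stored count by 3 so one count byte covers run lengths 3..258.

        # Sketch: RLE with a biased run count. Runs shorter than 3 stay
        # literal (a run tag costs 3 bytes), and storing count-3 lets a
        # single byte describe runs of 3..258.

        CONTROL = 0xFE  # assumed; a real packer scans for the best value

        def rle_pack(data):
            out, i = bytearray(), 0
            while i < len(data):
                run = 1
                while i + run < len(data) and data[i + run] == data[i] and run < 258:
                    run += 1
                if run >= 3:
                    out += bytes([CONTROL, run - 3, data[i]])  # tag, count, value
                    i += run
                else:
                    out.append(data[i])  # assumes CONTROL never occurs as a literal
                    i += 1
            return bytes(out)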

    • @FairLight1337
      @FairLight1337  2 months ago +1

      I do too, as I'm sure you can tell :)

    • @PSL1969
      @PSL1969 2 months ago

      @FairLight1337 Indeed! And even though it is inefficient as hell, it's still fun 😎

    • @FairLight1337
      @FairLight1337  2 months ago

      :)

  • @QEDTriangle3532
    @QEDTriangle3532 2 months ago

    For "option 2" packers (i.e. control byte + number of occurrences + value), the packer should pick its control byte based only on occurrences of each of the 256 possible values that do not occur on their own or in a 2-byte sequence, as those are the only occurrences that increase the size of the compressed data (sketched below).
    At least, that's how I did it from EBC V1.6 onward, back in 1991. :)
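
    My reading of that rule as a sketch (an illustration, not the EBC source): runs of 3+ get re-encoded anyway, so only a value's literal occurrences, the ones in runs shorter than 3, cost anything when that value is chosen as the control byte.

        # Sketch: pick the control byte counting only literal occurrences.
        # Bytes inside runs of length >= 3 are replaced by tag+count+value
        # regardless of their value, so they never need escaping.

        def pick_control_byte(data):
            cost = [0] * 256
            i = 0
            while i < len(data):
                run = 1
                while i + run < len(data) and data[i + run] == data[i]:
                    run += 1
                if run < 3:               # emitted as literals
                    cost[data[i]] += run  # escaping these would grow the output
                i += run
            return min(range(256), key=lambda v: cost[v])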

  • @NorthWay_no
    @NorthWay_no 2 months ago +1

    Huffman/arithmetic are best applied _after_ the LZ stage unless you use something like LHa that integrates Huffman in its algorithm.

    • @FairLight1337
      @FairLight1337  2 months ago +1

      LZ breaks the byte boundaries and the result is totally random. I'd be surprised if Huffman worked on that.

    • @NorthWay_no
      @NorthWay_no 2 months ago +1

      @FairLight1337 That's an implementation detail: you can bunch up your bits in batches of 8, and then your literals will all be naturally byte-aligned, giving Huffman a chance to find some statistics to chew on.

    • @FairLight1337
      @FairLight1337  2 months ago

      But then your LZ would not work at its best. Is the total really better?

    • @NorthWay_no
      @NorthWay_no 2 months ago

      @FairLight1337 It should just be a different ordering of your compressed bits, making them more Huffman-friendly. I believe the bit-grouping approach was popular in the later LZSS and similar variants; it makes outputting literals much faster, as it is a plain copy with no bit-shifting needed.
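
      A minimal sketch of that bit-grouping idea (my illustration; the byte-sized offsets and lengths are assumptions): the match/literal flags are collected 8 to a byte, so every literal lands byte-aligned in the output, and a later Huffman pass over the literals still sees whole input bytes.

          # Sketch: LZSS with grouped flag bytes. One flags byte precedes
          # each group of up to 8 items; flag bit 1 = (offset, length)
          # match, flag bit 0 = a byte-aligned literal.

          def lzss_pack(data, window=255, min_match=3, max_match=18):
              out, i = bytearray(), 0
              while i < len(data):
                  flags, flag_pos, group = 0, 0, bytearray()
                  while flag_pos < 8 and i < len(data):
                      best_len, best_off = 0, 0
                      for j in range(max(0, i - window), i):  # naive search
                          k = 0
                          while (k < max_match and i + k < len(data)
                                 and data[j + k] == data[i + k]):
                              k += 1
                          if k > best_len:
                              best_len, best_off = k, i - j
                      if best_len >= min_match:
                          flags |= 1 << flag_pos
                          group += bytes([best_off, best_len])
                          i += best_len
                      else:
                          group.append(data[i])  # literal stays byte-aligned
                          i += 1
                      flag_pos += 1
                  out.append(flags)
                  out += group
              return bytes(out)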

    • @FairLight1337
      @FairLight1337  2 months ago

      Ok. I guess I need to see this in practice to see if there are efficiency gains. Without having done any statistics on real data, I can't see how this makes sense, but it could be that I don't understand the brilliance here.

  • @MarkusBurgstaller1508
    @MarkusBurgstaller1508 2 months ago +1

    You should give Dali a spin. It's the ZX0 implementation for the C64. But don't quote me on the tech stuff.
    It's used in Bitfire, packs fast, and decompresses quickly as well.

    • @FairLight1337
      @FairLight1337  2 months ago

      Interesting. What are the drawbacks? Footprint of the depacker?

    • @MarkusBurgstaller1508
      @MarkusBurgstaller1508 2 months ago

      @FairLight1337 As far as I understood on CSDb it's way faster than Exomizer, especially when depacking. Tbh I don't know if the pack ratio is close to Exomizer's.
      You should try it out! :)

    • @FairLight1337
      @FairLight1337  2 months ago

      The CSDb review looks promising. Thanks for the mention.

    • @JimLeonard
      @JimLeonard 2 months ago

      @FairLight1337 There are no drawbacks; ZX0 has the same ratio and depacks at only about 1/4th the speed of a memcpy. The only other choices are LZ4 (for maximum depack speed) or something that goes for max compression like Shrinkler or ukpr.

    • @FairLight1337
      @FairLight1337  2 months ago +1

      Dali looks promising. I need to have a look at the size of the depacker as well. When retrofitting games with level crunching, you tend to have very little memory left, so the zero-page usage and memory footprint of the depacker are key.

  • @mrw104
    @mrw104 2 months ago

    The first thing I think of when talking compression on the C64 is Turbo 250 by MrZ. If you could arrange an interview with him, then woohoo! :)

    • @FairLight1337
      @FairLight1337  2 months ago

      That one loads faster. No compression involved. I should have lunch with him in the near future, but I do agree.

    • @mrw104
      @mrw104 2 months ago

      @FairLight1337 Really, no compression? I thought that was the only way to get so many games onto a single tape. Well, at least I learned something new today. :)

    • @FairLight1337
      @FairLight1337  2 months ago

      The programs were of course compressed, at least most of them. But the turbos mean storing the bits as shorter pulses on the tape. A most efficient (but fragile) way of storing data on tape.

    • @mrw104
      @mrw104 2 months ago

      @FairLight1337 Ok, so the extra compression with Turbo 250 lies in the actual storage method on the tape? That in itself would be an interesting topic for your next YT video. ;)

    • @FairLight1337
      @FairLight1337  2 months ago +1

      Compression is reducing the size of the actual file. Fast loading on tape is storing the bits of the file at a higher density. It's not compression.
      It's like setting the disk to use 40 tracks: it gives more space on the disk, but it doesn't change the files you store there.

  • @Mnnvint
    @Mnnvint 2 months ago +1

    I know it's not competitive... I know it would be a comical idea on the C64... but I really, really like BWT compression. It's such a weird algorithm.

    • @PSL1969
      @PSL1969 2 months ago

      BWT, yeah. There are others too for RLE.

    • @FairLight1337
      @FairLight1337  2 months ago

      I don't know what that is. Please share.

    • @Kobold666
      @Kobold666 2 months ago

      The Burrows-Wheeler transform isn't a compression algorithm in itself; it's a transformation that reorders data for better compression, similar to what Huffman encoding does in the LZH format.

    • @Mnnvint
      @Mnnvint 2 months ago

      @Kobold666 Yes. It sorts symbols by their context, so that symbols with similar contexts come close together. That usually leads to long runs of the same symbol. The surprising thing is that such sorting is reversible.
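
      A naive sketch of both directions (my illustration; real implementations use suffix arrays instead of materializing every rotation):

          # Sketch: Burrows-Wheeler transform. Sorting all rotations groups
          # symbols by context, so the last column tends to form long runs,
          # and the whole thing is reversible.

          def bwt(s):
              s = s + "\0"  # unique end marker
              rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
              return "".join(row[-1] for row in rotations)

          def inverse_bwt(last):
              table = [""] * len(last)
              for _ in range(len(last)):  # repeatedly prepend and re-sort
                  table = sorted(last[i] + table[i] for i in range(len(last)))
              row = next(r for r in table if r.endswith("\0"))
              return row[:-1]

          # e.g. inverse_bwt(bwt("bananabanana")) == "bananabanana"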

    • @FairLight1337
      @FairLight1337  2 months ago

      Ok, still sceptical :)

  • @AndrewTSq
    @AndrewTSq 2 months ago

    When decoding the data, is that done on the fly while reading it from the medium, or is it first loaded into RAM and then decompressed? If so, how can everything be in memory at the same time?

    • @FairLight1337
      @FairLight1337  2 months ago +1

      You mean self-extracting or levels? Self-extracting loads the compressed data and decompresses it in memory. Level crunch loads from disk and decompresses on the fly, so the compressed data is never loaded into memory: it comes in as a stream and ends up in decompressed form.
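
      A minimal sketch of that streaming idea (my illustration, reusing the assumed RLE format from the sketch further up, with control byte 0xFE and the count stored as run-3): compressed bytes are consumed straight from the input, and only the decompressed result is placed in memory.

          import io

          CONTROL = 0xFE  # must match the packer's assumed control byte

          def rle_unpack_stream(stream, write):
              # stream: file-like source; write: callback placing bytes in "RAM"
              while (b := stream.read(1)):
                  if b[0] == CONTROL:
                      count = stream.read(1)[0] + 3  # biased run count
                      value = stream.read(1)
                      write(value * count)
                  else:
                      write(b)  # literal passes straight through

          # usage: ram = bytearray(); rle_unpack_stream(io.BytesIO(packed), ram.extend)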