Comments •

  • @janderogee 2 years ago +3

    And again, you blew my mind, this is awesome stuff. I love the way you keep on exploring and pushing the limits. Thanks for showing us.

  • @_Hawk78_ 2 years ago +2

    Great result! Can't wait for more… 👍😀

  • @LordmonkeyTRM 2 years ago +2

    Damn, that's freaking amazing, from 8 years ago to now.

  • @jbevren 2 years ago +1

    Impressive as always, thealgorithm!

  • @cathedrow 2 years ago +3

    Watching it quickly before it gets content-matched to oblivion! Insane quality for the bitrate. Is there going to be a full write-up of the method? I’d love to learn more.

    • @thealgorithm 2 years ago +1

      Thanks. There will be a write-up, but only after my multi-part C64 demo is complete (which also features the method used in this video).

    • @cathedrow 2 years ago

      @thealgorithm Good luck with the demo. I’ve been playing with Ferris’ Pulsejet codec, but this sounds better on so many levels, and there’s no DCT in the decoder either. Looking forward to porting it to x86 when the docs emerge.

    • @thealgorithm 2 years ago +1

      @cathedrow Thanks. The actual decoder is very, very light. It's just offsets into short waveform tables and then into the desired volume table per layer, which are mixed together. I have managed to get 8 layers running on the C64 without using precalculated volume and waveform tables. The encoder specifies which waveform to use per 6-40 ms segment per layer, along with phase, frequency and amplitude. In comparison tests it can generate similar quality to, say, MDCT-based methods using only a quarter of the data or less.
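
A rough sketch of the kind of decoder described above, not thealgorithm's actual C64 code: per layer and per short segment, a waveform id, phase, frequency and amplitude select entries in small wavetables, and the layers are summed. The sample rate, table length, waveform subset and parameter layout here are assumptions made purely for illustration.

```python
import math

SAMPLE_RATE = 8000   # assumed output rate for this sketch
TABLE_LEN = 256      # assumed length of each single-cycle waveform table

# Assumed subset of predefined single-cycle waveform tables, indexed by a small id.
WAVETABLES = [
    [math.sin(2 * math.pi * i / TABLE_LEN) for i in range(TABLE_LEN)],   # sine
    [1.0 if i < TABLE_LEN // 2 else -1.0 for i in range(TABLE_LEN)],     # pulse 50%
    [2.0 * i / TABLE_LEN - 1.0 for i in range(TABLE_LEN)],               # sawtooth
    [1.0 - abs(4.0 * i / TABLE_LEN - 2.0) for i in range(TABLE_LEN)],    # triangle
]

def decode_segment(layers, duration_ms):
    """Mix one 6-40 ms segment from per-layer (waveform id, phase, freq, amp) tuples."""
    n = int(SAMPLE_RATE * duration_ms / 1000)
    out = [0.0] * n
    for wave_id, phase, freq_hz, amp in layers:
        table = WAVETABLES[wave_id]
        step = freq_hz * TABLE_LEN / SAMPLE_RATE   # table increment per output sample
        pos = phase * TABLE_LEN                    # phase expressed as 0..1 in this sketch
        for i in range(n):
            out[i] += amp * table[int(pos) % TABLE_LEN]   # table lookup scaled by amplitude
            pos += step
    return out

# Example: eight layers mixed into one 20 ms segment.
segment = decode_segment([(0, 0.0, 220.0, 0.2), (2, 0.25, 440.0, 0.1)] * 4, 20)
```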

  • @fcycles 2 years ago +1

    Great results! Audio quality is better than some other techniques... I wonder if you also have a way to re-use parts of the data when the audio repeats within the song? And as a sub-question... could one of the layers contain the repetition and be treated separately? (I am assuming that currently it's an encoding/decoding scheme that always requires new data to be fed in.)

    • @thealgorithm 2 years ago +2

      Yes, certainly. Repeated segments can be reused and encoded separately. The separate layers can also be treated separately, but that would not be of much use, as the method relies heavily on the current layers of a segment to build up the audio. The Human League example took repeating sections into account as well, hence it was able to fit the entire song into 50k.
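
A hedged sketch of the repeated-section idea mentioned above, not the actual encoder: segments that have already been stored are emitted once, and later occurrences only reference them by index. The function and variable names are illustrative only.

```python
def pack_with_repeats(segments):
    """Store each distinct encoded segment once; repeats become back-references."""
    table, stream, seen = [], [], {}
    for seg in segments:          # `segments` holds hashable encoded segments
        if seg in seen:
            stream.append(("ref", seen[seg]))    # repeated section: just a reference
        else:
            seen[seg] = len(table)
            table.append(seg)                    # new section: store its data once
            stream.append(("new", seen[seg]))
    return table, stream

# Example: a chorus that returns later costs only a small reference.
table, stream = pack_with_repeats(["verse1", "chorus", "verse2", "chorus"])
# table  -> ['verse1', 'chorus', 'verse2']
# stream -> [('new', 0), ('new', 1), ('new', 2), ('ref', 1)]
```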

    • @FolkerHQ 2 years ago

      @thealgorithm I am wondering whether, with that technique, it would be possible to enhance Amiga modules and also have singing with MODs, or whether it would make more sense to use other compression such as OGG or MO3 (a combination of OGG and MOD)? 50k for an entire song seems like compression in the range of Speex or Lyra. I also wonder if it could be used on other hardware like the NES, SNES, Mega Drive, SEGA Master System and so on. And how much of a performance cost does the encoder have? Can we try it somewhere?

  • @fcycles 2 years ago +1

    The layers… sound like what gets processed in our brain… at least that's the closest comparison I can make after a head trauma.. :( it sounded noisy but carried waveforms…

  • @fcycles 2 years ago

    2400 baud... = 300 bytes/sec... Does that mean we could get digital music streamed from a BBS? (rough numbers sketched below)

    • @fcycles 2 years ago

      On a 2 MHz... C128?
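
A rough check of the arithmetic in the question above. The serial framing and the song length are assumptions for the sketch; only the ~50k figure comes from the thread.

```python
BAUD = 2400                          # line rate in bits per second
RAW_BYTES_PER_SEC = BAUD // 8        # 300 B/s with no framing overhead
FRAMED_BYTES_PER_SEC = BAUD // 10    # 240 B/s with 8N1 framing (start + 8 data + stop bits)

song_bytes = 50 * 1024               # the ~50k Human League example mentioned above
song_seconds = 3.5 * 60              # assumed song length of about 3.5 minutes

needed_bytes_per_sec = song_bytes / song_seconds   # ~244 B/s
print(needed_bytes_per_sec, RAW_BYTES_PER_SEC, FRAMED_BYTES_PER_SEC)
```

Under those assumptions the stream is roughly within a 2400 baud budget, with little margin left once serial framing is counted.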

  • @NickFellows 2 years ago

    How resource-intensive is it?

    • @thealgorithm 2 years ago +2

      For decoding it's extremely light: it is able to decode 8 layers and play back on a 1 MHz C64. Encoding takes a long time, however; I'm working on reducing the encode times.

    • @Cmdrbzrd 2 years ago

      @thealgorithm Could you open-source this compression algorithm? It could also be useful for homebrew on more powerful old hardware.

  • @NickFellows 2 years ago

    Could the mapping of SID voices be done with AI?

    • @NickFellows 2 years ago

      Not needed for playback, but it could be used to find the optimal configurations.

    • @thealgorithm 2 years ago +3

      @NickFellows This method does not use SID waveforms at all. Everything is digitally mixed using 8 predefined waveforms.

    • @fcycles 2 years ago

      @thealgorithm It would be interesting to see the 8 predefined waveforms... I guess they are defined during the encoding process? If so, do you see a set of rules that seems to emerge from the audio-source type?

    • @thealgorithm 2 years ago

      @fcycles At the moment I use waveforms predefined before the encoding process (the waveforms are not created based on the source audio). These are Triangle, Pulse, Sawtooth, Noise, Pulse25, Pulse75, Reverse Sawtooth and Sine.
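
For reference, a small sketch that builds single-cycle tables for those eight named waveforms. The table length and value range are assumptions, not the actual C64 tables.

```python
import math
import random

N = 256  # assumed samples per single-cycle table

def pulse(duty):
    """Rectangular wave with the given duty cycle."""
    return [1.0 if i / N < duty else -1.0 for i in range(N)]

WAVEFORMS = {
    "triangle":         [1.0 - abs(4.0 * i / N - 2.0) for i in range(N)],
    "pulse":            pulse(0.50),
    "sawtooth":         [2.0 * i / N - 1.0 for i in range(N)],
    "noise":            [random.uniform(-1.0, 1.0) for _ in range(N)],
    "pulse25":          pulse(0.25),
    "pulse75":          pulse(0.75),
    "reverse_sawtooth": [1.0 - 2.0 * i / N for i in range(N)],
    "sine":             [math.sin(2 * math.pi * i / N) for i in range(N)],
}
```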

  • @FolkerHQ 2 years ago

    Is there any chance we could try that out? Would be nice to get hands on that fancy encoder. Kind regards.