BLAKE3 and bao deep dive

  • Published on Feb 8, 2025

Comments • 8

  • @daniel2color
    @daniel2color 1 year ago +3

    Incredibly well explained! Thanks, Rüdiger 💡🙏
    Compared to the chunking process of a typical UnixFS file, this seems much more elegant and efficient.
    Things I particularly liked:
    - Keeping the outboard encoding of the Merkle tree around as a separate file, which takes less space than UnixFS in a .CAR file
    - Being able to tune the size of the Merkle tree with chunk groups, trading off extra computation for a smaller tree
    - Streaming verification!
    Looking forward to learning more about how it's integrated into Iroh.
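A minimal toy sketch of the outboard idea mentioned in the comment above, under stated assumptions: it uses plain blake3::hash over each chunk and over concatenated child hashes as a stand-in for BLAKE3's real chaining-value computation (which keys chunks by their index and uses a dedicated parent function), so this is not the actual bao format — only an illustration of keeping the tree's parent nodes (64 bytes each) in a separate buffer while the original data stays untouched.

```rust
// Toy sketch of the "outboard" idea: the original file stays byte-for-byte
// untouched, and only the Merkle tree's parent nodes are stored separately.
// NOT the real bao format; plain blake3::hash is a stand-in for BLAKE3's
// internal chaining values.

const CHUNK_SIZE: usize = 1024; // BLAKE3 chunk size

/// Hash every chunk, then pair hashes up level by level, collecting each
/// parent node (left hash || right hash) into the outboard buffer.
/// Assumes non-empty input.
fn toy_outboard(data: &[u8]) -> (Vec<u8>, [u8; 32]) {
    let mut level: Vec<[u8; 32]> = data
        .chunks(CHUNK_SIZE)
        .map(|c| *blake3::hash(c).as_bytes())
        .collect();
    let mut outboard = Vec::new();
    while level.len() > 1 {
        let mut next = Vec::new();
        for pair in level.chunks(2) {
            if let [left, right] = pair {
                // Each parent node contributes 64 bytes to the outboard data.
                outboard.extend_from_slice(left);
                outboard.extend_from_slice(right);
                let mut parent = blake3::Hasher::new();
                parent.update(left);
                parent.update(right);
                next.push(*parent.finalize().as_bytes());
            } else {
                next.push(pair[0]); // odd node is promoted unchanged
            }
        }
        level = next;
    }
    (outboard, level[0])
}

fn main() {
    let data = vec![0u8; 16 * CHUNK_SIZE]; // 16 KiB of example data
    let (outboard, root) = toy_outboard(&data);
    println!("outboard bytes: {}", outboard.len());
    println!("root: {:02x?}", root);
}
```

For 16 KiB of data at the 1 KiB chunk size there are 15 parent nodes, so the outboard data here is 960 bytes sitting next to the unmodified 16 KiB file; chunk groups shrink it further by hashing several chunks per leaf.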

  • @oconnor663
    @oconnor663 1 year ago +4

    Fabulous talk! It makes me so happy to see Bao getting some real world use :)

    • @n0computer
      @n0computer 9 months ago +2

      it is soooo good. Thank you for your work on both bao AND BLAKE3!

  • @kickeddroid
    @kickeddroid 1 year ago

    Wonderfully explained, great job!

  • @edbertkwesi4931
    @edbertkwesi4931 1 year ago +1

    when are you guys coming back? miss your youtube reviews and meetings

  • @headshock1111
    @headshock1111 1 year ago +2

    based and tree-pilled

  • @ShawnMorel
    @ShawnMorel 10 months ago +1

    fantastic presentation. At 23 mins, I think the tradeoff of chunk groups isn't well explained. If the point of verified streaming is to verify the content, you'd be re-computing the chunk hashes regardless. The tradeoff seems to be that with chunk groups you need to wait to receive n chunks before you can verify they're correct, as opposed to being able to verify each 1024-byte chunk as it arrives.

    • @markg5891
      @markg5891 9 months ago

      +1 to this comment! I noticed that too. Chunk grouping + streaming is only "free" (as in fast to compute) if you have all the chunks in a given group. In some streaming situations (for example, just downloading a whole file) this might be perfectly sensible. However, for seeking in a file, like in a movie, you'd need all the chunks within a group before you can verify any of them. So grouping adds some bandwidth overhead here, where "some" grows with the group size. Tradeoffs, I suppose ;)
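To make the verification-granularity point from this thread concrete, here is a hedged sketch. GroupVerifier is a hypothetical helper written for illustration (not a bao or iroh API), and plain blake3::hash of each group stands in for the real tree hashes. With chunks_per_group = 1, every 1 KiB chunk can be verified as it arrives; with chunks_per_group = 16, nothing can be released to the application until a full 16 KiB group has been buffered and checked.

```rust
// Toy illustration of verification granularity with chunk groups: the
// receiver must buffer a full group before anything in it can be verified
// and handed to the application. blake3::hash of each group is a stand-in
// for the real BLAKE3/bao tree hashes.

const CHUNK_SIZE: usize = 1024;

struct GroupVerifier {
    group_bytes: usize,          // CHUNK_SIZE * chunks_per_group
    expected: Vec<blake3::Hash>, // one hash per group, assumed known from the tree
    buf: Vec<u8>,
    next_group: usize,
}

impl GroupVerifier {
    fn new(chunks_per_group: usize, expected: Vec<blake3::Hash>) -> Self {
        Self {
            group_bytes: CHUNK_SIZE * chunks_per_group,
            expected,
            buf: Vec::new(),
            next_group: 0,
        }
    }

    /// Feed bytes as they arrive. Returns verified data, which only appears
    /// once a whole group has been buffered and checked. (A real verifier
    /// would also handle a final partial group at end of stream.)
    fn push(&mut self, incoming: &[u8]) -> Result<Vec<u8>, &'static str> {
        self.buf.extend_from_slice(incoming);
        let mut released = Vec::new();
        while self.buf.len() >= self.group_bytes {
            let group: Vec<u8> = self.buf.drain(..self.group_bytes).collect();
            if blake3::hash(&group) != self.expected[self.next_group] {
                return Err("group hash mismatch");
            }
            self.next_group += 1;
            released.extend_from_slice(&group);
        }
        Ok(released) // anything still in self.buf is unverified so far
    }
}

fn main() {
    let data = vec![7u8; 16 * CHUNK_SIZE];
    let chunks_per_group = 16; // e.g. 16 KiB groups
    let expected: Vec<_> = data
        .chunks(CHUNK_SIZE * chunks_per_group)
        .map(blake3::hash)
        .collect();

    let mut v = GroupVerifier::new(chunks_per_group, expected);
    // Receiving one chunk at a time: nothing is released until chunk 16 lands.
    for chunk in data.chunks(CHUNK_SIZE) {
        let verified = v.push(chunk).unwrap();
        println!("received 1 KiB, verified {} bytes", verified.len());
    }
}
```

This also shows the seeking cost mentioned above: to read any byte inside a group you have to fetch and hash the whole group, so larger groups mean a smaller tree but coarser verification and more over-fetch on random access.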