How does the database guarantee reliability using write-ahead logging?

  • Published on Jun 11, 2024
  • System Design for SDE-2 and above: arpitbhayani.me/masterclass
    System Design for Beginners: arpitbhayani.me/sys-design
    Redis Internals: arpitbhayani.me/redis
    Build Your Own Redis / DNS / BitTorrent / SQLite - with CodeCrafters.
    Sign up and get 40% off - app.codecrafters.io/join?via=...
    In this video, I delved into the concept of Write-Ahead Logging (WAL) for ensuring data integrity in databases. I explained how WAL guarantees reliability by logging changes before the modified data pages are flushed to disk, which also enhances database performance. By logging updates sequentially in a separate file, WAL minimizes random disk writes and enables point-in-time recovery. I highlighted the importance of CRC checks in protecting data integrity within the log file. WAL's structure includes segments and pages, with each entry carrying a unique Log Sequence Number (LSN) for easy reference and recovery.
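    For a concrete picture, here is a minimal sketch of a WAL append, assuming an illustrative record layout of LSN | CRC32 | length | payload; this is not any particular database's on-disk format:

    ```python
    import os
    import struct
    import zlib

    class WriteAheadLog:
        def __init__(self, path):
            self.file = open(path, "ab")
            self.lsn = 0  # Log Sequence Number, monotonically increasing

        def append(self, payload: bytes) -> int:
            self.lsn += 1
            crc = zlib.crc32(payload)  # CRC protects the entry against corruption
            header = struct.pack(">QII", self.lsn, crc, len(payload))
            self.file.write(header + payload)
            self.file.flush()
            os.fsync(self.file.fileno())  # durable before the write is acknowledged
            return self.lsn

    wal = WriteAheadLog("db.wal")
    wal.append(b"UPDATE accounts SET balance = 90 WHERE id = 1")
    ```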
    Recommended videos and playlists
    If you liked this video, you will find the following videos and playlists helpful
    System Design: • PostgreSQL connection ...
    Designing Microservices: • Advantages of adopting...
    Database Engineering: • How nested loop, hash,...
    Concurrency In-depth: • How to write efficient...
    Research paper dissections: • The Google File System...
    Outage Dissections: • Dissecting GitHub Outa...
    Hash Table Internals: • Internal Structure of ...
    Bittorrent Internals: • Introduction to BitTor...
    Things you will find amusing
    Knowledge Base: arpitbhayani.me/knowledge-base
    Bookshelf: arpitbhayani.me/bookshelf
    Papershelf: arpitbhayani.me/papershelf
    Other socials
    I keep writing and sharing my practical experience and learnings every day, so if you resonate then follow along. I keep it no fluff.
    LinkedIn: / arpitbhayani
    Twitter: / arpit_bhayani
    Weekly Newsletter: arpit.substack.com
    Thank you for watching and supporting! It means a ton.
    I am on a mission to bring out the best engineering stories from around the world and make you all fall in love with engineering. If you resonate with this then follow along, I always keep it no-fluff.
  • Science & Technology

Comments • 27

  • @jaskiratwalia
    @jaskiratwalia 4 months ago +1

    Amazing content as always. The clarity you have on such important concepts is amazing!

  • @jayesha.6194
    @jayesha.6194 2 years ago +1

    Thanks Arpit for these videos.

  • @atlicervantes6187
    @atlicervantes6187 1 year ago +1

    Excellent explanation!

  • @sanjitselvan5348
    @sanjitselvan5348 1 year ago

    Nice explanation! I learnt a lot. Thank you!

  • @lakshaysharma8144
    @lakshaysharma8144 1 year ago +11

    Netflix for developers.

  • @ziyinyou938
    @ziyinyou938 2 months ago

    This is just AWESOME

  • @viren24
    @viren24 1 year ago

    Awesome content. I am loving it, Arpit. It is also useful in the case of a master-slave configuration.

  • @ShubhamKumar-fi1kp
    @ShubhamKumar-fi1kp 2 years ago

    Hi Arpit Bhaiya, I am not able to understand what happens when the DB has to update a million rows. Can you please give a brief about that?

  • @AshwaniSharma0207
    @AshwaniSharma0207 1 year ago

    In the WAL file, if we don't keep the actual data with the Insert/Update commands, how will we get the data into a fresh new DB by applying the WAL file?

  • @pranjalagnihotri6072
    @pranjalagnihotri6072 2 years ago +6

    Hi Arpit, totally loving these videos.
    I had a few questions on WAL:
    1- Suppose we configure the flush frequency as 2 seconds, and an update operation happens which gets appended to the log file but is not yet flushed to disk. If another process tries to read the same row in that window, it will get the stale data, right?
    2- When we append a log record to the log file, we get some kind of acknowledgement saying the log was successfully written, right? What happens if the process crashes and the acknowledgement is not sent? Will it write the same log twice (due to a retry)? If so, how do we ensure that duplicates are discarded while flushing logs to disk?

    • @AsliEngineering
      @AsliEngineering 2 years ago +9

      1. The updates are first made to the disk blocks the database fetched into memory, because an update might fail due to some constraint, so writing it to the log file without checking whether it is even possible would be futile. Hence the flow is: fetch the disk blocks to be updated into memory, update the blocks in memory, write to the WAL, then flush to the disk.
      My bad: I should have explained this in the video.
      2. To be honest, I am not sure. But my guess is that because writing to a log file is not a network call, it is not unpredictable. Hence you will always get an ACK about writing it to the log file, so there is no need for a retry. Again, my best guess.
      It would be awesome if we could deep dive into it sometime. If you find any resource, let me know.
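      A minimal sketch of that flow, where buffer_pool, wal, and disk are hypothetical stand-ins rather than any real database's API:

      ```python
      # Hypothetical collaborators: buffer_pool caches pages in memory,
      # wal appends durable log records, disk persists whole pages.
      def update_row(buffer_pool, wal, disk, page_id, row_id, value):
          page = buffer_pool.fetch(page_id)   # 1. fetch the disk blocks into memory
          page.update(row_id, value)          # 2. update in memory; a constraint
                                              #    violation would surface here
          record = f"page={page_id} row={row_id} val={value}".encode()
          wal.append(record)                  # 3. write the change to the WAL
          disk.flush_page(page)               # 4. flush the dirty page (often
                                              #    deferred and batched)
      ```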

    • @AbhishekYadav-fb3uh
      @AbhishekYadav-fb3uh 1 year ago

      Ans 2)
      In order to ensure that duplicate log records are not written to the log file during a retry, most databases use a technique called "idempotent logging". In idempotent logging, each log record is assigned a unique identifier, and the database uses this identifier to check if the log record has already been written to the log file. If the log record has already been written, the database simply ignores the duplicate record and does not write it again.
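      A minimal sketch of that idempotent-logging idea, purely illustrative:

      ```python
      # Each record carries a unique id; a retried write of an
      # already-logged record is detected and ignored.
      class IdempotentLog:
          def __init__(self):
              self.seen_ids = set()
              self.records = []

          def write(self, record_id, payload):
              if record_id in self.seen_ids:
                  return False  # duplicate retry: already logged, ignore
              self.records.append((record_id, payload))
              self.seen_ids.add(record_id)
              return True
      ```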

  • @vinitsunita
    @vinitsunita 1 year ago

    Writing to the WAL means we are persisting changes to disk, which could also be a slow operation. What is the way to make it faster?

  • @abhinavsingh4221
    @abhinavsingh4221 1 year ago +1

    Nice video! Had one question. You mentioned that committed data is not straight away put into the database's memory; instead it is appended to the WAL file, and then these changes are asynchronously applied to memory. But suppose I write some data in a commit and that query is appended to the WAL but not yet written to memory, and in the meantime I read from the database. Will I get the old data? How is this situation handled?

    • @vishalbhopal
      @vishalbhopal 1 year ago

      I think the write operation will happen in main memory, which is fast. It will be written to disk asynchronously.

  • @akashagarwal6390
    @akashagarwal6390 1 year ago

    What if a dirty read is made to the same data being written concurrently? How do we deal with that?

  • @dipankarkumarsingh
    @dipankarkumarsingh 1 year ago +1

    ✅ finished .... 👌.... ❤

  • @pratprop
    @pratprop 7 months ago

    Hi Arpit, great explanation of WAL. Do you know what would be a great follow-up to this? ARIES for database recovery.

    • @AsliEngineering
      @AsliEngineering 7 months ago

      That's an excellent topic. Thanks for suggesting.

  • @VikramKumar-lp7wv
    @VikramKumar-lp7wv 1 year ago

    Hi Arpit, great explanation man 🙌
    Just one question: you said that while adding a new entry to the WAL file, we generate a CRC, first add that to the WAL file, and then store the corresponding SQL command/data as the new entry. Isn't it possible that after writing the CRC for the new entry, and while writing the SQL command text, the system crashes and this latest entry is incomplete? But once the system is fixed and up again, we'll read this latest entry as a valid one, because a CRC was assigned to it and we consider an entry valid because of that.
    Wouldn't it be better to first add the SQL command/data as a new entry into the WAL page, and then, if it is written successfully, assign the CRC to this new entry? That way, in whatever scenario the system goes down, we would be able to correctly figure out whether the latest entry was written correctly.

    • @kalyanben10
      @kalyanben10 1 year ago

      So you first want to read the entire record data and then read the CRC code? Don't you think that's not optimal? You first read the CRC code, then keep reading a fixed number of bytes and keep running the check; this way you don't overuse memory. The CRC check is done sequentially on a file, so the point is to read only a fixed number of bytes of the actual record, apply the check, discard whatever record data you read, load the next chunk of bytes, and repeat the process. If your record is very huge, you are unnecessarily overloading the system by reading the entire record.
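      A minimal sketch of that streaming check, reusing the illustrative LSN | CRC32 | length | payload layout from the earlier sketch:

      ```python
      import struct
      import zlib

      def verify_entry(f, chunk_size=4096):
          """Verify one log entry, reading the payload in fixed-size chunks."""
          header = f.read(16)
          if len(header) < 16:
              return None  # truncated header: treat as end of the valid log
          lsn, stored_crc, length = struct.unpack(">QII", header)
          crc, remaining = 0, length
          while remaining > 0:
              chunk = f.read(min(chunk_size, remaining))
              if not chunk:
                  return None  # torn write: payload shorter than the header claims
              crc = zlib.crc32(chunk, crc)  # running CRC; chunk is then discarded
              remaining -= len(chunk)
          return lsn if crc == stored_crc else None
      ```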

  • @kushagraverma7855
    @kushagraverma7855 1 year ago

    Awesome content. Would be great to have a Discord / Reddit page for each of the videos for further discussion.

  • @sakshamgoyal4079
    @sakshamgoyal4079 2 years ago +1

    😍

  • @ramyakmehra312
    @ramyakmehra312 11 months ago

    If we are flushing the data every 1 minute, let's say, how is consistency ensured in case a read happens before the changes are flushed? It will fetch from the disk, but the disk doesn't have the latest changes yet.

    • @Avinashkk360
      @Avinashkk360 8 months ago

      Until it is flushed, it stays in memory. And reads are addressed from memory/cache first, so it won't go to disk to fetch it.
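      A minimal sketch of that memory-first read path, where cache (a dict) and disk are hypothetical stand-ins:

      ```python
      def read_page(cache, disk, page_id):
          page = cache.get(page_id)
          if page is not None:
              return page  # latest version lives in memory until flushed
          page = disk.read_page(page_id)  # only already-flushed pages come from disk
          cache[page_id] = page
          return page
      ```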

  • @protyaybanerjee5051
    @protyaybanerjee5051 1 year ago

    TL;DR - Crash recovery guarantees in ALMOST all DBs are enabled by the judicious use of WAL.

  • @kumarprateek1279
    @kumarprateek1279 2 years ago

    😍