Was exploring MongoDB internals and needed to understand why we use B+ trees.
Awesome explanation, crisp and to the point!
Your explanation is absolutely clear. The simple approach makes the whole purpose of B-trees understandable. I was reading an advanced book that lacked the information and motivation about how to do a range scan from a leaf containing 101 to a leaf containing 601. Your example is very realistic and much easier to understand. Congrats, buddy!
To be precise: most implementations of a dynamic multilevel index use a variation of the B-tree data structure called a B+-tree. The leaf nodes have an entry for every value of the search field, along with a data pointer to the record (or to the block that contains the record) if the search field is a key (uniquely identifies each row) field. For a nonkey search field, the pointer points to a block containing pointers to the data file records, creating an extra level of indirection.
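To make the two leaf-entry shapes concrete, here is a minimal Python sketch (illustrative only; RecordPointer and the entry classes are hypothetical names, not any engine's actual structures): a key search field maps one value to one record pointer, while a nonkey search field goes through an extra block of pointers.

```python
# Minimal sketch of the two leaf-entry shapes described above. Names are made up.
from dataclasses import dataclass
from typing import List

@dataclass
class RecordPointer:
    block_no: int      # disk block that holds the record
    slot_no: int       # slot/offset of the record inside that block

@dataclass
class KeyLeafEntry:
    """Leaf entry when the search field is a key: one value -> one record."""
    search_value: int
    record: RecordPointer

@dataclass
class NonKeyLeafEntry:
    """Leaf entry when the search field is not a key: one value -> a block of
    record pointers (the extra level of indirection)."""
    search_value: str
    pointer_block: List[RecordPointer]

# Example: id is a key, city is not.
id_entry = KeyLeafEntry(search_value=101, record=RecordPointer(block_no=7, slot_no=2))
city_entry = NonKeyLeafEntry(
    search_value="Bengaluru",
    pointer_block=[RecordPointer(3, 0), RecordPointer(3, 5), RecordPointer(9, 1)],
)
print(id_entry, city_entry)
```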
Now I am slightly confused between indexing and storing rows as B+ trees 😂
The B+ tree data structure is used to store both the rows and the index.
Since every table has only one clustered index, which determines how data is physically stored on disk, the clustered index B+ tree stores the row data as well.
But in a non-clustered index, the B+ tree does not have the actual row data, only a pointer connecting it to the clustered index, which has the actual data.
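A rough sketch of that difference, assuming a toy schema (the class names and layout are made up for illustration): the clustered index leaf carries the full row, while the non-clustered index leaf carries only the indexed value plus a reference back into the clustered index.

```python
# Assumed layout, not any specific engine: what a leaf entry holds in a
# clustered index versus a non-clustered (secondary) index.
from dataclasses import dataclass

@dataclass
class Row:
    id: int
    name: str
    age: int

@dataclass
class ClusteredLeafEntry:
    """Clustered index: the leaf entry IS the row, so the table is physically
    stored in primary-key order inside the B+ tree leaves."""
    key: int      # primary key
    row: Row      # full row data lives here

@dataclass
class SecondaryLeafEntry:
    """Non-clustered index: the leaf stores the indexed value plus a reference
    (here, the primary key) used to reach the clustered index for the row."""
    indexed_value: str   # e.g. the 'name' column
    primary_key: int     # follow this into the clustered index to get the Row

row = Row(id=101, name="Asha", age=30)
clustered = ClusteredLeafEntry(key=101, row=row)
secondary = SecondaryLeafEntry(indexed_value="Asha", primary_key=101)
print(clustered, secondary)
```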
Very well explained Arpit. Looking forward to more such videos.
YouTube is full of content and newbie creators.
But knowing the amount of research you put into your content, coupled with your rich hands-on experience, adds tremendous confidence to whatever we learn from your channel.
Thanks
Thanks for resonating 🙌 I ensure correctness and an in-depth understanding before making a video on a topic.
Happy that you saw the effort I have put in. Thanks again 🙌 Will continue putting out videos on things that matter.
Nice video, reminds me of how documents are stored on shelves at my home: containers saying what the documents are for (say, ITR), followed by files marking year blocks (say, 2010-2020), then individual files, and then individual documents.
It's explained in great depth, just like a good book author would explain it. Highly appreciate your efforts.
Good enough for interview preparations...thanks a lot
You mentioned in the first approach that while updating a row we cannot move beyond the width of the row, but later you mentioned we have a fixed row size. So in what case would we exceed the width and overwrite other data in the naive file storage approach?
Beautifully explained...hats off to you!
A video suggestion: the difference between MySQL and Postgres, how they index differently, and how that affects read, insert, update, and delete queries.
For the first approach (discussed before the B+ tree):
- All rows are stored in a file.
- Can we not have 4KB (= disk block) sections in that file too?
- If there were 1000 rows, each block would now hold 100 rows.
- So, to go through all 1000 rows, I do 10 disk reads.
- Hence, findOne is not O(n) but O(number of disk blocks)? (a small sketch of this appears after the reply below)
- Also, when inserting:
- Again, can we not bring those 10 blocks into RAM,
- rearrange them, and flush everything back to the file on disk (similar to what we did for the B+ tree)?
1. That is exactly how disk I/O happens, but how will you do random reads?
2. Today you are talking about 10 blocks, but you will not have just 40 KB of data; it will run into GBs if not TBs. How will you do it then?
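For what it's worth, here is a toy sketch of the block-scan idea from the question above, assuming fixed-width 40-byte rows packed into 4 KB blocks (the sizes and file name are arbitrary): findOne costs one read per block rather than one per row, which is the question's point; the reply's concerns about random reads and data volume still apply.

```python
# Toy block-scan over a file of fixed-width rows. Sizes are illustrative.
import os, struct

BLOCK_SIZE = 4096
ROW_FMT = "<i36s"                          # 4-byte id + 36-byte padded payload = 40 bytes
ROW_SIZE = struct.calcsize(ROW_FMT)        # 40
ROWS_PER_BLOCK = BLOCK_SIZE // ROW_SIZE    # 102 rows fit in one 4 KB block
CHUNK = ROWS_PER_BLOCK * ROW_SIZE          # bytes actually used per block

def write_rows(path, n):
    """Write n fixed-width rows one after another."""
    with open(path, "wb") as f:
        for i in range(1, n + 1):
            f.write(struct.pack(ROW_FMT, i, f"row-{i}".encode()))

def find_one(path, target_id):
    """Scan block by block: worst case is one read per block, not per row."""
    reads = 0
    with open(path, "rb") as f:
        while True:
            block = f.read(CHUNK)          # one "disk read"
            if not block:
                return None, reads
            reads += 1
            for off in range(0, len(block), ROW_SIZE):
                row_id, payload = struct.unpack_from(ROW_FMT, block, off)
                if row_id == target_id:
                    return payload.rstrip(b"\x00").decode(), reads

write_rows("rows.dat", 1000)
print(find_one("rows.dat", 601))           # ('row-601', 6): ~6 block reads, not 601 row reads
os.remove("rows.dat")
```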
Sir, I'm not able to picture it when you say the B+ tree is serialised and stored on the disk. Could someone please help me understand and visualize this better?
Each node in the B+ tree is stored in one disk block. A naive way to imagine it: treat each node in the B+ tree as one separate file on the disk.
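One way to picture "serialised and stored on disk", assuming a made-up node format (flag, key count, keys, child block numbers) padded to 4 KB: a node is just bytes at a known block offset in the index file, so reading a node is a single seek-and-read.

```python
# Naive "one node = one 4 KB block" sketch. The on-disk format here is invented.
import os, struct

BLOCK_SIZE = 4096
HEADER_FMT = "<BH"                 # 1 byte: is_leaf flag, 2 bytes: number of keys

def serialize_node(is_leaf, keys, children):
    """Pack a node into exactly one block: header, keys, then child block numbers."""
    body = struct.pack(HEADER_FMT, int(is_leaf), len(keys))
    body += struct.pack(f"<{len(keys)}i", *keys)
    body += struct.pack(f"<{len(children)}i", *children)
    assert len(body) <= BLOCK_SIZE, "node must fit in one disk block"
    return body.ljust(BLOCK_SIZE, b"\x00")   # pad so every node is block-aligned

def read_node(f, block_no):
    """Random access: seek to block_no * BLOCK_SIZE and read exactly one block."""
    f.seek(block_no * BLOCK_SIZE)
    return f.read(BLOCK_SIZE)

# Block 0: a leaf with keys 1..3; block 1: an internal node routing to 4 children.
with open("index.db", "wb") as f:
    f.write(serialize_node(True, [1, 2, 3], []))
    f.write(serialize_node(False, [100, 200, 300], [4, 9, 13, 21]))
with open("index.db", "rb") as f:
    node = read_node(f, 1)                        # one seek + one 4 KB read
    print(struct.unpack_from(HEADER_FMT, node))   # (0, 3) -> internal node with 3 keys
os.remove("index.db")
```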
One question here Arpit
The leaf nodes of the tree contain the actual rows of the table, so the leaf nodes utilize the disk block size efficiently.
But the internal nodes of the tree contain only the child keys and pointers, so it seems the internal nodes do not fully utilize the disk block size.
So overall, if we have more internal nodes, we are unnecessarily wasting disk blocks.
Please let me know if I am thinking in the right direction.
This is a trade-off made in order to achieve better performance and efficient range searches.
These videos are extremely helpful. Makes me curious to explore more on my own. Thanks!
Amazing. That's precisely the attitude I wanted to percolate. Thank you for resonating 👍
loved your easy to understand explanation, thanks a lot for the new perspective on this!
So wonderful, very digestible content
Simple and clear! Looking forward to more of these...
Can you share the notes?
Unable to download them from your website.
Amazing, I really appreciate your effort; the way you teach us is great. Thanks, man.
Doubt:
So now the 1st leaf node (block) contains 3 rows.
When we update its 1st row to the extent that it occupies the whole space of the leaf node (block),
will rebalancing happen on the 1st leaf node and a new node be created for the 2nd and 3rd rows?
What will this look like?
Read more about B+ trees; you will find answers to all the questions you've asked.
Hi @Arpit, just asking out of curiosity: can one see the internal implementation of how data is stored in a database?
MongoDB does not use a B+ tree.
In a B+ tree, keys are duplicated in the internal nodes, which point down to the leaf nodes; at the leaf level we can read the value directly, and we can also run range queries.
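A small in-memory sketch of why the linked leaf level makes range queries cheap (the Leaf class and key values are illustrative, not MongoDB's or any engine's actual layout): once you reach the first leaf, you only follow sibling pointers.

```python
# Tiny leaf level: [1..3] -> [101..103] -> [601..603], linked like on-disk siblings.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Leaf:
    keys: List[int]
    next: Optional["Leaf"] = None        # pointer/offset to the next leaf on disk

l3 = Leaf([601, 602, 603])
l2 = Leaf([101, 102, 103], next=l3)
l1 = Leaf([1, 2, 3], next=l2)

def range_query(start_leaf: Leaf, low: int, high: int) -> List[int]:
    """Collect keys in [low, high] by scanning the leaf chain from start_leaf."""
    out, leaf = [], start_leaf
    while leaf is not None:
        for k in leaf.keys:
            if k > high:
                return out
            if k >= low:
                out.append(k)
        leaf = leaf.next                 # one more sequential block read
    return out

# In a real tree the internal nodes take us down to l1 for key 2; after that we
# never go back up, we just follow next pointers.
print(range_query(l1, 2, 602))           # [2, 3, 101, 102, 103, 601, 602]
```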
But for the file case too, can't we do a similar operation of reading the entire file, bringing it into memory, shifting the data around, and finally flushing it back to disk?
What advantage does the B+ tree offer? The only one I can think of is that with B+ trees you won't have to touch the entire data but only those specific 100 rows. Then can't we have a design where we store 100 rows in one file and the next 100 in another file and achieve the same output/performance?
Hey Arpit, help me understand the case when a DB update or insert operation cannot take place in the given block due to limited space. How does the DB manage inserts and updates in such scenarios?
How the database manages that depends on how it handles space management. It could either allocate a bigger region and move the existing data over there, or it might fill up empty gaps caused by fragmentation within the existing region.
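Another common strategy, sketched below under simplified assumptions (a node split, with MAX_KEYS standing in for "what fits in a 4 KB block"): when a leaf overflows, split it in two and push a separator key up to the parent.

```python
# Common B+ tree overflow handling: split the over-full leaf. Minimal sketch.
from typing import List, Tuple

MAX_KEYS = 4     # pretend only 4 entries fit in one 4 KB block

def insert_with_split(leaf: List[int], key: int) -> Tuple[List[int], List[int], int]:
    """Insert key; if the leaf overflows, split it and return (left, right, separator)."""
    leaf = sorted(leaf + [key])
    if len(leaf) <= MAX_KEYS:
        return leaf, [], -1                      # no split needed
    mid = len(leaf) // 2
    left, right = leaf[:mid], leaf[mid:]
    return left, right, right[0]                 # separator key goes up into the parent

left, right, sep = insert_with_split([100, 120, 140, 160], 130)
print(left, right, sep)   # [100, 120] [130, 140, 160] 130 -> parent now routes >=130 to the new block
```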
How can I get your notes?
I have a doubt: if I insert another row at row 195, then all the following rows should also move forward (you have to do the same in B-trees too, right?). I don't get how B-trees solve this problem.
Exactly, the tree will store the pointer to the data, not the entire data.
For this very reason, indexes, such as B+ Trees, perform poorly in write-heavy systems. Indexes are primarily designed to enhance lookup speeds (reads), not writes. This inefficiency arises because each insert, update, or delete operation may require rearranging the tree to maintain its balanced nature. This rearrangement often involves splitting or merging nodes and updating pointers, which can be time-consuming. It is crucial to carefully consider the required access patterns and create only the necessary minimum number of indexes on a table.
No need to do a linear scan over sequential sorted rows. You can do a binary search.
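A quick sketch of that point, assuming the ids inside one block are kept sorted (block_ids and the sizes are illustrative): the in-block lookup becomes a binary search instead of a linear scan.

```python
# Binary search within one block's worth of sorted ids.
from bisect import bisect_left

def find_in_block(sorted_ids, target):
    """Return the slot number of target inside the block, or None if absent."""
    i = bisect_left(sorted_ids, target)
    if i < len(sorted_ids) and sorted_ids[i] == target:
        return i
    return None

block_ids = list(range(101, 201))        # one leaf/block holding ids 101..200
print(find_in_block(block_ids, 150))     # 49 (about 7 comparisons instead of ~50)
```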
Awesome explanation! Thank you
Why aren't AVL trees used, since their time complexity is also log(n)?
Arpit, great explanation!
I have a few questions:
1. Do you think the "1|101" information isn't fully utilizing the 4KB size?
2. What happens if the 4KB space is full and we need to add more data to a leaf node?
3. How does the system balance things if the leaf node (e.g., 1-100) is full and we need to add 8 more rows in between? Would we need to move data to another node?
Don't know about 1, but:
2. A new node is added to the B+ tree based on the range in which the key lies.
3. I think it is handled such that the number of rows admissible in a node does not exceed the 4KB size, i.e., it is bounded by sizeof(1|101).
Can Clustered and Non-clustered index be related to B+ tree store of the database?
For instance, if we create a primary key then a clustered index is created on it, and inside the index, pointers to the blocks are stored. So is that pointer pointing to the leaf node of the B+ tree?
Indexes are implemented as B+ trees. Indexes are implicit tables with 2 columns (indexed value and row id). Storage is very similar to how a table is stored.
@AsliEngineering So the numbers 100, 200, ..., 500 you talked about in the video, are they row ids?
when we index on a column, does it store a mapping of that column to rowId in B+ trees? In which case it has to do the B+ tree search?
Or is the index a mapping of column to physical address on the disk?
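A hedged sketch of the "two-column implicit table" idea from the reply above, with plain dicts standing in for the two B+ trees (the data and names are made up): a lookup by a secondary column becomes two searches, first in the secondary index and then in the clustered index.

```python
# Illustration of "index = (indexed value, row id)". Dicts stand in for B+ trees.

clustered_index = {          # primary key -> full row (leaves of the clustered tree)
    100: {"id": 100, "name": "Asha",  "city": "Pune"},
    200: {"id": 200, "name": "Ravi",  "city": "Delhi"},
    300: {"id": 300, "name": "Meera", "city": "Pune"},
}

name_index = {               # indexed value -> row id (leaves of the secondary tree)
    "Asha": 100,
    "Meera": 300,
    "Ravi": 200,
}

def find_by_name(name):
    row_id = name_index.get(name)            # search #1: secondary index
    if row_id is None:
        return None
    return clustered_index.get(row_id)       # search #2: clustered index

print(find_by_name("Meera"))   # {'id': 300, 'name': 'Meera', 'city': 'Pune'}
```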
Awesome explanation
I wanna learn more about this, any good references?
Very insightful video. Could you please make a video explaining the internal functionality of graph databases?
Thank you sir!!
beautiful! This is so cool
In case of an update, we can still face the same overflow issue, which pushes the time complexity to O(n). How do B+ trees handle this situation?
I mean, the new update might have a larger value while my leaf node is already fully occupied.
Do the leaf nodes contain data or pointers to the actual data?
I read in other places that it's pointers.
No. It is data. Read about the clustered index and how tables store data in the clustered index.
Great explanation Arpit👏
Does a DBMS behave more intelligently than an OS by bypassing filesystem I/O? If yes, what is the system call for reading/writing a disk block from user space?
Awesome explanation, loved it
Somehow, I am naturally very curious about how various kinds of databases/use cases store data at the lowest level i.e. disk level/SSD level.
How do various social websites/apps model and store the list of followers/followees of a user? Because after each friend request, this entire list changes.
Sources of my confusion:
(1) If we store this entire data in one field, that field will keep changing. Do relational databases even provide such a variable-length field? (I know about varchar, string, etc., but that doesn't seem like the right data structure for this to begin with.)
(2) I read some time back that Instagram uses Postgres. How does Postgres model such a variable-length field?
(3) Is there any social platform which uses NoSQL just to meet this criterion? That would sound funny though.
(4) I am inclined to split the discussion into 2 parts:
(a) If this field is updated in place like a regular B+ tree, then a separate block might need to be allocated every time the new field size causes the block to overflow.
(b) If we use a database which uses an SSTable/memtable and argue that the field will be appended after every change anyway, we will still be appending this entire field (the complete list of followers/followees after each new friend request) very frequently.
(5) Some references from real-world use cases of how actual systems implement this in practice would be very useful. Given your vast reading, I am sure you will be able to add value here.
(6) Graph databases might be an option, but in practice, are there social apps which use a graph database for this use case?
Thanks a lot!
If you find the doubt generic enough, you might make content on it for general consumption.
Jagrati
Hey Arpit
Really good content, but one doubt: in findById, when we reach the leaf node of the B+ tree, do we need to traverse it one entry at a time to find a particular row, or can we use binary search within the node as well?
Within a single leaf node the keys are kept sorted, so you can binary search inside the node. But the leaf nodes themselves can be scattered anywhere on disk and are linked like a linked list, so moving across leaves is sequential.
If a table has so many columns that the data in each row exceeds the 4KB block size,
will the block size be bigger,
or will the database partition the row on the basis of columns?
Great content as always.
Great detailed video, thanks Arpit. Had one doubt if you can explain:
At one point we say that finding one by id = 3 does 3 disk reads from the root of the tree to the leaf node. But isn't there caching involved, with indexes cached in memory so that disk reads are avoided? Also, does caching kick in only after the first call for a row/table, or are the indexes cached as soon as the DB is ready to take reads/writes?
Thank you
awesome.
Thanks
What quality these videos have, bhaiya!
What if we want to search by a name? How will the B+ tree work?
The same way; strings are comparable.
So they will have different indexing? And we would have another B+ tree? @AsliEngineering
Sir, what will happen if we use B-trees instead of B+ trees?
The intermediate nodes bloat up (given that data now resides there as well). This means we will have to read more data from disk to reach the rows present in the leaves.
Performance will degrade.
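A back-of-the-envelope illustration of that bloat, with assumed sizes (4 KB pages, 16 bytes per key+pointer entry, 200 bytes per row; real engines differ): putting rows in the internal pages shrinks the fanout, which makes the tree taller and adds disk reads per lookup.

```python
# Rough fanout/height arithmetic with made-up but plausible sizes.
import math

PAGE = 4096
KEY_PTR = 16        # assumed bytes per (key, child pointer) entry
ROW = 200           # assumed bytes per row

bplus_fanout = PAGE // KEY_PTR              # 256: internal pages hold only keys + pointers
btree_fanout = PAGE // (KEY_PTR + ROW)      # 18:  internal pages also carry the row

def height(n_rows, fanout):
    """Approximate number of levels (~disk reads) to reach a row among n_rows."""
    return math.ceil(math.log(n_rows, fanout))

for n in (10**6, 10**9):
    print(n, "rows ->", height(n, bplus_fanout), "reads (B+ tree) vs",
          height(n, btree_fanout), "reads (B tree)")
```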
At th-cam.com/video/09E-tVAUqQw/w-d-xo.html -> are we always assuming that all B+ tree nodes are part of one file, and therefore concluding that B+ tree leaf nodes contain the offset of the next leaf node?
with all due respect, you should choose better topics for the videos
Suggest a few then.