The traffic noise wasn't that bad. I heard everything that was being said and it added an interesting background.
+Aksel Anker Henriksen thanks, I did my best to filter it but at the time we had to keep stopping and starting because of trucks stopping next to us etc... >Sean
I agree with Aksel, but in my inexperienced opinion indoors would be better
I didn't notice it at all actually.
I went to the comments section to tell the exact same thing
It worked well.
I love how you pointed out some use cases of linked lists in real life! It helps me appreciate the data structure and makes me want to learn more!
I was mind blown when he gave real examples that we can all try/observe at our own homes. 😌
Really nice video. You should do more videos on data structures like binary trees, stacks, and queues.
Lots of people are saying linked lists are sh*t in terms of performance, and that might be true; however, they are very useful for starting to learn data structures, so you can more easily learn other kinds of structures like binary trees
Yes. That's why they teach linked lists in those computer science classes. They do have real-world applications - namely that after that, you know how to use std::map without blowing things up. :3
The best explanation of anything in 10 minutes I've seen, and the best jumping into linked lists explanation I've seen. Thanks, Doc! Love the on the street tutorial. Reminds me of many on the fly, brilliant napkin tutorials I've had with professors and genius artists over the years, with ale, coffee, or tea.
This is better than an entire college lecture
I'm loving these recent programming concept explanation videos! I hope the channel progresses to the more complex data structures over time, as those would benefit from the explanations more than the simple stuff.
Brings back fond memories of undergrad Comp Sci classes. Not one mention though of memory deallocation, the worst developer sin in languages without memory management.
Nerds make me happy. Makes me miss the CS culture from college. Keep up the awesomeness on this channel!
9:20 my thought: 'unless you're using vim'. A few seconds later in the video: 'unless you're using something like vim'. Great video :-)
The comparison to tab history was genius! Thanks for the explanation!
Oh my goooooood, an actual damn example wtf. It's mad how a real world example just makes everything click
I'm very impressed with Dr A here because he makes a concept that is extremely complicated so simple.
I really like how he explains things! I am working now on linked lists and it really helps to understand ! Thanks :)
You have no idea how helpful this is for new learners like me
"Pointing at null" seems semantically slightly different than saying "null pointer", which is what my CS courses taught me to put at the end of a linked list. "Pointing at null" means it's pointing at something (that happens to be null), whereas the concept of null pointer is that it isn't pointing at anything at all.
Most architectures have the null pointer as 0, but I've seen a few rare ones on which it was all 1s for the width of the word. And it has nothing to do with hardware. Just saying, pointing at (for example) a zero is not the same thing as a pointer whose machine value is zero.
The wikipedia example he gave was a pretty good application of linked list. Nicely explained!
Jesus christ. I said this on another video but the computerphile community is toxic. Like they criticize the shit out of every video.
I think it's more of a personality trait of the nerds.
but the numberphile and sixty symbols communities are nothing like this. It's specifically computer nerds.
I found the comments generally well mannered and useful, but leaving out the fact that you should never use linked lists in real life is a big, glaring omission.
No this place is extra terrible for sure. Compare to sixty symbols, numberphile or even better deepskyvideos.
I think it's because computer people aren't humbled enough as there's so many different correct answers. And we're taught to power through mistakes to a large degree.
The live example with the animation running as he explains it is great.
Wow. Where did he get all of that antique fan-fold printer paper? I haven't seen that much ffp since my freshman engineering days back in the 80's.
Double linked lists all the way!
You can also generalise if you want: add a reference to the “middle” node, which can fairly easily be maintained. Then inserting or editing the Nth item in your list can be done from the front, from the end working backwards, or from the middle in either direction. If you have a very long list then this added overhead is worth it and can halve your insertion/modify time cost. (You could generalise further with a reference to the item at each quartile, decile, or percentile too, if you had incredibly big lists.)
The noise wasn't too bad, I was just confused as to why you were casually explaining data structures in the streets of London, but oh well. Quite refreshing, I guess :D
My first class on C was in 1989, the professor started out by saying "I assume you all know how programming languages work, so we are going to focus on pointers and system programming in this class"
I am a heavy user of linked lists, but they also have their limits. I have found they are great for joining data into a pseudo-array, but when it comes to searching, an array is much faster, and an array is just a basic list.
When it comes to multicore use, a list with locks can be troublesome, but it far outweighs the impact of cache invalidations and excess memory bandwidth. For multiprocessors you can break lists down into per-core tasks and join them at the end of each task, which will far outperform an array copy.
Both have their advantages but profiling and ease of use also need to be considered.
Engineering is, has been, and will always be about trade-offs
It's a bit troubling though that linked lists are rarely more efficient and yet they are frequently taught as if they are extremely useful. They are a great learning tool, but they are generally very inefficient compared to contiguous arrays unless each element is very large, and it is rare to have very large elements. Always profile your code before settling on a linked list.
Uh no, you choose your list implementation based on the operations you plan on doing. If you want to keep adding at the end or front and won't be searching by index, a linked list is perfect
probE466 You choose your list implementation by profiling your code. Even in the scenario you describe, a contiguous list may be more efficient. There's also the issue of memory fragmentation.
I think they are mainly taught because it is simple to implement.
Making my first linked list about a year ago now helped me understand pointers like I had never before.
The problem, however, is that a linear search is necessary to find the item to remove or add, and that takes way more time in a linked list than in an array. This makes the array in most cases faster than a linked list.
Linked lists by themselves are not very useful. That's true. But the idea of linking "something" by references in memory _is_ useful. And it can be used for, for example, implementing graphs. So I think learning linked lists is very important.
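As a rough illustration of what this thread is describing, here is a minimal singly linked list in C (names and sizes are invented for the sketch, not taken from the video): insertion at the front is O(1), but everything else is a sequential walk along the next pointers.

#include <stdio.h>
#include <stdlib.h>

/* One node: a value plus a pointer to the next node (NULL at the tail). */
struct node {
    int value;
    struct node *next;
};

/* Insert a value at the front in O(1); returns the new head. */
struct node *push_front(struct node *head, int value) {
    struct node *n = malloc(sizeof *n);
    if (n == NULL) return head;          /* allocation failed; list unchanged */
    n->value = value;
    n->next = head;
    return n;
}

int main(void) {
    struct node *head = NULL;
    for (int i = 3; i >= 1; i--)
        head = push_front(head, i);      /* builds 1 -> 2 -> 3 */

    /* Traversal is strictly sequential: follow next until NULL. */
    for (struct node *p = head; p != NULL; p = p->next)
        printf("%d\n", p->value);

    /* Free every node before exiting. */
    while (head != NULL) {
        struct node *next = head->next;
        free(head);
        head = next;
    }
    return 0;
}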
Very good video. I particularly liked him using the web page examples as a means of explaining applications of double linked lists.
Oh, I need to know this for my exam in 4 days.
If you are interested, I have a video on creating a linked list using C++ on my channel, it might help out with your exam.
Had an exam on this last week. I guess I just got unlucky with the timing that they decided to post this. Good luck on yours btw :)
Matt Hopson I need to do it in Scheme.
thank you for the use cases!!
I like the traffic noise. It reminds me of the before times.
Awesome video, thanks for laying things out clearly without going boringly slow. I also liked the use of music paper lol
really liked this video. Every other piece of information i could find just explained what they are instead of the practicality. Understanding the use case makes it much easier to understand for me.
thanks for showing the streets, haven't seen that in a while :D
The back/forward web browser example was delicious
I hope this channel makes a video about algebraic data types and how to define a linked list using them.
In the Amiga OS, the doubly linked lists combined the head and tail pointers in a single list header, which eliminated the special cases for insertion or removal at the beginning or end.
I would have preferred if they taught you to set the link of the new node before you move the link of the previous one. Because if moving the link is an atomic operation (as it should be), then the list is never in an invalid state.
But if you start by moving the link of the previous node and then set the link of the new one, there is a chance that the state where the new link points to null (or garbage, depending on your language and compiler) is visible to other threads.
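A sketch in C of the ordering this comment argues for, reusing the node struct from the earlier sketch: the new node is fully wired up before the predecessor's pointer is redirected, so a reader never sees a half-built link.

/* Insert new_node after prev in a singly linked list.
 * Step 1 points the new node onwards; step 2 splices it in.
 * At no moment does following prev->next reach a node with an
 * uninitialised next pointer. */
void insert_after(struct node *prev, struct node *new_node) {
    new_node->next = prev->next;   /* step 1: new node points at the rest of the list */
    prev->next = new_node;         /* step 2: predecessor now points at the new node */
}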
The example of webpage history is very interesting and intuitive
undoing the future and creating a new future destroys the ability to visit the other future unless you have a tree structure...
Right, but in regards to browser history, there are only forward and back buttons, or past and future histories. There's no option to switch to different branches of the history. It's not Git :P
You just repeated something said in the video, genius.
Agent M Another option is to treat the undos themselves as new edits, like Emacs does. That way they can in turn be undone, and you keep the history.
+LoneTech Right, that's "partly" what Explorer in Windows does. A "back" event is not considered a change, but if you manually click on any parent folder it's a separate navigation step. Say you navigate a folder down to the third level. If you now click on the root folder, the back action will bring you back to the third level, then to the second, and so on.
However, if every undo counts as a new undo step you run into trouble, because you have to keep separate track of the current undo position. If "going back" is added as a new undo step at the end, going two steps back would bring you back to where you were at the beginning.
at 8:56
You wouldn't break the pointer from Neil Armstrong to NASA when linking to ISS from NASA. It doesn't matter if two objects point to NASA, and it takes cycles to break the link for no reason.
I know he is just demonstrating on paper and showing the concepts, but breaking the link before creating the new link for an insert operation is likely to cause concurrency problems. If you do it the other way around, you can do an atomic replacement of the pointer from "1", and "26" would already be pointing at "2" (and not appear like the end of the list if a concurrent iteration of the list is taking place).
If I am not mistaken, doing the same for a doubly-linked list would still be advantageous. Set up the links in the new node (next and prev), then change the next and prev of the prev and next nodes respectively.
The only problem that might occur between setting the new pointers is one of consistency, but an inconsistent state is better than an incorrect one. Although, since you can't do it atomically (at least I don't think you can set two addresses atomically), this could cause a problem depending on what your application is doing.
That doesn't really solve concurrency. It might depend on the language / framework and the actual use case though. If another thread is currently iterating the list and you "cut off" a part, two things could happen: either the other thread is still before your cutting point, in which case there's no problem, or the other thread is already beyond the cutting point, in which case how do you remove / destroy the cut-away nodes? You, as the one who's changing the list, cannot know whether someone is still using those nodes or not.
If you want a data structure to be thread-safe there are many ways to achieve this, but it highly depends on the use case and the language / framework. I mention that since frameworks like .NET / Mono or the Java VM have managed memory, so you don't have to worry about invalidating memory (unless you use a node pool), whereas in languages like C or C++ you have to free the memory yourself. Doing lock-free implementations is a dangerous field where you have to consider all possible cases. If you want to be safe, implement proper synchronisation, in which case it doesn't matter in which order you update the pointers ^^.
PS: How is an inconsistent state better than an incorrect state? If it's inconsistent it IS incorrect ^^. Inconsistency is the worst thing you can do to your program, ever. You should never accept even the slightest bit of inconsistency, as it will, sooner or later, cause a race condition in a multi-threaded environment.
He was talking about linked lists, not concurrent linked lists. Off-topic.
That bothered me too. If you break the link to node 2, it will be forever lost in memory, as well as every node that is only reachable through it.
This is just a high level example... Not actually implementing the functions to add a node... lol relax
The OS/2 Web browser DIDN'T break the forward links when you chose a new path. It kept a tree view of all visited pages. Man, I miss that.
Don't worry too much about the noise. I have seen many videos from Royal British Famous Institutions which were worse.
Awesome ❤
The linked list surely creates an exciting pointer-handling job, but it is cool if your only option is to always access the heap.
When you are surfing the internet you are not really traversing a linked list data structure; the internet is more like a big (really big, and it keeps getting bigger every second) directed graph. Where you can get the feel of a linked list is any image viewer on your system -- in its UI, the '->' button is node->next and the '<-' button is node->prev.
That's probably why a recursive function is better for searching entries in a linked list than an array: you don't need to specify an index, because there is no index for each entry of a linked list. If no result is found, the null at the tail of the linked list will terminate the recursive function; otherwise it would loop forever.
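A sketch of that recursive search in C, assuming the same node struct as the earlier sketch: the null pointer at the tail is the base case that stops the recursion.

/* Return the first node holding 'target', or NULL if the NULL-terminated
 * list does not contain it. */
struct node *find(struct node *head, int target) {
    if (head == NULL) return NULL;            /* reached the tail: not found */
    if (head->value == target) return head;   /* found it */
    return find(head->next, target);          /* keep walking */
}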
minor nitpick: in the drawing, the arrows point to the following pair. in the generated graphic, the arrows point to the *element* of the next pair. The latter is technically incorrect.
Also, the last element doesn't point to null, its pointer *is* null.
well shot and edited
I like the term "non-blow-away-able".
When adding a node in the middle of the list, you cannot break the link first, as the reference to the node that the link was referencing would be lost. Create a temp variable that points to this node, break the link, then point it to the new node, make the new node point to the next node (via the temp variable), and remove the temp variable.
Usually linked list nodes contain two addresses: the forward link address and the address of the data element. Also, I've always pointed the last link to the first, such that you can detect the end when the head address is the same as the final link address.
Unless your list is a list of primitives.
I think it's better to point to null at the tail: checking if an address is zero is less computationally expensive than a comparison which may require loading the head value into a register to see if they are equal.
It also blurs the responsibility in code as to who is going to determine that it's the end of the list: at what point is the comparison made, and after the comparison is made, what is the function going to do? If it's going to return null, you might as well have had null as the next pointer and not have to do any checks when reading the next pointer (leaving them to when the function caller checks for null).
Doubly-linked lists are different from singly-linked lists. Doubly-linked should only be used when you need to move backwards in the list; otherwise, it's a waste of both memory and processor cycles.
Same with circular linked lists (where the tail points back to the head), but in this case, you're not losing anything, so that's more of a style choice than anything.
Yes, very true, I hadn't considered the list of primitives. I don't disagree about the null end pointer; it's really a style choice. There's not a lot in it computationally.
tobortine It's a little more than a style choice. If you use null you don't have to pass the head around with each computation. It's a small cost but still worth remembering
Why do you have to pass the head around? With each new link address you compare it to the head address as opposed to comparing it to null. No extra overhead.
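For comparison, the two end-of-list tests being debated here might look like this in C (a sketch, assuming the same node struct and headers as the earlier example): one stops at NULL, the other stops when the next link comes back round to the head.

/* Null-terminated list: stop when the pointer is NULL; no need to carry the head around. */
void print_null_terminated(struct node *head) {
    for (struct node *p = head; p != NULL; p = p->next)
        printf("%d\n", p->value);
}

/* Circular list: the last node points back to the first,
 * so the end test compares against the head instead of NULL. */
void print_circular(struct node *head) {
    if (head == NULL) return;
    struct node *p = head;
    do {
        printf("%d\n", p->value);
        p = p->next;
    } while (p != head);   /* stop once we are back at the start */
}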
Nice! Especially the case examples you used.
Undo/redo operations I've always implemented using two "stacks" (arrays).
do something new, push to the undo stack and clear the redo stack.
press undo, pop the undo stack and push it to the redo stack.
press redo, pop the redo stack and push it to the undo stack.
On the other hand, you can model undo/redo as a pair of stacks. I don't know if that would be more efficient, but it whittles the functionality down to only the bare necessities for that behavior.
To elaborate: when a change is made, push the current state onto the undo stack. When UNDO is pressed, pop the top of the stack, set the current state to that popped state, and then push the state onto the redo stack. When REDO is pressed, just do the same thing in reverse. (And of course, clear the redo stack when any other changes are made.)
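A bare-bones version of that two-stack scheme in C, with an int standing in for the document state (HISTORY_MAX and the names are arbitrary choices for the sketch):

#define HISTORY_MAX 100

static int undo_stack[HISTORY_MAX], redo_stack[HISTORY_MAX];
static int undo_top = 0, redo_top = 0;    /* number of items on each stack */
static int current_state = 0;

/* A new edit: remember the old state for undo and clear the redo stack. */
void do_edit(int new_state) {
    if (undo_top < HISTORY_MAX) undo_stack[undo_top++] = current_state;
    redo_top = 0;
    current_state = new_state;
}

/* Undo: current state goes onto the redo stack, previous state is restored. */
void undo(void) {
    if (undo_top == 0) return;
    redo_stack[redo_top++] = current_state;
    current_state = undo_stack[--undo_top];
}

/* Redo: the mirror image of undo. */
void redo(void) {
    if (redo_top == 0) return;
    undo_stack[undo_top++] = current_state;
    current_state = redo_stack[--redo_top];
}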
IceMetalPunk That was my thought, but then I thought about how to best implement the stacks and a linked list is probably better than an array because you add frequently and don't require random access. Though I would probably create a memory pool so that allocating memory can be batched up. Maybe each node would contain a small set of operations to reduce the memory overhead of the pointers, but still better than implementing the stacks as arrays.
I had a similar thought. I can say for sure a stack is perfect if you just need to implement undo, but maybe a linked list would be better for both undo and redo(?). I can say I've used a stack for implementing time reversal in a video game, and it was well suited to it.
Yes, a stack is a solution - until you consider that you may want to limit how many undo steps are being preserved, since at some point you probably want to start discarding the oldest undo steps in order to not consume more and more memory with every action taken.
+AnotherUselessNick
Well, an undo stack can be implemented as a circular buffer. So you have a fixed-size buffer, a start pointer, and an end pointer. The start doesn't have to be fixed. When you reach the end of the buffer you simply wrap around and start again from the beginning, overwriting the oldest entry. At the same time the start pointer is moved forward, so the second element is now the "first" and the first in the buffer is actually the last.
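A sketch of that circular-buffer idea in C (capacity and names invented for the example): when the buffer is full, pushing simply overwrites the oldest entry instead of growing the history.

#define UNDO_CAPACITY 64

struct undo_ring {
    int states[UNDO_CAPACITY];  /* stored states; oldest gets overwritten first */
    int start;                  /* index of the oldest entry */
    int count;                  /* how many entries are currently held */
};                              /* initialise with: struct undo_ring r = {0}; */

/* Push a state; if the ring is full, advance 'start' so the oldest entry is dropped. */
void ring_push(struct undo_ring *r, int state) {
    int end = (r->start + r->count) % UNDO_CAPACITY;
    r->states[end] = state;
    if (r->count < UNDO_CAPACITY)
        r->count++;
    else
        r->start = (r->start + 1) % UNDO_CAPACITY;   /* oldest entry overwritten */
}

/* Pop the most recent state; *ok is set to 0 if the ring was empty. */
int ring_pop(struct undo_ring *r, int *ok) {
    if (r->count == 0) { *ok = 0; return 0; }
    r->count--;
    *ok = 1;
    return r->states[(r->start + r->count) % UNDO_CAPACITY];
}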
I was just going over this in CS50! now do tries! Those hurt my brain :P
Could you have a repeating list where you pointed the last node back towards the first?
It is called a circular linked list.
Matt Hopson
cheers
skunkFU25
A graph is a very general term; a circular linked list is specifically what this guy asked for.
+skunkFU25 No. A linked list is a data structure, a graph is, well, a graph. However, you can represent a linked list with a graph. In this case it would be a directed cycle graph for a singly linked circular list or an undirected cycle graph for a doubly linked circular list.
+hellterminator Graphs are data structures too, much like arrays and hash tables.
Great video. I want to see more videos like this.
I used a linked list for a gamestate manager. The current gamestate was the head of the list, and every time you started a new state (like if you opened your inventory) I would push the new state onto the stack. Since I never had more than 3 items in the list, even if I had to access a state at the bottom of the list (which was never actually necessary) it wouldn't be too costly. That being said, in retrospect it was dumb, considering it would have virtually no performance benefit, and all I accomplished was wasting my time implementing a linked list in Lua for one stupid purpose
Terrific video... The example was great, plus the location is icing on the cake. It makes learning pretty pleasant and real, unlike sitting at a computer and talking like a robot :D
What happens, in memory terms, to the linked list elements that are cut away when you click another link?
They become orphaned and are no longer accessible. Poorly designed programs might just leave them there, and over time more and more memory is consumed but isn't actually being used; that's called a memory leak. The proper way to deal with it is to deallocate the memory, which is just telling the operating system that you don't need it anymore and it can be used for something else. Some languages do this automatically in a process called garbage collection, but some lower-level languages like C require the programmer to do this manually.
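In a language like C that manual step amounts to walking the cut-off chain and freeing each node before the last pointer to it is lost; a sketch, reusing the node struct from the earlier example:

/* Free every node in a chain that has been unlinked from the live list.
 * After this, none of those addresses may be used again. */
void free_chain(struct node *orphan) {
    while (orphan != NULL) {
        struct node *next = orphan->next;   /* remember where to go before freeing */
        free(orphan);
        orphan = next;
    }
}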
So a linked list is like Back to the Future, basically... you can go back and forth as much as you want until you change something, which then alters your "future" in the list.
I don't mind the traffic audio. It makes the video interesting.
I didn't see it mentioned, but you also have to account for the amount of data in the list. Example: the first block has the number of bytes in the list, then the tail says whether it's linked or not. A classic example is a file system. You don't want a linked list of single bytes, as the link pointers would take up more space than the data. Just FYI.
One comment I felt is needed on this video for any students possibly watching: when adding or deleting a node, you always make the new links before you remove the old ones. It was really bugging me when he was removing the old links before making the new ones. If you delete the link between 2 and 3 before you create the link from 26 to 3, you will lose access to 3 and all subsequent nodes.
I prefer a balanced binary tree, which allows sequential traverse of the entire list as well as binary search of the ordered list.
If you paste your secrets somewhere, don't delete them, but undo them and overwrite the redo history. That way if somebody sneaks up on your Word document, they won't find out... unless they are really geeky and for some reason it's still stored in memory... but realistically... meh.
0:47 I was wondering what the significance was of showing the arrow curved rather than straight, just how he drew it,
but at 1:18, yeah, curved arrows are more suitable for conveying the non-contiguity. Nice detail.
Sounds like a queue list, with only one link it will just jump forward as you dequeue.
I love the comments telling the Doctorate how he should have done something
Very Informative, Thank You
I'm studying computer science at a Mexican university and we have two data structures courses which cover this topic in depth (alongside other things like trees and graphs). We even need to program the lists from scratch.
Those courses' names (translated from Spanish) are: Object-Oriented Algorithms and Linear Storing Patterns, and Object-Oriented Algorithms and Non-Linear Storing Patterns
Why is he nervous? He explains very well 😃
Great explanation indeed! Thank you very much, it was all clear. I'm subscribing.
In real life very few programmers use linked lists directly, and when they do they use a library, or they use a "growable array" that is really implemented as a highly advanced data structure, probably a hybrid between an array and a linked list.
Yes, just like how nowadays engineers use programs instead of calculating stuff on paper, but still every engineer must know math and physics
Just an fyi for Computerphile and everyone I guess, the diagrams you did are a little bit wrong. The arrows should not be pointing inside the nodes, because this would mean the association is made with the value inside the node and not the actual node. As you can see, when Dr Alex Pinkney made his drawing on paper, he respected this law. He made the arrows pointing at the node, not inside of it.
Bad example. You should have inserted the number "1.4" :)
If you look at dictionaries, internally these are often realized as linked lists. This could theoretically also be done with Arrays - once you add a layer of abstraction there is no need to use contiguous memory for them.
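One concrete way dictionaries lean on linked lists is separate chaining in a hash table: each bucket holds a small linked list of the entries that hash to it. A toy sketch in C (bucket count, names, and the hash function are all invented for the example; it also assumes the key strings outlive the table):

#include <stdlib.h>
#include <string.h>

#define BUCKETS 64

struct entry {
    const char *key;       /* not copied: the sketch assumes the key stays alive */
    int value;
    struct entry *next;    /* next entry in the same bucket's chain */
};

static struct entry *table[BUCKETS];

/* A deliberately simple string hash; real implementations use something better. */
static unsigned hash(const char *s) {
    unsigned h = 0;
    while (*s) h = h * 31 + (unsigned char)*s++;
    return h % BUCKETS;
}

/* Insert or update a key: walk the bucket's linked list; if absent, prepend a node. */
void dict_put(const char *key, int value) {
    unsigned b = hash(key);
    for (struct entry *e = table[b]; e != NULL; e = e->next)
        if (strcmp(e->key, key) == 0) { e->value = value; return; }
    struct entry *e = malloc(sizeof *e);
    if (e == NULL) return;
    e->key = key;
    e->value = value;
    e->next = table[b];
    table[b] = e;
}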
Kudos for mentioning vim and its history tree
couldn't think of something more natural to come between 1 and 2? like 1.5?
Brought to you by Frank Harris & CO
please do more videos on data structures
'So a linked list is the first serious data structure you learn about because it's _very simple_' Damn you guys are smort.
I can't believe that I've never noticed that when I create a new node or action in things like do and undo, or the backward and forward buttons in a browser, I delete the previous forward history! I also can't believe I finally get this thing 😌 I couldn't get it in class and it's really been bothering me.
Would going to the ISS page not only break the NASA->Armstrong link but also the Armstrong->Apollo 11 link? Because if from ISS you went back to NASA and clicked the Armstrong link again, your forward button doesn't suddenly allow you to go forward to Apollo 11 again.
Peterolen mmm... yeah, didn't think about that, thanks for the explanation :)
I think the example with the page navigation is better modeled by an array, since you're only pushing at the back of the structure and only popping from the back. It doesn't highlight any of the strengths of a linked list.
But clearing the next history (when you go back and then change link) with an array takes O(n) iterations
+SnowyCoder Not really. In fact, it's actually the opposite. With a linked list you explicitly have to free all the nodes you drop at the end, while with an array, there's no need to deallocate anything. You may need to deinitialize the values, but that's about it.
Yeah sorry, I was thinking with java xD
Not true. In fact, with a linked list it is O(n) (because you need to free the memory you used for the O(n) nodes which you are clearing), but with an array you can do it in O(1): you just set the size of the array to a new number.

See, an array is represented as a contiguous block of memory of size, let's say, MAX_N. Since you're not using this entire array, you store a variable array_size which holds the amount of memory you are currently using (for the purpose of this comment, let's assume the elements are each 1 byte in size, so the physical size in memory of the array equals the number of elements). Now, if you want to push an element to the back of the array, you just overwrite the memory at address array_size with the new information, and then increase array_size by 1 to signify that you are using 1 more byte of memory. If you want to pop an element from the back of the array, all you have to do is decrease the variable array_size by 1. You do not need to free the memory (in fact you cannot), since the array is one contiguous block; you cannot free memory in the middle of it. It doesn't matter that the data is still physically present at that location in memory, since the next time we add an element to the array it will be overwritten anyway. Similarly, if you want to pop n elements from the back of the array, all you have to do is set the variable array_size to array_size - n. You don't need to delete each individual element; you are just marking them as "unused" and they will be overwritten once new elements get added.

Note that if you don't know a priori an upper bound for your array (that MAX_N I was talking about), you will sometimes have to extend your array, which means reallocating the memory to a different location, which does indeed take O(n). That's why we say that operations on an arbitrarily-sized array take amortized O(1) time, meaning they usually take O(1) but sometimes (very rarely, since you won't need to resize your array often) they will take more. If you want to investigate further, you can read up on the std::vector structure from C++; it is implemented on this very notion.

Note that in this example I assumed the elements stored in the array don't need to be forcefully closed when no longer needed (such as database connections or whatever), since in that case you cannot escape the destruction of each individual element, and therefore O(n). But that will happen with both arrays and linked lists.
You can achieve O(1) in Java as well, you just have to be a little tricky with the way you write your code. That is, don't use the already implemented ArrayList class; instead, write your own as described in my previous comment :). Or you can use ArrayList but don't use the add and remove methods provided. Not the "proper" way of doing things but hey.
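A minimal version of what that comment describes, in C rather than Java (a hand-rolled growable array; the names are just for the sketch): popping is only a size decrement, and pushing reallocates only occasionally, which is where the amortised O(1) comes from.

#include <stdlib.h>

struct int_vec {
    int *data;
    size_t size;       /* elements currently in use */
    size_t capacity;   /* allocated slots */
};                     /* initialise with: struct int_vec v = {0}; */

/* Push: amortised O(1). Only when size reaches capacity do we reallocate (doubling). */
int vec_push(struct int_vec *v, int x) {
    if (v->size == v->capacity) {
        size_t new_cap = v->capacity ? v->capacity * 2 : 8;
        int *p = realloc(v->data, new_cap * sizeof *v->data);
        if (p == NULL) return -1;    /* out of memory; vector unchanged */
        v->data = p;
        v->capacity = new_cap;
    }
    v->data[v->size++] = x;
    return 0;
}

/* Popping n elements is O(1): just shrink the logical size.
 * The old values stay in memory until later pushes overwrite them. */
void vec_pop_n(struct int_vec *v, size_t n) {
    v->size = (n > v->size) ? 0 : v->size - n;
}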
Vim represent!
Why wouldn't an array work for browsing history?
What happens when item 1 points to item 2 and item 2 points to item 1?
really cool video
Thank you for the lesson
Can you do one on positional lists next?
Why, in the singly linked list, do you show an example with numbers,
and for the doubly linked list an example with objects?
What's the difference between linked lists and queues and deques?
Have you got the Goodrich and Tamassia book?
I think you just described time travel.
Linked lists for browser history is a good example of a graph-based data structure that was implemented as a linked list and ruined my life. There are others.
There's a back to the future joke waiting to happen at the end.
Is this something like an array list?
can you do a video on time & memory complexity?
So when we delete a node, we essentially unlink it from the list and don't really delete it from memory. For an array, we use delete[] array; but how do we actually delete this unlinked node?
Can you please do a video on segment tree
It's like saying: hey mate, can you please explain linked lists to me? I'll buy you the coffee. And he says: yeah sure, let's go.