finally someone explaining this information like they're talking to beginners, not seasoned engineers
I have one doubt... why is the time complexity of adding an element at the ith position in a linked list big-O(n)? It should be big-O(i)
@@romir_07 in the worst case the ith position can even be the nth position, therefore we consider it O(n)
@@romir_07 we add an element at the end, but to find which element is the end we need to start from the head and go all the way to the end, and the worst case here is obviously O(n)
@@romir_07 Big-O is used to denote the worst case, that is why it's n - whichever is the max!
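The replies above can be made concrete with a small C++ sketch (the names `Node` and `insertAt` are mine, not from the video): the function counts how many nodes a position-i insert visits, and inserting at the tail position of an n-node list visits all n nodes - the worst case.

```cpp
#include <cassert>

struct Node {
    int value;
    Node* next;
};

// Insert v so it lands at 0-based position i, walking from the head.
// Returns the number of nodes visited, to make the O(i) walk visible.
// (If i is past the end, the insert is clamped to the tail.)
int insertAt(Node*& head, int i, int v) {
    Node* fresh = new Node{v, nullptr};
    if (i == 0 || head == nullptr) {   // insert at the front: no walk needed
        fresh->next = head;
        head = fresh;
        return 0;
    }
    int steps = 1;                     // we are standing on the head node
    Node* cur = head;
    while (steps < i && cur->next != nullptr) {  // walk to node i-1
        cur = cur->next;
        ++steps;
    }
    fresh->next = cur->next;
    cur->next = fresh;
    return steps;
}
```

Inserting at the front costs 0 visits; inserting at position n of an n-node list costs n visits, which is why the worst case is quoted as O(n).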
dude, you basically just saved my whole semester; I was studying with difficult lectures before, now all is ez
The way you explain, I could listen to these videos for hours and not get bored. Absolutely mindblowing content man, keep doing the good work.
*My takeaways:*
*Computation cost: array vs linked list*
1. Cost of accessing an element 1:10: array O(1) and linked list O(n)
2. Memory usage 4:26
3. Cost of inserting/deleting an element 8:30: array O(n) and linked list O(n)
4. Easy to use 11:35: array is easier than linked list
Are you Chinese?👀
As a 9th grader I really appreciate how you are using pictures and analogies to explain what can be confusing concepts at first.
Hi Prasanth,
Thanks for your kind words.
We will first complete some of the existing playlists and then we will move to OOP concepts. We are building a mechanism to track video requests on our website. Anyway, for now, your request is in queue.
Time complexity is always the worst case scenario, which in this case is adding an element at the beginning of the array, requiring all elements to shift.
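That worst case can be sketched in a few lines of C++ (the function name `insertFront` is mine): inserting at index 0 of an array holding n elements moves every one of them.

```cpp
#include <cassert>

// Insert v at index 0 of arr[0..n), shifting existing elements right.
// Returns the number of element moves, which is exactly n -- the
// worst case for array insertion. arr must have room for n+1 elements.
int insertFront(int arr[], int n, int v) {
    int moves = 0;
    for (int k = n; k > 0; --k) {  // shift right, starting from the back
        arr[k] = arr[k - 1];
        ++moves;
    }
    arr[0] = v;
    return moves;
}
```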
where have you explained oops?
I have one doubt... why is the time complexity of adding an element at the ith position in a linked list big-O(n)? It should be big-O(i)
@@romir_07 i is a constant; you need to shift (n-i) elements to the right. This means the time complexity is O(n-i), which is the same as O(n)
What a loss! :( :(
I can watch your videos as a movie and not get bored :) Awesome!
Best of the developers! Those who knew about these legends already, raise your hand pls. ✌🏻🙌🏻
Good videos, makes more sense than what was taught in my data structures course here in the U.S.
nice layout and subtitles. I really like the video. very professional. keep it up :)
srsly, i don't know your name and who you are but for now you are a god to me. thank you so much. i longed so badly for learning this, now i understood this
another really great video - i love the way you teach and how clearly you explain the concepts. i find this useful even when i'm not trying to implement in C / C++
Excellent. Very simplified and easy to understand for first time learners as well.
Learning DSA now in 2021 and still so helpful
Harsha left a legacy behind. RIP
It's so great even now, when there are lots of videos on this topic. Good job...
A dynamic array that grows its size when its previous maximum length is exhausted is called an ArrayList in Java
The next "Khan Academy" but this time from India.
really??
unfortunately, he died before achieving such a thing
@@danilo2735 He didn't die, but a friend of his did.
@@mohammedfawaz289 go to their Facebook and you'll see the post
@@danilo2735 Yea I saw it years ago, but I think that was his friend, not the one who speaks in the video.
Excellent Way of Explaining Programming Concepts... Great Work Sir !!!
nicest online video series on data structures I have ever seen on YouTube; a mechanical engineer like me can learn the concepts clearly
love your tutorials. Wish you guys continued no matter what.
Part 2, contd.: we often say that our machine is 32-bit or 64-bit. On a 32-bit machine, there are 32 bits on the address bus that are used to transport the address of a memory location to the processor. So the maximum number of addresses can be 2^32. On a 64-bit architecture, the maximum number of addresses can be 2^64. Pointer size depends upon whether we are compiling for a 32-bit or a 64-bit architecture. Typically, we compile for 32-bit. So that's why it's 4 bytes.
Easy to understand and quite precise in flow of learning. Amazed.
Superb explanation. Programming and software engineering made simple.
For a linked list, by keeping two pointers, one for the head and another for the tail of the list, inserting an element at the end will also be O(1)...
Best lectures I could have seen..
Regretting that I didn't get such lessons in my engineering school :(
Wonderful series of tutorials..! Only 10% of the videos on YouTube make sense, and this is one of them. Good job mycodeschool. It would be good if you could cover some OOP and virtual functions.
This is honestly better than watching the MIT presentations. Thank you!
The cost of inserting an element at the end 10:24 could be a constant O(1) if you use a doubly linked list :)
Thank you, excellent explanation of cost and memory use.
The thing is... I get bored easily lol, but he actually made me interested and asking why things are the way they are.
Thanks mycodeschool, no one ever explained concept like this.
Inserting node at end of linked list would be constant time if we have tail pointer!
But why is there no tail pointer?
@@nurilha Tail pointers are optional and can be maintained the same way as a head pointer. He just didn't use it in the video.
These videos are really helpful. You are doing a great job.!!!!! Please keep making more such videos.
excellent explaining
really awesome... knowing things doesn't matter much for a teacher, but explaining them does... and mycodeschool did it...
thank u sir for such an amazing playlist
Wish you would make your videos in Java; nonetheless you are the best on YouTube.
Derek Banas does this in Java
True but his tutorials are not for beginners
A beginner shouldn't be on data structures and algorithms anyway. They still have to learn basic syntax.
The person who knows the syntax without knowing data structures is a beginner on data structures.
That's what I meant. Derek's videos are much better to watch after you learn the basics from others who teach slower.
Then, Derek's videos are going to be awesome, you will refresh your memory, practice and master it at a hell of a fast pace.
Are you sure? Because his videos are pretty slow - I normally turn it up to around 2X speed with how slow he goes sometimes.
When inserting an element at the end of a linked list, why do we need to traverse the whole list?
It can be done by changing the last pointer.
first you need to figure out which one is last - for that you traverse from the start :)
Hi @@mistersmith6752, can we not use a static variable inside the linked list class to store the last address?
@@monishkumardurairaj3038 I'm not really sure what you are asking, but you must understand that the linked list is already an established data structure; if you want to change something, you can implement a new class that does whatever you want. Most likely it will be ineffective, but if you are a great programmer maybe you will create something useful :) If a particular data structure is not working in some case, just use another.
@@mistersmith6752 what I am asking is: suppose we have a node struct with two members, one for storing the value and another one (a pointer) for storing the address of the next node.
Why can't we have a third member, a pointer which may be static, to store the last address of the list?
Since it is static, the pointer is created only once and can be used by all the instances.
eg:
struct node
{
    int value;
    node* next;
    static node* last;
};
@@mistersmith6752 I understood - what I asked is not possible in C, since a static variable cannot be declared inside a struct.
So this cannot be done...
Thanks for the help..
Note: but in C++ what I asked previously is possible, I guess.
So tremendously helpful. Thank you so much!
he has the best playlist on linked list
Pointers will always take the same number of bytes. Pointers store the address of a particular byte in memory. The address of data is the address of its starting byte. We look at the rest of the bytes in the data using arithmetic. The size of a pointer depends upon the total addressable bytes in memory, not upon what data we are pointing to. With 32 bits, we can have 2^32 possible addresses. contd ...
Awesome !! What a way of EXPLAINING THINGS .... +LEGIT
mycodeschool you're the best
Made ppt using your content xD. Thank you
this is so informative omg , thanks alot
amazing, I am preparing for Google's Kickstart.
btw I'm 15, a Python programmer and web developer.
You are champ man ! Nicely explained
Adding an element to the end of a linked list doesn't need to be O(n) if you store and maintain the position of the last element, like you do with your head element, and like you do on the array.
It could be O(1).
you mean if you stored the position of the last node as another pointer variable in the struct definition? that could work but feels complicated
not all heroes wear capes! tysm
All these videos on data structures are very useful. Thanks a lot. But can you upload videos of OOP using JAVA too please.
It's all about allocation. Huge data set or small data set.
Really very good explanation. But there is a small mistake in this video and the previous videos.
You mentioned that inserting an element in the array will have a cost of O(n); that's not quite right, because it's Big-Theta(n), NOT Big-Oh(n). This is because you may insert an element at an index and do the shifting stuff for a number of iterations less than the total length of the array. Let me show an example:
Assume that you have an array of 10 elements, and you only use the first 3 elements. Then you need to add an element at the first index, so you will shift the elements three times only (NOT 10 times).
If you like to use the Big-Oh lower bound, then it will be O(m), & (m
We talk about worst case right???
Awesome course thanks for sharing !
Why is the time complexity for inserting an element at the end of a linked list O(n)?
I think it should be O(1), because we just set the inserted element's next to NULL and then store the new element's address in the last element!!
And thanks for these awesome videos
Muhammed Hussein Because you need to start from the address of the head node to arrive subsequently at the last node. We have no knowledge of the last node's address without first being directed by the head. What you suggest is like trying to arrive at a destination on a map while having no awareness of where you are. How can you arrive at your destination if you have no point of reference?
thanks for this awesome explanation :)
awesome explanation
your channel is great..keep it up!
Great videos! Keep up the good work and Thank you!
best lecture.... thanks
Correction: time complexity for inserting at the end of a linked list is O(1) if you have a tail pointer pointing to the last node of the list.
Example (in Python):
class Node:
    def __init__(self, d):
        self.data = d
        self.next = None

class LinkedList:
    def __init__(self, d):
        self.head = Node(d)    # Head stays the same
        self.tail = self.head  # Tail points to ending Node

    def append(self, d):
        self.tail.next = Node(d)    # NEW NODE
        self.tail = self.tail.next
I think your concept works on a doubly linked list
Nice lecture
crystal clear!
Thank you sir.
awesome!! really well explained, but what about dynamic arrays or vectors in C++?
really great video!!!
Thanks sir well done job
Your explanation is awesome man !
Why did u stop posting videos?
Why is adding a node to the end of the list O(n)? If you maintain a tail you can add nodes to the list in constant time no different than added to the head. I have seen many implementations with both head and tail. Is a linked list with a tail not a valid implementation?
You are absolutely right :)
good explanation ...
I don't get the memory requirements part. How does a linked list consume a lot less memory if it uses more memory because of its pointers?
Because there is no unused memory in a linked list.
thank you very much! perfectly described!
The way you are teaching is superb. Do you have a PDF version, or notes in any other format, for quick revision? Thanks...
Can someone tell me whether these videos are helpful for GATE (data structures) preparation or not?
yes, it's helpful
Hey this was a great video! Since I use Python, which uses dynamic arrays(arrayLists) by default, could you explain how exactly these work and the time/space complexity for these? Thanks
Thanks, that makes a lot of sense...also thanks for the prompt response!
:)
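Since the question above is about dynamic arrays: Python's list and C++'s std::vector use the same underlying strategy, so here is a C++ sketch (the function name `countReallocations` is mine). The container over-allocates geometrically, so only a handful of appends trigger a copy into a bigger block; that is why append/push_back is amortized O(1), with a rare O(n) resize.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Append n elements and count how often the vector had to reallocate
// (i.e. its capacity changed). Geometric growth keeps this count near
// log(n), which is why push_back is amortized O(1).
int countReallocations(int n) {
    std::vector<int> v;
    int reallocs = 0;
    std::size_t cap = v.capacity();
    for (int k = 0; k < n; ++k) {
        v.push_back(k);
        if (v.capacity() != cap) {  // capacity jumped: elements were copied
            cap = v.capacity();
            ++reallocs;
        }
    }
    return reallocs;
}
```

For 1000 appends this typically reports around 10-20 reallocations rather than 1000; Python's list shows the same pattern of occasional over-allocation.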
Thank you very much for all the videos!!
I have a question: in the linked list, shouldn't inserting an element at the end or at the ith position be O(1), since only pointers need to be changed? Only accessing elements at the end or at the ith position would be O(n)?
I did not understand the part about a linked list taking less memory compared to an array..
please explain!!
In the case of complex, large data, the fact that the linked list uses exactly as many memory blocks as needed (unlike the array) dominates the fact that each linked list block contains an extra variable (of type node*) of size 4 bytes, which is quite small in comparison to the size of the complex data part.
Instead of dynamic memory allocation, why are arrays of fixed size?
Your videos are awesome! I learnt a lot from them...
I have a question about 6:54 in your video, the memory requirements section...
Shouldn't the data and the pointer variable be the same number of bytes? In your example, the data is 16 bytes but the pointer variable is only 4 bytes. Please clarify, thanks.
The size of a pointer variable will always be the same for all data types. The data type that we specify for a pointer denotes the data type of the variable whose address will be stored in the pointer variable.
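A quick C++ check of that claim (the struct `Big` is a made-up example, not from the video): pointers to tiny and huge types have the same size, because a pointer only stores an address and its size depends on the platform's address width, not on the pointed-to type.

```cpp
#include <cassert>
#include <cstddef>

struct Big { char payload[1000]; };  // a "large" 1000-byte type

// Pointer sizes depend on the address width of the platform,
// never on the size of the type being pointed to.
const std::size_t charPtrSize   = sizeof(char*);    // points to 1 byte
const std::size_t doublePtrSize = sizeof(double*);  // points to 8 bytes
const std::size_t bigPtrSize    = sizeof(Big*);     // points to 1000 bytes
```

On a 32-bit build all three are 4 bytes (as in the video); on a 64-bit build all three are 8 - but they are always equal to each other.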
Thank you for the great explanation!! I had a doubt on linked lists: when we insert data at the end of a linked list, or in between, the time consumption is proportional to O(n). I thought that however we add it, we just have to change the address of the last one - O(1). But we have to iterate from the first node to find the tail node's address, so it comes to O(n). Am I right?
hi, what about self-referential pointers in a structure? How much memory would they take?
Do the pointers of a linked list need extra memory to store them?
Can you make a video on how to calculate the address of a 3-D array?
I thought arrays had O(n) and not O(1)? because isn't big-O concerned with worst case scenarios where we may have to iterate through the entire array before finding our element, and would thus need n-time for an array of size n? I thought this because I understood hash tables to be special because they had a faster run-time than arrays, having a general case of O(1) run-time. Am I mistaken somewhere in there? Thanks for the amazing lecture!
oh wait, it would be constant time if we knew ahead of time where in the array (which index) an element is found. I think my question is still valid in terms of finding an element, but is that not what big-O is concerned with? is big-O concerned only with look up for a particular known element?
*EDIT* gosh, I missed the part that said "Cost of accessing an element". this all makes sense now.
I now know why google hires bunches of Indian CS engineers...
Nice explanation
Kindly, anyone, clarify my doubts..
In a linked list, why traverse the entire list in order to add a new node at the end?
Instead, just maintain a static variable inside the linked list class that will always hold the end address of the list.
In addition, why have the head variable store the starting address? Can we not just access the starting address of the list, which is going to be the same?
Even if you want to add a node at the beginning of the list, you just have to change the address of the list to the new node created at the beginning.
Why waste a 4-byte head variable for storing the starting address?
thank you so much for your greattttttt work!!!!!
If you have a tail reference on the LinkedList, would insertion at the end be O(1)??
yes, right :)
shouldn't linked lists consume more memory than arrays? in each node, there will be memory allocated for data AND the link, whereas in arrays, there will only be memory allocated for data.
+Sherman Hung
Rightly said - he compares 7 elements on an array vs 4 elements on a linked list.
- In an array you will always have allocated but unused memory
- Let's assume that, in an array, we store big data of 16 bytes / element
- if we have 10 empty elements in the array
- 16*10 = 160 unused bytes in memory
- In linked lists this will not be the case
- We will have 20 bytes / element
- the address of the next element is stored in a pointer of 4 bytes
- thus 16 bytes + 4 bytes
- But we will never have unused allocated memory. So it's an overall win for linked lists
Yeah, but why would you ever have empty elements in a dynamic array?
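To that question: a dynamic array has spare slots by design. A tiny C++ sketch (the function name `spareSlots` is mine) shows that after a run of appends, capacity() - the slots allocated - usually exceeds size() - the slots in use; that gap is exactly the "empty elements" the comment above describes, and it exists so that most appends don't have to reallocate.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Returns the number of allocated-but-unused slots after n appends.
// A dynamic array deliberately over-allocates so most appends are cheap;
// the leftover slots are the "empty elements".
std::size_t spareSlots(int n) {
    std::vector<int> v;
    for (int k = 0; k < n; ++k)
        v.push_back(k);
    return v.capacity() - v.size();  // allocated slots minus used slots
}
```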
does a pointer variable require 4 bytes here..?? I heard from some sources that any pointer variable, irrespective of the data type, consumes 8 bytes.. isn't this correct..???
I think deleting at the end of an array is O(1), not O(n), so the deletion times are not the same
Isn’t deleting the last element in the array always constant time O(1)? We never have to resize the array or shift any elements
yes, I think it is O(1)
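A sketch of why the two deletions differ (the function name `removeAt` is mine): removing the last element of an array moves nothing, while removing the first forces every remaining element to shift left.

```cpp
#include <cassert>

// Remove the element at index i from arr[0..n), shifting later
// elements left over the gap. Returns the number of element moves:
// 0 when i is the last index (O(1)), n-1 when i is 0 (O(n)).
int removeAt(int arr[], int& n, int i) {
    int moves = 0;
    for (int k = i; k < n - 1; ++k) {  // close the gap from the left
        arr[k] = arr[k + 1];
        ++moves;
    }
    --n;
    return moves;
}
```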
well explained
Thanks very good video!
Very nice!
one doubt. If we store the start and end nodes in separate pointers, then adding a new element at the end should take O(1), shouldn't it?
To reach the end node in a linked list you have to start from the head and go all the way to the tail (traverse the list), so the time complexity is O(n)
Please come back and add more
can you please post lessons on heaps and multiway trees..?
Why is the complexity O(n) for deleting the element at the end of an array (when it is full)?
I have one doubt... why is the time complexity of adding an element at the ith position in a linked list big-O(n)? It should be big-O(i)
We are taking the worst case