Distributed Cache System Design - Part II | Google Interview Question

  • Published 25 Nov 2024

Comments • 148

  • @ThinkSoftware  5 years ago +1

    Thanks for watching this video. Let me know in the comments below if you can solve the challenge question or would like me to make a video on it. Also let me know if you liked this video, and mention any topics you would like me to cover in future videos. Thanks.

    • @swatigupta6881  5 years ago +1

      Can you explain in detail how the topology manager works?

    • @ThinkSoftware  5 years ago

      Thanks for the comment. Yes, I will make a separate video on that if there is enough interest.

    • @carlaludgate6597  4 years ago

      @@ThinkSoftware Did you end up making this video? I would love to see it

    • @ThinkSoftware  4 years ago +2

      Didn't find enough interest :(

    • @sakshipandeywishicudgetit  4 years ago

      @@ThinkSoftware It would be great if you get some time to make it! Thanks already :)

  • @mc3newsmcocconcierge504  2 years ago +2

    Your channel is quite literally an angel for system design. No one is covering the granular details like this!

  • @palspal2329  2 years ago +2

    Best explanation on the internet; I haven't found any other video that goes into such depth.

  • @karanbhatia2834  1 year ago

    I’ve been watching a lot of your videos recently for my upcoming system design interview and I really appreciate the effort you’ve put in your explanations. Your videos offer one of the most detailed, justified and elegant explanations I’ve found on the internet.
    Thanks for this! Keep up the amazing work!

  • @hrishidypim  2 years ago +2

    Excellent... underrated YouTuber.

  • @gautamtyagi8846  3 years ago

    Your videos keep deepening my understanding of these designs. Thanks a lot; you are doing great work.

    • @ThinkSoftware  3 years ago +1

      Many thanks for the nice words 😊

  • @KemoLeno  5 years ago +2

    Hi, thanks for the video. It was well explained and designed. Just a couple of notes for improvement:
    1- If you can get to the point right away at the beginning of the video, it will be better. No one wants to hear the usual "Please like..." etc. It is usually put at the end, especially since by then people will have had a chance to listen to your lecture and can recommend it on their own, assuming you did a really good job.
    2- The second comment is about the "pause here" black screen. You already told us to pause, so we will stop the video ourselves if we are keen on pausing. Putting up the black screen adds no benefit; on the contrary, it abruptly cuts the video, which makes it hard for me to collect my thoughts again.
    Thanks again and best of luck :)

    • @ThinkSoftware  5 years ago +2

      Thanks for the feedback. I will incorporate this in my future videos.

  • @leolee2743  2 years ago

    This may not be a perfect video for attracting views, limited as it is by the audience group, but we do appreciate what you did :)

  • @wanderlovers6181  3 years ago

    Thanks for the videos! In the first video, you discussed the different types of cache and database configurations: Read-Through, Write-Through, Write-Behind, and Refresh-Ahead. It would have been nice to have some more detail on when each should be picked, or have it be clear here if you'd picked one.

  • @arnavjoshi-v2q  1 year ago +1

    Excellent video

  • @danielpark4204  4 years ago +1

    Simply amazing.

  • @ibrahimshaikh3642  4 years ago

    Loved your video; answering your challenge question:
    We can use bucket-level locks in the HashMap instead of a global lock on the HashTable. This will allow multiple threads to operate simultaneously.

    • @ThinkSoftware  4 years ago

      Thanks for the comment 🙂. There is also a doubly linked list, so we need to make sure access to it does not become a bottleneck either.
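The trade-off in this exchange — bucket-level (striped) locks on the hash table while the recency list still needs its own synchronization — can be sketched as follows. This is a minimal illustration with hypothetical names, not the video's design; Java's ConcurrentHashMap does the bucket striping natively, but the single list lock below shows why the linked list remains the bottleneck:

```python
import threading

class StripedLRUCache:
    """LRU cache with striped locks on the hash buckets.

    Bucket lookups proceed in parallel across stripes, but every
    get/put still touches the shared recency list, so the single
    list_lock remains a point of contention -- the bottleneck the
    reply above points at.
    """

    def __init__(self, capacity, stripes=16):
        self.capacity = capacity
        self.stripes = [dict() for _ in range(stripes)]
        self.stripe_locks = [threading.Lock() for _ in range(stripes)]
        self.recency = []                  # most recently used key at the end
        self.list_lock = threading.Lock()  # serializes ALL recency updates

    def _stripe(self, key):
        return hash(key) % len(self.stripes)

    def get(self, key):
        i = self._stripe(key)
        with self.stripe_locks[i]:         # only this stripe is blocked
            value = self.stripes[i].get(key)
        if value is not None:
            with self.list_lock:           # but every thread meets here
                self.recency.remove(key)
                self.recency.append(key)
        return value

    def put(self, key, value):
        i = self._stripe(key)
        with self.list_lock:
            if key in self.stripes[i]:     # unlocked peek; fine for a sketch
                self.recency.remove(key)
            elif len(self.recency) >= self.capacity:
                victim = self.recency.pop(0)   # evict least recently used
                j = self._stripe(victim)
                with self.stripe_locks[j]:
                    del self.stripes[j][victim]
            self.recency.append(key)
        with self.stripe_locks[i]:
            self.stripes[i][key] = value
```

Note that `get` never holds a stripe lock and the list lock at the same time, so the list-then-stripe nesting in `put` cannot deadlock against it.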

  • @deepakdude9515  4 years ago +1

    Nice video. Please cover those last questions in separate videos as well; very helpful indeed.

  • @HiteshKumar-md5yk  3 years ago

    Great explanation!!! Could you please continue the topic of distributed caches, and also make separate videos on how to make datastores distributed, similar to this series?

    • @HiteshKumar-md5yk  3 years ago

      And also I would really like to know about the challenge question

    • @ThinkSoftware  3 years ago

      Thanks for the comment 🙂

  • @shamstabrez2986  1 year ago

    It's been 3 years since you uploaded this video. I just want to ask whether these videos are still valid for 2023, because things are changing continuously and new technologies and techniques are available.

  • @hannnah689  3 years ago

    Thanks for the video and it is very helpful!

    • @ThinkSoftware  3 years ago

      Thanks for the feedback 🙂

  • @apoorvaranjan787  3 years ago

    Very useful, man. Keep it up. Thanks a lot.

  • @vamsihemadri  2 years ago

    Thank you for the informative video.

  • @padam_discussion  1 year ago +1

    very nice video

  • @Mohamed-uf5jh  3 years ago

    Thanks sir,
    Thanks for the video and it is very helpful

  • @arundhwajiiith  4 years ago +1

    Hi, it's a very nice effort!

  • @chitrabasukhare2998  4 years ago +3

    Hi,
    Can the solution to the challenge be implemented using a compare-and-swap non-blocking algorithm?
    Every time a read or write request comes in, the update to the linked list can happen in the following way:
    1) The value of each key is a pointer to a node, which holds references to the next node and the node behind it.
    2) During a read/write, when inserting the node at the front of the list, the compare-and-swap checks whether the head still references the same value it did when the thread started. If yes, update the node; otherwise read the head again. The "otherwise" case happens when another thread has been (or is) simultaneously updating the head. This is implemented in a while loop.
    3) Similarly for eviction, we can have a global counter. The value of this counter can be updated the same way as in point 2, and the tail can also be updated the same way.
    With this approach we are not using a lock, though there is the problem of starvation.
    Can you confirm if this approach is right, or, if it is incorrect, give some hints?

    • @ThinkSoftware  4 years ago

      Yes you are thinking in the right direction. What is the problem of starvation?

    • @chitrabasukhare2998  4 years ago +1

      @@ThinkSoftware In point 2, a thread loops until it updates the value; the loop is built on the compare-and-swap. It is possible that while other threads are concurrently updating the value, a given thread never gets a chance to perform its update (similar to the bounded-waiting problem in OS), and this would lead to starvation of that thread.
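The retry loop being discussed, and its starvation risk, can be illustrated with a small sketch. Python has no hardware compare-and-swap, so the `AtomicRef` class below merely simulates one (its `compare_and_set` would be a single atomic instruction, e.g. Java's `AtomicReference.compareAndSet`); it illustrates the pattern from point 2, not a production lock-free structure:

```python
import threading

class Node:
    def __init__(self, key, nxt=None):
        self.key = key
        self.next = nxt

class AtomicRef:
    """Simulated atomic reference: compare_and_set stands in for a
    hardware CAS instruction; the internal lock only emulates atomicity."""
    def __init__(self, value=None):
        self._value = value
        self._lock = threading.Lock()

    def get(self):
        return self._value

    def compare_and_set(self, expected, new):
        with self._lock:
            if self._value is expected:
                self._value = new
                return True
            return False

head = AtomicRef()

def push_front(key):
    """Point 2 of the comment: read the head, build the new node, and
    CAS it in; if another thread changed the head meanwhile, loop and
    retry. Under constant contention a thread can keep losing this
    race -- that is the starvation the reply asks about."""
    while True:
        old = head.get()
        node = Node(key, old)
        if head.compare_and_set(old, node):
            return node
```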

  • @ankitg200  2 years ago

    Very nice and detailed video. Regarding the first question about maintaining synchronization: I think synchronization can be taken care of if writes happen on one node only; otherwise a distributed lock needs to be taken. But yes, it will reduce performance. Not sure how to handle the performance; maybe by using optimistic locks.

    • @ThinkSoftware  2 years ago

      Thanks for the comment

    • @ankitg200  2 years ago

      @@ThinkSoftware Please reply with the answer to the synchronization question.

    • @ThinkSoftware  2 years ago

      This is discussed in the course. Short answer: it is only taken care of within a single node.

    • @ankitg200  2 years ago

      @@ThinkSoftware thank you

  • @rahulsinghai3033  5 years ago +2

    Hi, can you discuss how to design a version control system like GitHub, and a build system like Jenkins or TeamCity?

    • @ThinkSoftware  5 years ago

      Thanks for the comment. Yes, I am adding this ask to my backlog of systems to design in the future.

  • @ThinkSoftware  5 years ago +2

    How do you think the topology manager will be implemented? I did give a hint about its internal implementation in the video.

    • @vaniyal  4 years ago

      Can we use Zookeeper as an abstraction layer between application layer and distributed cache layer?

    • @ThinkSoftware  4 years ago

      There are reasons why Zookeeper cannot be used as an abstraction layer for a distributed cache. I will let others chime in on this for now.

    • @ankuj2004  4 years ago

      Can this be implemented without a topology manager? We could send a request to one node of the cache, which itself routes it to the correct node. Could you elaborate on why ZK can't be used, and what the alternatives are?

    • @ThinkSoftware  4 years ago

      There still needs to be a component responsible for topology updates (the topology manager in our case). Are you suggesting using the same node for the routing layer and the cache? That is doable. The question above was about using ZK as an abstraction layer between the application layer and the cache layer; that we cannot do. Of course, ZK can be used to distribute the topology information among the nodes of the cache.

    • @ankuj2004  4 years ago

      Another way I am thinking of is keeping the topology info in a config file that could be read by the app server or cache client. This could be hosted in S3. The downside is that it has to be updated each time a new server is added.

  • @neilteng1735  3 years ago

    Love this video!

  • @carlaludgate6597  4 years ago +3

    Hi, great video. At time 5:00 you mention a design challenge and pose a good question:
    "What are the other ways we can access the data without using a global lock, which does not affect the performance of the cache?"
    You stated that you would record a video on this if there were enough comments about it. Did you eventually record this video? I am very interested in the answer and in watching you work through it.

    • @ThinkSoftware  4 years ago

      No, I didn't see much interest yet. You are the only one who has asked for it.

    • @carlaludgate6597  4 years ago

      @@ThinkSoftware I'm studying for my design interview and I'm wondering about the answer.
      Would the answer be: pessimistic lock on the key/value we're updating if we prioritise consistency, and optimistic lock if we prioritize availability? Or is there a better way to do this?

    • @ThinkSoftware  4 years ago

      There is a lock free implementation that you can perform.

    • @carlaludgate6597  4 years ago

      @@ThinkSoftware Oh okay, thanks. Are there any hints you can give about what that implementation might be? Any suggestions about what I could investigate to figure this part out or do you know another person who has recorded a video on it?

    • @carlaludgate6597  4 years ago

      @Think Software By the way, by optimistic lock I was referring to using version numbers or timestamps to keep track of the data during concurrent calls; not sure if that was clear.

  • @RichJRZ  4 years ago

    To solve the lock problem, perhaps I can propose something.
    Separate the read command into a read and an update, with the update part being moving the key to the head of the linked list. The read part can be done in parallel without any locks, since it does not change anything. The update part can be batch-processed as follows. Suppose n read requests come in at the same time, requiring n keys to be updated. These n keys are passed together into a batch-update function that takes any number of keys as parameters and figures out how to move them all to the head of the list by determining which nodes must be redirected as a result (this may be slow, O(n)). The redirection can be done in parallel.
    Write commands are composed of an insert and possibly an evict. The insert works similarly to the update with respect to the list, except there isn't an additional collection of adjacent nodes to be redirected, so it's faster. With respect to the table, the insert can be done in parallel. For the evict, the list handling only changes in that we may need to evict k nodes, which can be done by moving the tail pointer k times (this may be slow, O(k)) and returning the k keys to be deleted from the table, which can then be done in parallel.

    • @ThinkSoftware  4 years ago

      Thanks for the comment. There are many issues with this implementation. First of all, you need additional space for K (some arbitrary number of keys) that you will be keeping beyond the size limit. Then there is one writer thread (updating the linked list) and other reader threads; you still need to synchronize between that one writer thread and all N reader threads, which is still a hard problem. If you acquire the writer lock, no reader thread will be able to access the list. Similarly, if a huge number of reads keep coming in, your writer thread might starve and never get time to run. Also, how will the writer thread know the order in which it needs to place the nodes?

    • @RichJRZ  4 years ago

      @@ThinkSoftware Thank you for the response! I appreciate the constructive comments to help me think of potential issues I haven't thought of. So I'm thinking of the following. We can separate the cache get request into a read request into the hash table followed by a write request to the linked list, and the cache put request into a write request into the hash table followed by a write request to the linked list, which may result in a write request (eviction) back to the hash table.
      The hash table can handle read and write requests in parallel as long as they are not directed at the same entry, since hash table entries are independent (open addressing would change that, but I think the following still works). When writing to an entry, we place a lock on that entry, preventing any other reads and writes. When reading an entry, we place a lock on that entry, preventing any writes. We can then queue up rejected requests, based on locked entries, to be tried again later.
      The linked list doesn't have any read requests, only write requests. The scheme above doesn't work for it because list entries are not independent: moving a node affects its 2 neighbors and the head of the list. Thus the list has a single master write thread. All write requests that come to it are queued up until the master write thread is available, so yes, I think we need a buffer, although it should be much smaller than the hash table. The master write thread, when available, grabs the whole queue as a parameter, and the queue order tells it where to put the nodes. Once it computes the nodes to move, the actual movement can be done by worker write threads.

    • @ThinkSoftware  4 years ago

      Thanks for the comment. I don't fully understand the approach you are mentioning though.

  • @RS7-123  4 years ago

    Keep going. Nice video.

  • @yoyonevertheless7226  4 years ago

    Interested in the single/multiple-master and leaderless approaches.
    Also interested in a lock-free way to make the LRU cache thread-safe. CAS is the way I am thinking, but yes, starvation makes this problem more complicated.

  • @prafulrg  3 years ago

    Thanks for the session, it was really good.
    I have one doubt: let's say multiple read/write threads (requests) come in for a single key; how should that be handled?

  • @pcs432  4 years ago

    Quite an interesting topic; thank you for posting this video. I am trying to find out about cache queries and the different caching properties, like consistency (strong/eventual) and timeToIdle. If possible, can you please post a video on this? I am unable to find any good tutorials on these caching attributes.

    • @ThinkSoftware  4 years ago

      Thanks for the comment. I have added the request to my backlog.

  • @michaelhon7184  3 years ago

    In the video, we have read replicas we can use for 'get' requests. However, a 'get' in an LRU implementation can also update the head of the list, effectively becoming an update/write on the cache. If so, is there still a benefit to read vs. write replicas?

    • @ThinkSoftware  3 years ago

      Thanks for the question. There is no such thing as read/write replicas; there are only primary and secondary replicas. The only difference between them is that any write is initiated by the primary. As far as reads are concerned, all replicas take part in them. The LRU implementation seems simple in the case of a single node, but it is a bit tricky with multiple nodes (i.e., a primary and one or more secondary replicas). This is discussed in the course.

  • @akashjain4184  4 years ago

    @Think Software, can you consider covering the line items below in further videos:
    DDD (Domain Driven Design)
    Saga Pattern for Managing Distributed Transactions
    Different ways to structure Saga Co-ordination - Orchestration & Choreography
    Workflow Engine Based Service Orchestration
    Service Orchestration vs BPM
    Thanks

    • @ThinkSoftware  4 years ago +1

      Thanks for the comment 🙂. Noted.

  • @desifullpet8164  4 years ago

    nice video....

  • @GauravKawatrakir  3 years ago

    The timestamp for "Logical Components in a Distributed Cache System" mentioned in the description seems wrong. Please check.

    • @ThinkSoftware  3 years ago +1

      Thanks for the comment. Will check.

  • @ravindrabhatt  3 years ago

    Does Redis work this way? Where does it apply quorum logic to determine the master?

    • @ThinkSoftware  3 years ago

      Which quorum logic are you talking about? W + R > N? That one is not used to determine the master node; there are different algorithms for master selection. The quorum logic is for determining the consistency level.

  • @puneetjain9177  4 years ago

    @Think Software: I am still not able to figure out how to implement the synchronization mechanism for the doubly linked list under a huge number of read and write requests. Some pointers from you would be really helpful.

  • @gajendrathakur4833  3 years ago

    Thanks

  • @akashjain4184  4 years ago

    One doubt related to the LRU cache implementation: when we say we are using a hashtable, the worst-case complexity to map, say, key A to its pointer in the DLL will be O(n) in case of collisions, right?

    • @ThinkSoftware  4 years ago +1

      Hashmap lookup is amortized O(1). The worst-case complexity depends on the internal implementation of the hashmap: it is O(n) if a bucket stores its nodes in a linked list, and O(log n) if the bucket uses a BST. Of course, in that case it would be better to re-hash the hashmap. Once you have the node from the hashmap, accessing it in the DLL and moving it to the head is always O(1).

    • @akashjain4184  4 years ago

      @@ThinkSoftware Agreed. In the overall scheme of things, the worst-case time complexity for insertValue, getValue, or invalidateKey is O(log n) + O(1) ~ O(log n) with a BST-based bucket implementation, or O(n) + O(1) ~ O(n) with a linked-list-based implementation.
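The O(1) claims in this thread — amortized O(1) hashmap lookup plus O(1) unlink/move-to-head on the doubly linked list — are the standard single-node LRU construction. A minimal single-threaded sketch (no synchronization, so it deliberately sidesteps the locking challenge discussed elsewhere in the comments):

```python
class DNode:
    __slots__ = ("key", "value", "prev", "next")
    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    """dict gives amortized O(1) key -> node lookup; the doubly linked
    list gives O(1) move-to-head and O(1) eviction at the tail."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.table = {}
        self.head = DNode()   # sentinel: most recently used side
        self.tail = DNode()   # sentinel: least recently used side
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        node.next, node.prev = self.head.next, self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        node = self.table.get(key)
        if node is None:
            return None
        self._unlink(node)
        self._push_front(node)   # note: a read is also a list write
        return node.value

    def put(self, key, value):
        node = self.table.get(key)
        if node is not None:
            node.value = value
            self._unlink(node)
        else:
            if len(self.table) >= self.capacity:
                lru = self.tail.prev       # evict from the tail
                self._unlink(lru)
                del self.table[lru.key]
            node = DNode(key, value)
            self.table[key] = node
        self._push_front(node)
```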

  • @monikasikri6097  3 years ago

    Please explain multi-master and leaderless as well.

    • @ThinkSoftware  3 years ago

      Thanks for the comment. Will consider them.

  • @GauravSharma-wb9se  2 years ago

    At 23:00-23:10 you said that after writing data to any 2 of the replicas, if a read goes to any pair of replicas then at least 1 server will have the data and the read will be successful. My doubt is: will the read go to just 1 replica, or will it search both replicas for the data? And if it searches 2 replicas, won't there be more delay for every lookup?

  • @khuranaishaan5  4 years ago +1

    What do you mean by logical replicas? A little perplexed there.

    • @ThinkSoftware  4 years ago +1

      This is actually explaining that even if you start a system with 1000 partitions and 3000 replicas (3 replicas per partition), these won't take any compute/storage resources while they are empty, so many of them can be mapped to a single physical node/machine. The replicas are logical constructs; physically they might be mapped to just 3 machines in the beginning, when most of them are empty. As their size increases, we can add more machines and move some replicas to those new machines.

    • @sagartyagi2450  3 years ago

      @@ThinkSoftware Having the replicas on the same machine defeats the whole purpose of replicas.

    • @ThinkSoftware  3 years ago

      All the replicas of a partition will be on separate machines, but that does not mean the same machine cannot host replicas of other partitions.

  • @mohamedshouman1813  4 years ago

    Very nice videos. Seeing a lot of interest in a lock-free implementation. The only alternatives I am seeing are things like ConcurrentLinkedHashMap. Is lock-free applicable to concurrent reads and writes? Would it use buffers and probabilistic approaches?

  • @taishancorp7720  3 years ago

    Around 9:00, you suggested increasing replicas as a solution independent of option 2 (more partitions with the same cache size). How will that work? How can you increase replicas when you need to lock across replicas?

    • @ThinkSoftware  3 years ago

      What lock across replicas are you talking about?

    • @taishancorp7720  3 years ago

      I was wondering how increasing replicas as an independent solution will help. As an independent solution, it means we don't partition the data, right? That means all the replicas need to have the same data.

    • @ThinkSoftware  3 years ago

      For a read-heavy system, increasing replicas helps because read queries can then be distributed among them.

    • @taishancorp7720  3 years ago

      @@ThinkSoftware Thanks. If a read goes to 2 servers but a quorum is not achieved at that time due to some delay, does the app server keep retrying until the conflict is resolved? Or does the app server try to manually trigger conflict resolution on the read replicas?

  • @manu-qf4oh  3 years ago

    Hello sir,
    Regarding the formula R + W > N: for 3 replicas I understood it, but for a large number of replicas I am not getting how to choose the values of R and W.
    Could you please explain the logic for choosing these values for n replicas?
    Thanks

    • @ThinkSoftware  3 years ago

      It is a topic for another video :)
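For what it's worth, the rule generalizes directly: pick any W and R with W + R > N, which guarantees every read set overlaps every write set in at least one replica; larger W slows writes, larger R slows reads. A small illustrative helper (the function name is my own, not from the video):

```python
def quorum_overlaps(r, w, n):
    """True if every R-replica read set must intersect every
    W-replica write set out of N replicas, i.e. R + W > N."""
    return r + w > n

# N = 3: the classic majority quorums from the video's example.
assert quorum_overlaps(2, 2, 3)

# N = 5: several valid trade-offs exist.
assert quorum_overlaps(3, 3, 5)       # balanced reads and writes
assert quorum_overlaps(1, 5, 5)       # fast reads, slow writes
assert not quorum_overlaps(2, 2, 5)   # stale reads become possible
```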

  • @mehranbehbahani3050  2 years ago

    As for the challenge, is using a thread-safe queue to process the writes a bad idea? if yes, why?

    • @ThinkSoftware  2 years ago

      What does a thread-safe queue mean here? A queue that is protected by a lock will incur significant performance overhead due to the lock.

    • @mehranbehbahani3050  2 years ago

      @@ThinkSoftware Like a ConcurrentLinkedQueue in Java. Threads add to the queue without getting blocked, and the writes in the queue are constantly processed and removed from the other end. As a result, at any given time we have a bunch of writes waiting in the queue, and only one of them at a time is processed and written to the DB.
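The pattern described — many producers enqueue, one consumer drains and applies the writes in order — can be sketched with Python's thread-safe `queue.Queue` standing in for Java's ConcurrentLinkedQueue (note the caveat from the reply: `queue.Queue` is lock-based, whereas ConcurrentLinkedQueue is lock-free; this only illustrates the single-writer shape):

```python
import queue
import threading

write_queue = queue.Queue()
applied = []   # stands in for the DB / cache state

def writer_loop():
    """Single consumer: writes are applied one at a time, in arrival
    order, so the structure being written needs no further locking."""
    while True:
        item = write_queue.get()
        if item is None:        # sentinel value: shut down
            break
        applied.append(item)
        write_queue.task_done()

writer = threading.Thread(target=writer_loop)
writer.start()

# Any number of producer threads can enqueue concurrently;
# here a single loop stands in for them.
for i in range(5):
    write_queue.put(("key%d" % i, i))

write_queue.put(None)   # stop the writer
writer.join()
```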

  • @HarishAmarnath  3 years ago

    Regarding the question on synchronizing read and write operations, my answer: we will always fetch from the cache by key, so why not simply use a hashmap? Keys can be the cache keys, and values can be the value + timestamp + access frequency. Based on this we can simply use a ConcurrentHashMap, and the cache can run a daemon thread to evict all applicable entries based on the eviction policy. One optimization would be to allocate 20% more capacity for the ConcurrentHashMap and have the daemon thread start evicting once the size reaches 100% of the nominal capacity (so the actual capacity is 120% of the specified size). The idea is that we will have enough time to evict entries before the map reaches 120%.
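A sketch of the scheme above, using a plain dict plus a lock in place of Java's ConcurrentHashMap (the class name, the timestamp-based policy, and the 120% slack threshold follow the commenter's proposal, not the video):

```python
import threading
import time

class DaemonEvictedCache:
    """Commenter's scheme: store value + last-access timestamp, allow
    20% slack over the nominal capacity, and let a daemon thread evict
    the oldest entries once the nominal capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity                 # nominal size
        self.hard_limit = int(capacity * 1.2)    # the 120% ceiling
        self.table = {}                          # key -> (value, last_access)
        self.lock = threading.Lock()             # stand-in for ConcurrentHashMap

    def put(self, key, value):
        with self.lock:
            self.table[key] = (value, time.monotonic())

    def get(self, key):
        with self.lock:
            entry = self.table.get(key)
            if entry is None:
                return None
            self.table[key] = (entry[0], time.monotonic())  # refresh access time
            return entry[0]

    def evict_once(self):
        """One pass of the daemon: drop the oldest entries down to capacity."""
        with self.lock:
            excess = len(self.table) - self.capacity
            if excess <= 0:
                return
            oldest = sorted(self.table, key=lambda k: self.table[k][1])
            for k in oldest[:excess]:
                del self.table[k]

    def start_daemon(self, interval=0.05):
        """Background eviction thread, as in the comment."""
        def loop():
            while True:
                time.sleep(interval)
                self.evict_once()
        threading.Thread(target=loop, daemon=True).start()
```

One caveat the sketch makes visible: unlike the strict LRU in the video, this design lets the map temporarily exceed the nominal capacity between daemon passes, trading exactness for fewer synchronization points.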

  • @saupausau  4 years ago

    Nice video... Had a question related to primary and secondary: are they within a partition or across partitions? I would guess they are across different partitions, in case a partition goes down.

    • @ThinkSoftware  4 years ago

      Thanks for the comment 🙂. Do you understand what a partition means?

    • @saupausau  4 years ago

      @@ThinkSoftware I think of a partition as a region within one physical node. I guess that's what you meant in your video too? Once a node goes down, all partitions within the node are unavailable? Thanks again.

    • @ThinkSoftware  4 years ago

      @@saupausau That is not what I meant by a partition, and it is not what a partition is. You should look up what a data partition means, or you can find it in my course. You are confusing a partition with an availability zone.

  • @ShivamKumar-xl2zp  4 years ago

    Hi, thanks for the nice content.
    Please make one on partitions and how a partition gets further divided and merged.

    • @ThinkSoftware  4 years ago

      Thanks for the comment. I have noted this request.

  • @GauravKawatrakir  3 years ago

    It would be better to have an article on this, for better understanding.

    • @ThinkSoftware  3 years ago

      You can find more details in the course.

  • @naveens5809  4 years ago +1

    Hi, please make a video on the global locking issue.

    • @ThinkSoftware  4 years ago

      Please elaborate on what you mean by global locking here... I will put this in my list of videos to make.

  • @sumitdesai5584  1 year ago

    Concurrent hashmap