In this video, let's take a deeper look at 5 Node.js Backend Concepts which you need to master!
Drop a comment and let us know if you watched this video till the end!
⚛️ Do you want to become a full-stack web developer? Check out codedamn's full-stack learning path: cdm.sh/fullstack
⚛️ Get started with web3 and decentralized technologies today: cdm.sh/web3
Node.js vs Spring, please!
I have seen many tutorial videos, but the best part of this channel is that you not only describe the language-specific concepts, you also explain the theory behind them, like concurrency vs. parallelism and deep clone vs. shallow clone. I really enjoy watching your videos.
Given your subject-matter expertise with Node, I would love to see a video that shows how to execute backend processes in the most performant way, taking advantage of process forking or worker threads. Often we see shots fired at Node, with other devs comparing it to languages like Go and pointing out how much faster those languages are. What I often see is that when devs run these benchmarks, they are not taking full advantage of Node's multiprocess/threading capabilities. Can we get a deep-dive video on this, perhaps also demonstrating just how fast you can make Node?
With clustering you can scale your application up to the number of cores in the system; the Nest.js framework has built-in capabilities for this.
Hats off to your courses, brother! I am in high school and I am learning what you are teaching. With all the knowledge I gain, I am going to make a website for my school's IT club.
These "advanced" videos by other creators are usually just intermediate material. This is some esoteric stuff. It pays to keep an eye on this channel.
By far the best video about Node.js concepts I've seen. Thank you!
I didn't realize it was 40 minutes long until the end. Well done!
First of all, thanks for your videos! They are really great!
I wanted to add some clarification about Lambda cold starts. For Node.js, cold starts of a few seconds can only happen if you are not using your tools correctly (bundling and minimizing the bundle size), which is quite easy nowadays with IaC frameworks. If you are using some huge libraries for ML or similar workloads, then yes, it might be an issue. In the normal case, even if you use multiple complex libraries, the bundle size for Lambda will be quite small and the cold start should be around 500-600 ms.
In real-life production workloads, cold starts don't really matter that much, since a single container can live for 2-3 hours and handle tens of thousands of requests in that time. So P99 latency won't be affected by them.
To put it more simply and concisely: Node.js is a C++ application that embeds the V8 engine. You can also use V8 directly from C++. So Node.js basically reads your code and hands it to the C++ layer, which runs it through the V8 library.
Love this concept series... more please! You are good at explaining complicated things. Thank you!
Extraordinary content and perfect teaching.
Watched it till the end. From every one of your videos I learn something new and try to implement it in my projects! Thanks a lot!
I watched this till the end. Thanks a lot Mehul for these videos.
Awesome video. Pretty much required for any experienced Node.js dev.
I watched this video till the end. 👍
Watched till the end 🔚
Regarding the Cloudflare Workers, have you looked at their Durable Objects? Would love for you to do a video centered on them if you haven't already.
Brilliant explanation!
So Node.js uses a pool of worker threads for two particular reasons. The first is to perform CPU-intensive tasks (like crypto). The second is to handle some types of asynchronous I/O, like filesystem I/O or DNS queries, because back when libuv was implemented some OSes did not provide asynchronous I/O interfaces for those. So the worker thread in this case really does block on the I/O task, and a callback is registered to be executed on the main thread. Am I right?
Thanks for making such awesome videos!
Thanks ❤️
I watched this video till the end
Min 21: how do you tell the copied object apart? Can you notice the default value?
So if I have four cores on my system, can I only have four terminals running a Node server?
This is wonderful
I watched this video till the end
I watched till the end
I have a question: when working with Node on a machine where the CPU is multicore but the cores are not all the same (e.g. some are low-end cores, like Intel 12th-gen efficiency cores), will we lose performance if the scheduler assigns the Node process to a low-end core when we spin it up? Is there any way to tell Node.js which core to use?
thanks
I watched this whole damn video😼✌️
ThankX 👍
Boss, trust me, you should teach operating systems. I have never understood operating system concepts better. You are way better than any PhD professor that ever taught me. Trust me, make an Operating Systems crash course (a practical one, unlike the theory ones we study); it will be a boom. I have always hated operating systems, but now I have the curiosity to learn more about them.
Thanks a lot
Mehul Bhaiya
Cheers!!
Let's say I have just a single-core system, and it is running Node.js. Whether or not we use async operations (fetch, I/O), those operations ultimately need their own CPU cycles to finish, and the OS scheduler will take processing away from Node.js while that fetch/I/O work executes. So the advantage Node gives us in this case is that we can accept more requests; it's just that they get added to the queue. The fetch/I/O work will take the same number of CPU cycles whether I use async or not; it's just that with async operations, Node handles things so that I can still accept requests from any third-party user of my backend server. The time for the server to respond will be longer, because their I/O calls are far from being processed, but technically they are being served? Am I right about this? I'm not too sure about the last part of what I said. Can anyone help clarify?
If you have ever worked with an MCU, the fundamental idea is that even with a single-core processor, by delegating I/O and network calls you pretty much free up your core, since the MCU has dedicated computation units for such workloads.
@@rishabhgusai96 By MCU I think you mean multi-controller unit? So are you basically saying the MCU handles a lot of the I/O and network stuff and thus frees up the CPU even more (so it can focus on doing other things)?
@@riebeck1986 Sorry, I forgot to mention: by MCU I meant microcontrollers. Think of it like the SoC in your smartphone; it has a dedicated chip for pretty much everything.
The CPU is just one part of it.
For example, decoding a 4K video on just the CPU gives sluggish playback; with hardware decoding (hardware acceleration) we actually use another dedicated chip (an ASIC) to decode the file while the CPU is basically resting.
Another example is cryptography: most cryptographic algorithms are powered by a dedicated crypto ASIC, which is why the CPU doesn't have to worry about encrypting/decrypting websites all the time.
Just Google SoC diagrams and you will get a sense of how far computation has come.
Back when I was a kid, while buying a new smartphone I used to look at the spec sheet stating it supports the MP4, MKV, and FLV video formats, and I used to wonder why they had to mention file formats. I mean, we could just install software that knows how to play those files and we'd be good, right?
Umm, nah: those are the formats that can be decoded effectively by the dedicated hardware. Rendering them in software (on the CPU) causes high CPU usage, and the battery dies much sooner.
@@rishabhgusai96 Thanks !!
awesome
I think it would have been better if they had named it deepClone instead of structuredClone; I mean, it's shorter and much more readable. But at least people don't need to use another library for deep cloning anymore.
360p hmmm
too early i guess
I literally skipped this video 😅
Why is the video quality so low in this one? But no issue, we only want the knowledge.
Because you are too early; it takes YouTube time to process the high-quality versions.
I watched this video till the end