Tensor Processing Units: History and hardware
- Published Sep 17, 2024
- In this episode of AI Adventures, Yufeng Guo goes through the logistics and history of TPUs (Tensor Processing Units) and how they differ from CPUs and GPUs.
In-Datacenter Performance Analysis of a Tensor Processing Unit → goo.gle/319B2DJ
Check out the rest of the Cloud AI Adventures playlist → goo.gl/UC5usG
Subscribe to get all the episodes as they come out → goo.gl/S0AS51
Product: TensorFlow; fullname: Yufeng Guo;
#AIAdventures - Science & Technology
Excellent explanation of the TPU. The example with the letters really made the difference between CPU, GPU & TPU stick in my mind.
There is a deep bass that can be heard with nice headphones. AC?
Here because of Pixel 6 announcement. 😂
Me 2!
very awesome explanation
Great analogy of CPU, GPU and TPU =)
yeah when I heard the analogy I was like "aaah"
very nice explanation for beginners like me
Glad to hear that
thank you :) but
where's the link plz?
Thanks for the video!
TPU? More like "Totally great information for you." Thanks for sharing!
Please explain this technique of quantization where Google mapped 32 bits to 8 bits in TPU v1.
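The quantization the comment asks about generally means mapping 32-bit floating-point values onto 8-bit integers using a scale and zero point. As a minimal sketch (not Google's exact TPU v1 scheme, which the paper linked above describes), affine quantization looks like this:

```python
import numpy as np

def quantize_uint8(x):
    """Affine quantization: map a float32 array onto the uint8 range 0..255.

    scale and zero_point are chosen so x.min() maps near 0 and x.max() near 255.
    """
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / 255.0 or 1.0  # guard against constant input
    zero_point = round(-x_min / scale)
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximation of the original float32 values."""
    return (q.astype(np.float32) - zero_point) * scale

weights = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale, zp = quantize_uint8(weights)
approx = dequantize(q, scale, zp)  # close to weights, within about one scale step
```

The payoff is that 8-bit integer multiplies are far cheaper in silicon than 32-bit float multiplies, at the cost of a small rounding error per weight.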
nice explanation, thanks
So is TPU replacement for only GPU or it replacement for both CPU and GPU?
I think a TPU is more like a piece of hardware geared towards certain applications such as deep learning. As there should still be a need for both general computation and computer graphics, I think CPUs and GPUs should be here to stay.
TPUs are geared toward neural network machine learning.
I use Google's TPU cloud computing to process multimodal AI image generation.
Thanks!
Amazing! Which other tasks can be a target for a new specific processor?
I saw this last year and came back again for the Pixel 6
any higher precision?
Who's watching here after Google announced they will be using a custom-designed Tensor SoC?
Why are there weird sub-bass / low notes in this video?
My Pixel 6😍
Which do you think is the best for TensorFlow training models: GPU or TPU?
Well, you can try this yourself. Use a Colab notebook, where you can select GPUs like the T4, V100, or A100, or a TPU. I tried this experiment (but with torch, not TF, and for inference only) and got pretty disappointing results. The TPUs have only 16 GB of memory and were slower than the slowest GPU. Maybe TPUs use fewer watts, maybe Colab instances have a suboptimal config, maybe torch performance is bad; anyway, an interesting experience :)
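The experiment described above boils down to a warm-up pass followed by timing repeated runs. A minimal CPU-only sketch of that benchmark loop (using a NumPy matmul as a stand-in for one inference step, since accelerator access varies by Colab runtime):

```python
import time
import numpy as np

def time_matmul(n=512, repeats=5):
    """Average the wall-clock time of a dense matmul over several runs."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    _ = a @ b  # warm-up run, excluded from timing
    start = time.perf_counter()
    for _ in range(repeats):
        _ = a @ b
    return (time.perf_counter() - start) / repeats

per_call = time_matmul()
print(f"avg matmul time: {per_call * 1e3:.3f} ms")
```

On a real GPU/TPU comparison you would also need to synchronize the device before reading the clock, since accelerator calls are usually asynchronous.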
Pixel 6 will use it, yes!
Hettinger Rapids
and 6 pro
Apple silicon joining the room.
This video is getting too few likes and views given the current AI hype.
I was hoping to actually see one, but no, just boring graphs. Boring. How does this video get 44,000 views while our videos get 230 views over 3 years?