I was inspired to learn the basics of TensorFlow after I completed the TensorFlow specialization on Coursera. Personally I think the videos I created give a similar understanding, but if you want to check the specialization out you can. Below you'll find both affiliate and non-affiliate links; the price for you is the same, but a small commission goes back to the channel if you buy it through the affiliate link, which helps me create more videos in the future.
affiliate: bit.ly/3JyvdVK
non-affiliate: bit.ly/3qtrK39
Here's the outline for the video:
0:00 - Introduction and Dataset Overview
1:39 - Load using TextLineDataset
4:13 - Filtering Dataset
8:12 - Creating Vocabulary
13:43 - Numericalizing with TokenTextEncoder
18:10 - Applying map on datasets
20:35 - Simple Model
22:30 - Dataset in Several Files
25:50 - Sketch Load Translation Dataset
29:22 - Ending
This has shaped up to be a pretty long and thorough TensorFlow playlist, hopefully you guys find these videos useful! :)
Yet another awesome video. 🔥🔥 Never done NLP in tensorflow before... though now I can't wait to get my hands dirty :)
Thank you! I appreciate your support 🙏
Hiii Aladdin. On line 66, why isn't it "vocabulary.update(word)"? Is it because we want to include a word that wasn't there before, since it didn't surpass the threshold? Very possibly it's a stupid question, but I don't get at all why "tokenized_text" is used instead of "word".
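A minimal sketch (with hypothetical variable names, not the exact code from the video) of why `vocabulary.update(tokenized_text)` is used: `set.update` treats its argument as an iterable and adds each element, so passing the list of tokens adds whole words, while passing a single string would add its individual characters.

```python
# set.update adds each element of an iterable.
vocabulary = set()

tokenized_text = ["the", "movie", "was", "great"]  # hypothetical tokens
vocabulary.update(tokenized_text)       # adds each whole token
print(sorted(vocabulary))               # ['great', 'movie', 'the', 'was']

# Passing a single string instead would add its characters, not the word:
chars = set()
chars.update("great")
print(sorted(chars))                    # ['a', 'e', 'g', 'r', 't']
```

So updating with the tokenized text is the way to add every word of a sentence to the vocabulary in one call.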
Good day sir. If I complete this playlist, will I be in a good position to take the TensorFlow Developer Certificate exam by Google and pass? Thanks for the great content.
I haven't done the developer exam so I don't know, but I took the TensorFlow specialization, and if I've succeeded, these videos should be more in-depth and concise than that specialization, because I thought it could be improved in some aspects.
@@AladdinPersson Thank you sir.
Sir, how can I use this together with OCR? It's like text classification, but from an image.
After finishing this, where do you think we should go to continue learning?
Continue with TensorFlow official tutorials (but more advanced ones), implement research papers, do courses, and try doing projects :)
What's the difference between using the tokenizer and TextVectorization?
Hi people, this may be a silly question, but why did Aladdin use the same encode_map_fn for the test dataset?
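A minimal sketch (with hypothetical names) of why the same encoding function is reused for the test split: the encoder is built from the training vocabulary, and the test set must be numericalized with exactly the same word-to-id mapping, otherwise the ids the model learned would not line up at evaluation time.

```python
# Vocabulary built from the TRAINING data only (hypothetical ids).
vocab = {"<unk>": 1, "good": 2, "movie": 3}

def encode(tokens):
    # Words unseen during training map to the <unk> id.
    return [vocab.get(t, vocab["<unk>"]) for t in tokens]

print(encode(["good", "movie"]))   # [2, 3]
print(encode(["bad", "movie"]))    # [1, 3]  unseen word -> <unk>
```

Building a second encoder from the test data would assign different ids to the same words, so reusing the training-time encode function is the correct choice.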
Hi,
I get this error on this line of code:
-> tokenizer = tfds.features.text.Tokenizer()
-> module 'tensorflow_datasets.core.features' has no attribute 'text'
Is it possible to know the exact tensorflow-datasets version of your environment?
Thanks,
Alex
Hello, can you do this for a machine translation dataset? I want to replicate the OPUS machine translation dataset with my own data.
Dude, couldn't you have used RaggedTensor instead of padding?
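For context, a minimal sketch of the suggestion (hypothetical token ids): a ragged tensor stores each example at its true length instead of padding every sequence to the longest one. Whether it's a drop-in replacement depends on the model, since many Keras layers still expect dense, padded inputs.

```python
import tensorflow as tf

# Two variable-length sequences of token ids, no padding needed.
texts = tf.ragged.constant([[4, 12, 7], [9, 2]])

print(texts.shape)                    # (2, None)
print(texts.row_lengths().numpy())    # [3 2]
```

Converting back with `texts.to_tensor()` pads with zeros when a dense batch is required.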
The threshold of 200 used in the video is too high; it only keeps the punctuation marks, except commas. Try threshold=10 instead.
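A minimal sketch (with made-up counts, not the video's actual data) of how the threshold filters the vocabulary: only words whose frequency reaches the threshold survive, so on a small dataset a threshold of 200 keeps almost nothing while a lower value keeps real words.

```python
from collections import Counter

# Hypothetical word frequencies from a small corpus.
word_counts = Counter({"the": 500, "movie": 150, "great": 12, "rare": 3})

def build_vocab(counts, threshold):
    # Keep only words that occur at least `threshold` times.
    return {w for w, c in counts.items() if c >= threshold}

print(build_vocab(word_counts, threshold=200))  # {'the'}
print(build_vocab(word_counts, threshold=10))   # {'the', 'movie', 'great'}
```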