I learned the basics of TensorFlow from the TensorFlow specialization on Coursera, which also inspired this series. Personally I think these videos give a similar understanding, but if you want to check it out you can. Below you'll find both an affiliate and a non-affiliate link; the price is the same for you either way, but a small commission goes back to the channel if you buy through the affiliate link, which helps me create more videos.
affiliate: bit.ly/3JyvdVK
non-affiliate: bit.ly/3qtrK39
Hey! I absolutely love this series. It has taught me a lot about ML programming in a very short time. I also like how you have given resources to theory lectures that help people understand the basic principles alongside the advanced math. I'd love to see more such videos, and a suggestion along those lines would be deploying ML models to various platforms, mainly the web and IoT devices. Kudos to you for the sheer amount of effort that has gone into making these videos, and I hope you deliver more like them in the future.
Great video!
Regarding the peculiar error, maybe it's caused by the label keys at line 33 (shown at ~1:55 in the video).
Hey that makes sense now, thanks a lot :)
@@AladdinPersson So what should the correction be?
@@coding10yearold He gave the labels as first_num and second_num at 1:55, but in the outputs he used first_number and second_number.
By far the best TensorFlow tutorials... keep it up
That means a lot, thank you
In labels you used first_num, I guess, in the read_image() function. That's why it's expecting first_num for the labels.
I couldn't get the data from the link in the description. It says "You are not authorized to perform this action".
Love your lessons ♥
Hi. It seems the names you select for output1 and output2 must exactly match the names used in the CSV file, otherwise you get an error message.
where can I find the csv files? it doesn't seem to be in the GitHub.
Hey, many thanks for the tutorials. They are awesome. If you wanted to build a hierarchical classifier, where output 1 could help predict output 2 (i.e. output 2 depends on the result of output 1), what would you change here? The last Dense layer? E.g. Dense(10, name="second_num")(output1). Does this make sense? Many thanks
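For what it's worth, one hedged way to sketch that hierarchical idea (the layer sizes and names here are illustrative, not from the video) is to feed the first head's prediction back in alongside the shared features before the second head:

```python
# Sketch: a second head that conditions on the first head's output.
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(64, 64, 1))
x = layers.Flatten()(inputs)
x = layers.Dense(64, activation="relu")(x)

# first head predicts the first digit from the shared features
output1 = layers.Dense(10, activation="softmax", name="first_num")(x)

# second head sees both the shared features AND output1's distribution
merged = layers.Concatenate()([x, output1])
output2 = layers.Dense(10, activation="softmax", name="second_num")(merged)

model = tf.keras.Model(inputs=inputs, outputs=[output1, output2])
```

Dense(10, name="second_num")(output1) alone would also work syntactically, but the second head would then only see the 10 probabilities from output1 and lose the image features, which is why concatenating both is usually preferred.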
That was such a funny error. This is the reason I sometimes struggle with TF: the error messages aren't explanatory at all...
Yeah that was hilarious, I am very curious as to what caused that error 😆
@@AladdinPersson Basically, if you want to fix the error, I think you should define the labels as first_number and second_number in the read_image function itself; that should fix the dictionary error.
As an alternative, we can define these labels somewhere else as (global) variables lab1 = "first_number" and lab2 = "second_number", and refer to them twice, once inside the read_image function and once while defining the outputs, so that

labels = {"first_number": label[0], "second_number": label[1]}
output1 = layers.Dense(10, activation="softmax", name="first_number")(x)
output2 = layers.Dense(10, activation="softmax", name="second_number")(x)

becomes

labels = {lab1: label[0], lab2: label[1]}
output1 = layers.Dense(10, activation="softmax", name=lab1)(x)
output2 = layers.Dense(10, activation="softmax", name=lab2)(x)
Also really nice tutorial!
What if both the outputs just read the first number from the training images?
The most important part is all about the data management and structure: having two sets of outputs, how you do the train/test split, and so on. Basically, you just copy-pasted some basic code without going into depth on how the data is going to be managed. Thanks for the video, but it would only be interesting for people who don't really understand neural networks but do it to look cool.
Wonderful playlist.
Thank you 🙏
Hi, thank you for your great tutorials. One question: how do you remove methods and attributes with a shortcut key? For example, you removed GRU from layers.GRU in one go. Thanks.
What if we have multiple objects in one image? Is it the same approach or a different one?
Thanks sir
Just out of curiosity, since you were using data from a CSV against real images, wouldn't the flow_from_dataframe function have been more efficient?
You're right, sometimes I get confused over the many ways you can load data in TF
@@AladdinPersson I guess "more than one way to skin a cat" applies in this case; there are just so many methods now and each of them has its own merits.
Hi, what if I want to change the ratio between the two losses? For example, if I care more about the first digit's accuracy. What should I change in the Keras losses?
Yes. You can specify the loss_weights argument in model.compile().
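A minimal sketch of what that looks like, assuming a two-output model like the one in the video (layer sizes and names here are illustrative):

```python
# Sketch: weighting the two output losses so the first digit matters more.
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(64, 64, 1))
x = layers.Flatten()(inputs)
x = layers.Dense(32, activation="relu")(x)
first_num = layers.Dense(10, activation="softmax", name="first_num")(x)
second_num = layers.Dense(10, activation="softmax", name="second_num")(x)
model = tf.keras.Model(inputs=inputs, outputs=[first_num, second_num])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    # total loss = 2.0 * loss(first_num) + 1.0 * loss(second_num)
    loss_weights=[2.0, 1.0],
    metrics=["accuracy"],
)
```

The weights just scale each head's loss before they are summed, so [2.0, 1.0] makes gradients from the first head twice as influential.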
I have a question: I want to save the trained parameters after each batch. How can I do this?
Thank you
What if we want to do both regression and classification with one model? Since the input features/columns for classification are different from the input features for regression, how can we combine them? Could you please help?
For the input you specified the shape, but in the case of classification and regression the input shapes for the two tasks will be different, so how should I specify the input shape in the final Keras model (two outputs)?
@Aladdin Persson: again, amazing video. One question about dropout: if we use validation in model.fit, will dropout automatically be turned off?
Yes, if you pass validation data to model.fit() it will run the dropout layers in inference mode, i.e. they are disabled (Keras uses inverted dropout, so the rescaling happens during training, not at inference). The same applies when using model.evaluate() or model.predict().
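A quick way to see this with a standalone Dropout layer:

```python
# Sketch: Dropout is only active when training=True; at inference it is the identity.
import numpy as np
import tensorflow as tf

drop = tf.keras.layers.Dropout(0.5)
x = tf.ones((1, 10))

inference_out = drop(x, training=False)  # dropout disabled, input passes through
training_out = drop(x, training=True)    # roughly half the units zeroed, rest scaled up

print(np.allclose(inference_out.numpy(), x.numpy()))  # True
```

model.evaluate() and model.predict() call layers with training=False under the hood, which is what disables dropout there.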
The error is because in read_image(), in the label dict, we used first_num and second_num.
The error occurs because we are giving the labels as a dict whose keys are first_num and second_num, so for an output to be routed to that branch we need to use the same name for that Dense layer's output.
I am getting this error:
AttributeError: module 'keras.api._v2.keras.layers' has no attribute 'conv2D'
What can be the reason?
It's a case-sensitivity error: use 'Conv2D' instead of 'conv2D'.
Why does keras.Model() now take arguments? The previous examples required no arguments.
It always does (inputs and outputs), maybe you didn't pay attention in the previous examples.
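A tiny sketch of the distinction, assuming the earlier videos used Sequential (which builds itself from a list of layers and needs no inputs/outputs arguments), while the functional API's keras.Model is constructed from explicit input and output tensors:

```python
# Sketch: functional keras.Model vs. Sequential construction.
import tensorflow as tf
from tensorflow.keras import layers

# Functional API: Model is defined by its input and output tensors.
inputs = tf.keras.Input(shape=(4,))
outputs = layers.Dense(2)(inputs)
functional = tf.keras.Model(inputs=inputs, outputs=outputs)

# Sequential: just a list of layers, no inputs/outputs arguments.
sequential = tf.keras.Sequential([tf.keras.Input(shape=(4,)), layers.Dense(2)])
```

The functional form is what makes multiple inputs and multiple outputs (as in this video) possible at all; Sequential can only express a single linear chain.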
Hey man, in TensorFlow, how do we move the data/model to the GPU? In PyTorch we would do something like data.to(device), but in TF I didn't get it.
It should be moved automatically if you have a GPU enabled device
@@AladdinPersson Wow, that's great 👍. I have been running my notebooks on Colab with the GPU runtime switched on but didn't know until now that it was actually running on the GPU. Amazing. Is it the same with the TPU, or does the TPU need some configuration?
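TF places tensors and ops on a visible GPU automatically. A small sketch of how to check which devices it sees, and how to pin a device manually if you ever need to override the automatic placement (the TPU is different and does need a tf.distribute.TPUStrategy setup):

```python
# Sketch: inspecting devices and (optionally) pinning an op to one.
import tensorflow as tf

# Empty list on a CPU-only machine; one entry per visible GPU otherwise.
print(tf.config.list_physical_devices("GPU"))

# Manual placement is rarely needed, but looks like this:
with tf.device("/CPU:0"):
    a = tf.constant([1.0, 2.0]) * 2.0
```

On Colab, enabling the GPU runtime is enough for model.fit() to use it; no data.to(device) equivalent is required.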
Great STUFF! Thanks Aladdin.
However, when I try to replicate this tutorial, I get errors as follows:
Traceback (most recent call last):
File "C:\Users\USER\CODE\PycharmProjects\pythonProject2Conda\main.py", line 94, in
model.fit(train_dataset, epochs=5, verbose=2)
File "C:\Users\USER\anaconda3\envs\tf_cpu\Lib\site-packages\keras\src\utils\traceback_utils.py", line 123, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Users\USER\anaconda3\envs\tf_cpu\Lib\site-packages\tensorflow\python\eager\execute.py", line 53, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
tensorflow.python.framework.errors_impl.NotFoundError: Graph execution error:
Detected at node ReadFile defined at (most recent call last):
NewRandomAccessFile failed to Create/Open: C:\Users\USER\CODE\PycharmProjects\pythonProject2Conda/train_images/129_97.png : The system cannot find the file specified.
; No such file or directory
[[{{node ReadFile}}]]
[[IteratorGetNext]] [Op:__inference_one_step_on_iterator_2745]
ANY HELP??
For me it takes a ridiculous 180 seconds to run the first epoch, but the remaining 4 epochs run in 36 seconds each and evaluation in 15 seconds. I don't know why this happens; please provide an explanation.
I don't know :/
@@AladdinPersson Actually, I think it's because of the way the data is loaded. When I first run the Python script, the data is loaded during the first epoch, so it takes time; for the remaining epochs the data is cached, so they run faster. The test set also takes time on the first execution of the script. Once the test data has been loaded, if I execute the script a second time, everything runs fast: the first epoch, training, and evaluation all run smoothly because everything is cached.
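That matches how tf.data caching works. A toy sketch of the pattern (the video's pipeline reads image files instead of this toy range, but the behaviour is the same: the first pass pays the loading cost, later passes reuse the cache):

```python
# Sketch: .cache() makes every epoch after the first skip the expensive loading step.
import tensorflow as tf

ds = tf.data.Dataset.range(5).map(lambda i: i * 2)  # stand-in for reading/decoding files
ds = ds.cache()                                     # first pass fills the cache
ds = ds.prefetch(tf.data.AUTOTUNE)                  # overlap loading with training

for epoch in range(2):  # the second pass is served entirely from the cache
    print(list(ds.as_numpy_iterator()))
```

With a file-reading pipeline, cache() can also be given a filename so the cache survives across runs, which would explain the second execution of the script being fast as well.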
I got an error while initiating training; it couldn't load the images. I placed my code in the same directory as the CSV file.
tensorflow.python.framework.errors_impl.NotFoundError: 2 root error(s) found.
(0) Not found: /media/omar/Backup/Kaggle/custom_mnist/train_images/985_09.png; No such file or directory
[[{{node ReadFile}}]]
[[IteratorGetNext]]
[[IteratorGetNext/_6]]
(1) Not found: /media/omar/Backup/Kaggle/custom_mnist/train_images/985_09.png; No such file or directory
[[{{node ReadFile}}]]
[[IteratorGetNext]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_2147]
Function call stack:
train_function -> train_function
Could you try using the code for this video? (You can find it in the GitHub repository: github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/TensorFlow/Basics/tutorial7-indepth-functional.py)
The CSV file should be in the same location as the Python file, with the two subfolders train_images and test_images, which I believe you've already downloaded. I think the error originates from a mistake in the folder locations. Otherwise, you may need to specify that the train_images folder is within the custom_data folder that you have.
@@AladdinPersson I tried it using this code and I am getting the error again. I am using TensorFlow 2.2; could that be the reason for this error message? Because the image is there inside the directory.
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xfc in position 218: invalid start byte
Does anyone know a fix?
Use a different encoding, e.g. ISO-8859-1.
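A small sketch of that fix, assuming pandas is being used to read the CSV; the byte string below just simulates a file containing a non-UTF-8 byte (0xFC is 'ü' in ISO-8859-1 but an invalid start byte in UTF-8):

```python
# Sketch: reading a CSV that is not valid UTF-8 by naming the encoding explicitly.
import io
import pandas as pd

raw = b"label\n129_97.png\nM\xfcnchen\n"  # 0xFC would raise UnicodeDecodeError as UTF-8
df = pd.read_csv(io.BytesIO(raw), encoding="ISO-8859-1")
print(df["label"].tolist())  # ['129_97.png', 'München']
```

ISO-8859-1 maps every possible byte, so it never raises a decode error; do double-check that the decoded text actually looks right, since the real file might be in a different encoding such as cp1252.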
This example somehow very wrongly implies that the first output recognizes the first number and the second output the second number, which is in no way what is happening.
Could you elaborate?
Hi,
I know why it gives an error when you put first_number... that's because of
labels = {"first_num": label[0], "second_num": label[1]}
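A sketch of the fix being described, with the dict keys and layer names pulled from a single pair of constants so they cannot drift apart (the layer sizes here are illustrative, not the video's exact architecture):

```python
# Sketch: label-dict keys must match the output layers' names exactly.
import tensorflow as tf
from tensorflow.keras import layers

FIRST, SECOND = "first_num", "second_num"  # single source of truth for both places

def read_image(image, label):
    # keys here must equal the Dense layer names below
    return image, {FIRST: label[0], SECOND: label[1]}

inputs = tf.keras.Input(shape=(64, 64, 1))
x = layers.Flatten()(inputs)
x = layers.Dense(64, activation="relu")(x)
output1 = layers.Dense(10, activation="softmax", name=FIRST)(x)
output2 = layers.Dense(10, activation="softmax", name=SECOND)(x)
model = tf.keras.Model(inputs=inputs, outputs=[output1, output2])
```

Keras routes each entry of the label dict to the output head whose layer name matches the key, which is why first_num vs. first_number raises an error.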
When I follow the link that leads to the code I get: 404 - page not found
The master branch of Machine Learning Collection
does not contain the path
ML/TensorFlow/Beginner/Tutorial7-Functional_API.py