Had to sign in to say thank you. Breaking down each line is incredibly useful. I also love how you continually point out that there are areas for improvement and point us to what might be classified as overkill. It gives an excellent gateway into machine learning by linking this seemingly simple explanation to actually recognised CNNs and FC layers etc.!
This video is really, really good. Used it to resurrect my 20y old ANN skills. Will use this in my classes now!
Totally underrated video, perfect for people getting started with Neural Networks!
Hi guys. Just want to help anyone following this later like I was.
Instead of using tensorflow (if you aren't seeing tf.keras.datasets, which is what had me stuck), you can get the mnist variable with: from keras.api.datasets import mnist. Also, if you import keras.api as k_api (or whatever name, I'm bad at naming), replace all the tf.keras references with k_api. It should work the same.
so with tensorflow keras we don't need to code the weights and biases from scratch?
that's the purpose of ML frameworks like pytorch and tensorflow buddy
are there repositories out there where we can get training data?
I followed this tutorial and it runs with similar loss and accuracy results to the video. My problem is that when I use my own handwritten numbers, the accuracy is usually less than 50%. I have tried playing around with the number of hidden layers, number of neurons, and number of epochs and I don't see any real change. What other likely places could I look for a fix?
I have the same issue. The more I trained the data, the more inaccurate it actually got.
got the same issue here, have you fixed it? any leads?
@@HaxeHere
Try replacing all of the model.add lines with this 😉
model.add(tf.keras.layers.Conv2D(32, (3,3), activation='sigmoid', input_shape=(28, 28, 1))) # the 1 in "(28, 28, 1)" is the channel dimension and is required, or else it'll error out
model.add(tf.keras.layers.MaxPool2D((2, 2)))
model.add(tf.keras.layers.Conv2D(48, (3,3), activation='sigmoid'))
model.add(tf.keras.layers.MaxPool2D((2, 2)))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(500, activation='sigmoid'))
model.add(tf.keras.layers.Dense(10, activation='sigmoid'))
@@Splendisimo
for anyone wondering, the changes are essentially:
- Using a convolutional neural network
- Using a Sigmoid function instead of ReLU
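A caveat for anyone trying the Conv2D version above and hitting a shape error: Conv2D expects a channel dimension, and MNIST loads as plain 28x28 arrays, so you may need to add that dimension before fit/evaluate. A minimal sketch of that step, assuming the x_train/x_test variables from the video:
import numpy as np
# MNIST loads as (num_samples, 28, 28); Conv2D wants (num_samples, 28, 28, 1)
x_train = np.expand_dims(x_train, axis=-1)
x_test = np.expand_dims(x_test, axis=-1)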
great tutorial. how can i increase the accuracy? should I just crank up the epochs? I tried that, but some digits, like an angled 1 or a 5, still get "recognized" as a 6 or a 3 (for example). sorry for my bad english :D
same man
I have this problem at 12:12.
It keeps giving me an error in tensorflow stating “DLL load failed while importing _pywrap_tf2”.
Thanks for a flawless video NeuralNine!
I got this error. Changing the model's file extension to .keras solved it.
File "C:\Users\user\anaconda3\envs\main\Lib\site-packages\keras\src\saving\saving_api.py",
line 106, in save_model
raise ValueError(
ValueError: Invalid filepath extension for saving. Please add either a `.keras` extension for
the native Keras format (recommended) or a `.h5` extension. Use `model.export(filepath)` if you want to export a SavedModel for use with TFLite/TFServing/etc. Received: filepath=handwritten.model.
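For anyone else hitting this ValueError: it only complains about the filename, so saving and loading with a .keras extension is enough. A minimal sketch, assuming the model object from the tutorial:
model.save('handwritten.keras')  # native Keras format; a .h5 extension also works
model = tf.keras.models.load_model('handwritten.keras')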
Why did we choose 128 neurons for our first Dense layer? I have always wondered what intuition should guide the number of neurons in a layer. Is it kinda just random?
it's the standard
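Since this question comes up a lot: 128 is just a common default, and the usual way to pick a width is to treat it as a hyperparameter and compare validation accuracy. A rough sketch of such a sweep, assuming the x_train/y_train arrays from the video (the widths and epoch count here are arbitrary choices):
import tensorflow as tf

for units in (32, 64, 128, 256):
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(units, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    # validation_split holds out part of the training data so the widths can be compared fairly
    history = model.fit(x_train, y_train, epochs=3, validation_split=0.1, verbose=0)
    print(units, history.history['val_accuracy'][-1])
Bigger layers are not automatically better; past a certain width the validation accuracy flattens out while training just gets slower.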
Why did you normalise the train and test data separately? Shouldn't we first normalise the train data and then apply the same normalisation factor to the test data?
was thinking same, fit(Xtrain).transform(Xtrain) and fit(Xtrain).transform(Xtest)
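One note on why it is harmless in this particular video: tf.keras.utils.normalize rescales values within each sample by that sample's own norm, so no statistic is learned from the training set that could leak into the test set. The fit-on-train / apply-to-test pattern described above matters when the scaler does learn dataset-wide statistics; a minimal sketch of that pattern, assuming scikit-learn is installed and the x_train/x_test arrays from the video:
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
x_train_flat = x_train.reshape(len(x_train), -1)      # flatten the 28x28 images into vectors
x_test_flat = x_test.reshape(len(x_test), -1)
x_train_scaled = scaler.fit_transform(x_train_flat)   # statistics come from the train split only
x_test_scaled = scaler.transform(x_test_flat)         # the same statistics are reused on test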
I did everything as done here, and the performance of the network is terrible, even after increasing the number of epochs!
Same problem , have you managed to fix it?
@@volleyball2676 same problem too, have u fixed it?
I’m about to try it. Let you know soon 💪 fashion mnist worked for me
For me the accuracy is pretty high but the loss is high too
Accuracy = 0.9778
Loss = 0.7503
I'm at 14:04 and I'm getting the following error after loading the saved model: 'logits and labels must have the same first dimension, got logits shape [32,10] and labels shape [25088]'. Is anyone getting the same error? I've researched it and it apparently has something to do with the loss function being sparse_categorical_crossentropy vs categorical_crossentropy.
I found the problem and fixed it. I wrote 'test_y = tf.keras.utils.normalize(test_X, axis=1)' instead of 'test_X = tf.keras.utils.normalize(test_X, axis=1)'.
I'm very new to this and have a small doubt: when training, we normalized the pixel values to be between 0 and 1, but when we are predicting using jpgs we don't normalize them. Why is that?
I think the reason he normalized the pixel values to be between 0 and 1 is because, as he said, their values range between 0 and 255. Since this is a big range of numbers, normalizing the pixel data is ideal, as it reduces the impact of spread on our model. However, when predicting the jpgs, they only take on a value between 0 and 9, so the label data cannot be as spread out as the pixel data, rendering it kinda useless to normalize.
Again, this is just my thought process. If some of my logic is incorrect, please don't hesitate to correct me, internet.
mine is super inaccurate. could you please help?
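On the normalization question above: the safest rule of thumb is to preprocess a prediction image exactly the way the training images were preprocessed, and skipping that is a common reason your own drawings score badly. A minimal sketch, assuming cv2/numpy/tensorflow are installed, the model from the tutorial is loaded, and a 28x28 grayscale file like digit.png (the filename is just an example):
import cv2
import numpy as np
import tensorflow as tf

img = cv2.imread('digit.png', cv2.IMREAD_GRAYSCALE)   # 28x28, black digit on white background
img = np.invert(img)                                  # flip to white-on-black like MNIST
img = tf.keras.utils.normalize(img, axis=1)           # same normalization as the training data
prediction = model.predict(img.reshape(1, 28, 28))
print('Predicted digit:', np.argmax(prediction))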
Hi F, very useful video as always! When you can, could you make a video about voice recognition/transcription in Python, like YouTube auto-subtitles? To eliminate background noise ... what would you do? Thx
It seems my model keeps giving the wrong output, so I increased the epochs, but same issue. I noticed it latches onto a specific pattern and treats two numbers as the same number when it finds them similar. For example, 3 and 8 get confused and both get labeled as 3 all the time. I wonder if it needs more training or if I need to change the code somewhere...
really helped me a lot.
Thank you so much😄😄
Your videos are really amazing!
am i the only one whose digits are not recognized? The mnist data set works fine and gets recognized, but my own digits, no chance. why is that?
Bro just dissed his own handwriting
Go onto Paint and make the canvas 28x28 pixels, then colour everything black, then write on it with white brush/pen. It works for me.
How would you suggest to go from beginner to expert in ML/AI in practical terms. Assuming theoretical foundations are there
I followed the video word for word, frame by frame. I got this "mistake" before saving the model (before applying it to self-made numbers): the loss is 2.3 and the accuracy is 0.097. During training, though, all accuracies are over 90%. Can anyone tell me what the problem is? Is there anything wrong with the test set? This is a great video. I just want to make it work!!
Same Problem here
import tensorflow as tf

# load MNIST and normalize BOTH the train and test sets the same way
# (evaluating on un-normalized test data is a common cause of the ~10% accuracy issue)
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = tf.keras.utils.normalize(x_train, axis=1)
x_test = tf.keras.utils.normalize(x_test, axis=1)

# training code - run once, then comment it out and just load the saved model
# model = tf.keras.models.Sequential([
#     tf.keras.layers.Flatten(input_shape=(28, 28)),
#     tf.keras.layers.Dense(128, activation='relu'),
#     tf.keras.layers.Dense(128, activation='relu'),
#     tf.keras.layers.Dense(10, activation='softmax')
# ])
# model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# model.fit(x_train, y_train, epochs=10)
# model.save('project.keras')

# reload the saved model and evaluate it on the normalized test set
model = tf.keras.models.load_model('project.keras')
loss, accuracy = model.evaluate(x_test, y_test)
print(f'Loss: {loss}')
print(f'Accuracy: {accuracy}')
That fixed my problem, you can try it out if you want to
Love the vid! But I have a slight problem, my 8 and 9 are not getting recognised even with more epochs.
lol, that intro music on a topic like this :D
hey can anyone help please? i am writing this code in a jupyter notebook. how do i import images in jupyter? will the process be the same???????
please help
asap
Hi sir, if we give an image that contains a paragraph, can it detect it or not? For example, if the image contains an A, will it detect it or not?
If anyone has an idea about that, please tell me.
what is the application u are using??
pycharm
this might be a stupid question but it isn't obvious to me from the code: how is the output layer ordered? what I mean is, how does index 0 directly map to the value '0', index 1 to '1', etc.? I also looked this up for letter recognition and they map 0-25 to the letters sequentially. maybe someone can explain
That is correct.
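To make the mapping concrete: the last Dense layer has 10 units, and unit i means digit i simply because the labels in y_train are the integers 0-9 and sparse_categorical_crossentropy treats each label as the index of the correct output unit; letter datasets work the same way, except their label encoding decides which index means which letter (e.g. 0-25 for a-z). A small sketch, assuming the model and test arrays from the video:
import numpy as np

probs = model.predict(x_test[:1])   # shape (1, 10): one score per class index 0..9
print(np.argmax(probs))             # the index with the highest score is the predicted digit
print(y_test[0])                    # compare with the true label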
How did you comment the whole section at 12:28?
ctrl + /
but what if I want to use images with a different height x width????? this only works for 28x28.
How could I train neural networks without a dataset? Like if I would like it to idk play a video game
Great!👍 More videos like this ❤️
Where did you download your training data and where was it loaded into your program? That is, I don't see any filename loaded...
the training data is part of the tensorflow library itself, u don't need to download it additionally
how to download the data set which u used?
OMG.. I was in full screen mode and on 7:45 I thought someone hacked my computer! haha! It was scary!
I'm unable to evaluate the model, the error says they don't have the same array size
The recognition sucked when i used my paint pictures. I found that it recognizes it way better when i have white on black images from paint.
The MNIST dataset is literally all white-on-black pictures, so this is why; that's the only thing it's trained on
@@BasedArmenian-kh8pb yes but we turned the black on white paint images to white on black. So I was expecting it to work. The owner of the channel used black on white too
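For anyone else stuck on the colour issue in this thread: MNIST digits are light strokes on a dark background, so a black-on-white Paint drawing usually needs to be inverted before prediction, and it is easy to invert twice (or not at all) by accident. A minimal sketch of a guard for that one step, assuming cv2/numpy and a 28x28 grayscale file (the filename is only an example):
import cv2
import numpy as np

img = cv2.imread('my_digit.png', cv2.IMREAD_GRAYSCALE)
if img.mean() > 127:           # mostly-white image, so it is probably black-on-white
    img = np.invert(img)       # for uint8 this is 255 minus each pixel value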
Hi, anyone knows which theme of PyCharm is this?
Dracula theme
I want Python code to convert a handwritten image into plain text accurately. I have tried but I didn't get it working. You can try it and show me..
why is it showing 'Invalid filepath extension for saving. Please add either a `.keras` extension for the native Keras format'?
There's a comment by @somith4259 where they write the whole updated code. They did 'handwritten.models.keras' to fix the problem.
Use ".keras" as the extension as it just told you
Hi buddy, thank you for your useful videos. However, I am getting this error: "NameError: name 'keras' is not defined" when I try to split the dataset into training and test sets. How can I solve this error, please?
Hi. Can someone help me find my mistake? Where can I save the image folder so that my program will read directly from that folder? Because it does not read the image number when I run the program. I'd really appreciate it if someone could help and enlighten me. tnx.
Cool vid man, I want to get started, just curious as to what IDE you are using?
Looks like pycharm, not sure though
pycharm
Sir, images do not display when i run this code
Hey, thank you so much for this video.
i did this 2 years ago with live opencv drawing and detection
thx a lot
Will it work for handwritten words or text?
Why the dense layer with '128' units ?
he just copied the value online, youtuber programmer modus operandi
AttributeError: module 'keras.api._v2.keras.models' has no attribute 'load'
You are awesome broo🔥🔥🔥🔥
is there a chance i can run code that can transform text PDFs, images, and voice into a model for medical stuff
can u tell us about the digits directory thing.. u just skipped that part
Amazing tutorial man, i'm a complete beginner and this is totally understandable and it's so awesome, i only have a question, what do i do if i want to use larger images, is it possible?
i think you might need image segmentation, you'll find a video about it on this channel
Why did we make the hidden layers 128 neurons? This is the part no one explains in tutorials
No reason, it is the default/standard, and in various circumstances changing it could even cause minor problems
shouldn't we normalize the input image pixel values before predicting it?
I think the reason he normalized the pixel values to be between 0 and 1 is because, as he said, their values range between 0 and 255. Since this is a big range of numbers, normalizing the pixel data is ideal, as it reduces the impact of spread on our model. However, when predicting the jpgs, they only take on a value between 0 and 9, so the label data cannot be as spread out as the pixel data, rendering it kinda useless to normalize.
You really helped me a lot. Thank you so much🤩
Please can you make a video on automating social media with python?
he just uses other people's tools to make stuff that he doesn't really understand. so don't expect much from him
kindly explain banking projects or fraud management or loan projects
can anyone help me with the 'No module named tensorflow' error
can i do handwritten character recognition using this, just by changing the dataset?
how do i change the dataset
Hi, nice video! Do you know how to analyse images that are bigger than 28x28px?? Thanks ;)
im not fully sure about this but you can try changing the numbers in the "model.add(tf.keras.layers.Flatten(input_shape = (28,28)))" so change it to be like "model.add(tf.keras.layers.Flatten(input_shape = (32,32)))"
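Another option for bigger images, instead of changing input_shape (which would also require training data of that size), is to shrink them to 28x28 before prediction, since the model only ever saw MNIST-sized inputs. A minimal sketch, assuming cv2 is installed, a grayscale image already loaded as img, and the model from the tutorial:
import cv2

img = cv2.resize(img, (28, 28), interpolation=cv2.INTER_AREA)   # downscale to the trained input size
prediction = model.predict(img.reshape(1, 28, 28))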
i am not able to load tensor flow
Awesome bro!
Can we use raspberry pie with touch screen for input
Helpful video , thank you ;)
Nice video bro 👍👍👍👍👍
I followed your code word for word, but the result is still showing an error. Please help me!!
What error
Can you assist me with identifying symbols from drawings using a CNN? It would be very helpful for my project
Anyone know what program he's coding this in?
Py charm
@@pandulathennakoon3826 Thank you!
thank you very much , bro .
Hi friend ! Nice video. Could you help me in making Handwritten Digits Recognition but using HMM?
does this only work for 28x28????
Can you make this same video with alphabet instead of number ? Pretty pls
In line #8 you are loading data, but you don't specify any files. I don't see where you are creating the actual training data and testing data. All I see, at the end, is you creating data to actually use the model. Where is the training data?
The initial training data and testing data are supplied by mnist.
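To see that concretely: load_data() downloads the arrays the first time (then caches them), so there is no local file to point at. A quick sketch:
import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()   # downloaded and cached automatically
print(x_train.shape, y_train.shape)   # (60000, 28, 28) (60000,)
print(x_test.shape, y_test.shape)     # (10000, 28, 28) (10000,)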
Awesome bro
cant download tensorflow
To enable the following instructions: SSE SSE2 SSE3 SSE4.1 SSE4.2 AVX AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. this is the error i had
This is not an error, just a message you can ignore. But if there is no output at all, that probably means the program was not able to find the digits directory, so check whether your path is correct; for me the fix was to include the name of the directory where the digits folder was stored.
For me it's not printing anything, why?
Same! Have you solved that?
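For the path problems in this thread: if the prediction loop prints nothing at all, the script most likely is not finding the image files. A minimal sketch of a loop that reads numbered files from a digits folder, assuming filenames like digits/digit1.png (adjust the path to wherever you saved them) and the trained model from the tutorial:
import os
import cv2
import numpy as np

n = 1
while os.path.isfile(f'digits/digit{n}.png'):
    img = cv2.imread(f'digits/digit{n}.png', cv2.IMREAD_GRAYSCALE)
    img = np.invert(np.array([img]))                 # white digit on black, shape (1, 28, 28)
    prediction = model.predict(img)
    print(f'digit{n}.png is probably a {np.argmax(prediction)}')
    n += 1
if n == 1:
    print('No images found - check that the digits folder sits next to this script')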
Awesome!!! Thanks
which ide is this?
PyCharm by Jetbrains
keras isn't working. anyone know how to fix
use .h5
@@rishyth explain?
@@the-notorious-khaki accepted file extensions
Amazing!
Please make video Multi Layer Perceptron How To build With Manual Coding without tensorflow
Nice video !!!
the predict function in line 31 isn't defined according to vscode, the error message is ' "predict" is not a known member of "None" '. help me
thank you!
Will u plzz provide source code for this ?
Hey, there is some problem with load_model(): I cannot use predict or evaluate if I get my model from the load_model() function
errors come up for every piece of code you write. I am unsure what wizardry you're doing to get it to work on your end. it always fails on mine and it pisses me off.
he doesn't really understand neural networks either, he's just using tensorflow. there are far better videos on the subject, just look around. these "programmer" youtubers really are the bad side of the platform; they gain massive viewers and they are junior at best.
Early, and love the video!
It is really helpful and all, thank you. For me, there could be more illustration of what happens to our data after each command. Often you just guess it, but you cannot be sure, especially when you are not using standard Python methods, as you only see the output. Like with neural networks itself. :)
why does the neural network immediately correctly determine the numbers without any training?
@@НикитаБаронов-с6н It was trained for 3 epochs with a standard dataset
Nice video
my AI thinks 3 is a 6. :(
Can anyone share the source code ?
can you u upload the source code
Thanku so much bro
very practical
Unbelievable, no one before me commented that this only works for 28x28. Really unbelievable. Do you really understand what he explained? I think you do not.