5:04 I don't know why, but the way you casually said "ha, blue dog" and then continued on was hilarious to me xD
You are one of the most chill and laid-back smart teachers I've ever seen. Such an informative tutorial. Thank you :)
Just wow, I didn't want to watch the whole video, but you are a magnet!! Excellent style of teaching.
Thanks a lot for the videos! A great way to start my Data Science journey. Really grateful to you for posting such content for free.
Thank you for the super thanks, best wishes to you on your programming journey!
These were videos that I requested. Please make more Project videos in Machine learning and deep learning videos and real-world machine learning projects in PYTHON because You Are The Best to learn from
This is so true. I've been so frustrated trying to learn this topic, and a lot of videos are just people explaining what neural networks are, like the beginning of the previous video in this series, but nobody actually gets into the code and explains how to set up functions. There is A HUGE DIFFERENCE BETWEEN DRAWING A NEURAL NETWORK AND ACTUALLY CODING ONE!!! ANYONE CAN DRAW ONE
So true ! Make more such project videos. They prove to be of great help.
Can someone tell me what X=[] and Y=[] is used for?
@@a.n.7338 Yes I do want to know
@@a.n.7338 That's to initialize it to an empty list. But at 15:30, he learned that it did not initialize the type. And that's something I especially like about his videos: he doesn't just edit out the mistakes, and he talks about things like the need to reshape being kind of stupid.
I love the way how you personified the NN. The part with shuffling makes me laugh!
I keep Getting:
error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize'
What does it mean, could not find anything on the web, help!
Here's a course you'll need.
Face Mask Detection Using Deep Learning . It's paid but it's worth it.
khadymschool.thinkific.com/courses/data-science-hands-on-covid-19-face-mask-detection-cnn-open-cv
A nice alternative to pickle is
np.save('features.npy', X)  # saving
X = np.load('features.npy')  # loading
Thanks for sharing!
thanks
Thanks
very helpful
thanks
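For anyone comparing the two approaches, here is a minimal, self-contained sketch of both round-trips. The file names are illustrative, and a temp directory stands in for your working folder:

```python
import os
import pickle
import tempfile

import numpy as np

# Illustrative feature array standing in for X from the tutorial.
X = np.arange(12, dtype=np.uint8).reshape(-1, 2, 2, 1)

tmpdir = tempfile.mkdtemp()

# np.save / np.load round-trip.
npy_path = os.path.join(tmpdir, "features.npy")
np.save(npy_path, X)
X_npy = np.load(npy_path)

# pickle round-trip, as done in the video.
pkl_path = os.path.join(tmpdir, "X.pickle")
with open(pkl_path, "wb") as f:
    pickle.dump(X, f)
with open(pkl_path, "rb") as f:
    X_pkl = pickle.load(f)

print(np.array_equal(X, X_npy), np.array_equal(X, X_pkl))  # True True
```

Both preserve the array exactly; np.save is simpler for single arrays, pickle handles arbitrary Python objects.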
For those who want to use RGB/color images, modify these lines!
Change these:
img_array = cv2.imread(os.path.join(path,img) ,cv2.IMREAD_GRAYSCALE)
plt.imshow(img_array, cmap='gray')
X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
To:
img_array = cv2.imread(os.path.join(path,img))
plt.imshow(img_array)
X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 3)
And this should work, good luck!
you are the man
thanks
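To see what that last change does to the array shapes without needing the actual dataset, here's a small sketch using blank synthetic images in place of the cv2.imread results (IMG_SIZE and the image count are just illustrative):

```python
import numpy as np

IMG_SIZE = 50  # same resize target as in the tutorial

# Synthetic stand-ins for cv2.imread results (reading real files is omitted here):
gray_imgs = [np.zeros((IMG_SIZE, IMG_SIZE), dtype=np.uint8) for _ in range(4)]
color_imgs = [np.zeros((IMG_SIZE, IMG_SIZE, 3), dtype=np.uint8) for _ in range(4)]

# Grayscale: one value per pixel, so the trailing dimension is 1.
X_gray = np.array(gray_imgs).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
# Color (BGR from cv2.imread): three values per pixel, so the trailing dimension is 3.
X_color = np.array(color_imgs).reshape(-1, IMG_SIZE, IMG_SIZE, 3)

print(X_gray.shape)   # (4, 50, 50, 1)
print(X_color.shape)  # (4, 50, 50, 3)
```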
Another amazing set of tutorials. You truly are helping me understand Python and Deep Learning at a whole different level. Thank you for your time and expertise, Sentdex.
Works like a charm. For those (beginners like me) who had an issue in the layers, like "kernel_size not defined": just replace "(3,3)" with "kernel_size=3" in layers 1 and 2 and it will be good to go.
Hi Harrison.
You've been doing an absolutely amazing series of Deep Learning implementation videos with Python, TensorFlow, Keras, etc.
This is the most useful work you've ever done. I've learned the machine learning and deep learning theory easily, but implementation and application are difficult for me. Please keep doing this.
This is currently the best TensorFlow tutorial on YouTube. Can't express how thankful I am after wasting so much time on crappy videos named "TensorFlow in 10 mins", "5 mins", "1 min", etc.
Epic moment: "haa, blue dog" @ 5:05
hahahahaa
i laugh so hard
The value of these videos is fucking incredible. After some setup with Anaconda to get TensorFlow and Python 3.6 to work in PyCharm, I was able to reproduce all of this with my own data. Your explanations are absolutely on point and I have no questions left after this part.
Thanks, that's awesome to hear!
I started my Python journey with you back in my university days. Thanks for being there, boss.
Amazing videos, great AI tutorials, honestly one of the best programming channels on YouTube. Thank you for making these videos.
Yeah, I also really enjoy his videos. He inspires me to make my own AI videos.
This series is so thorough and easy to understand (for me at least :D)!
I can't wait for the next part!
Great to hear!
THANK YOU SO MUCH!!! I just started with Machine Learning and Neural Networks and this video helped me a lot!!!
Thank you so much for this video. As a programmer who just wants to start prototyping a simple model without a great DL background this video gave me the tools to get on with my work.
Thanks, I have been looking forward to this tutorial; it will help with my thesis.
On Windows, if you have Anaconda installed and cannot find the module cv2, you may simply have to do:
pip install opencv-python
If you are on Linux you can do:
pip install opencv-python
If you are having trouble here, use the Anaconda prompt. This is in Anaconda Navigator, where you start Jupyter. Then simply type in pip install opencv-python; mine at least worked great.
opencv-python is installed, but it cannot find the module cv2
You are very good in teaching and the world need you, Sir.
One of the best programming channels on YouTube. Subscribed and hit the bell ;)
Thanks!
6:50 "the hand of a dog" - it is called a paw! hahaha
Great tutorial, but the way you load the data is not very memory efficient and this will cause problems with large datasets. First the training_data list is written into RAM and afterwards the same amount of memory is reserved when converting into a numpy array. So this approach is only good for datasets < RAM size/2.
Another option would be to create the numpy array at the beginning using np.empty and then write the data as entries into the array. This way the dataset can be as large as your RAM.
If the dataset is larger than the RAM size it is suggested to use a generator that loads and yields the data during training. This way your dataset can be as large as your SSD, but training speed is most likely limited by the read speed of the drive.
Just something I had to deal with during my thesis in the last couple of months. Maybe you could make a tutorial on the generator one, not a lot of people know about this.
Anyways, keep up the good work!
This looks very interesting and I'm experiencing some errors with this as well on my thesis. Can I contact you via email about this?
Sure, can you contact me via YouTube? Or post your email and I will contact you.
@@will1337 delete the comment with your email
@@will1337 Did you fix your problem? My Python returns "MemoryError" when doing the np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 3) step.
I'm doing the colored version (so I have 3 color channels, which is causing me trouble)
@@stewie055 I did fix it by changing the sampling rate of my data. Maybe resize your images? I am not sure how to fix it with image data, sorry.
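A rough sketch of the generator idea described above, with placeholder arrays standing in for images loaded from disk. In a real version you would cv2.imread and resize each path inside the loop; the file names and sizes below are hypothetical:

```python
import numpy as np

IMG_SIZE = 50  # illustrative

def batch_generator(paths, labels, batch_size):
    """Yield (X, y) batches; a real version would cv2.imread each path here."""
    while True:  # Keras-style generators loop forever over the dataset
        for start in range(0, len(paths), batch_size):
            batch_paths = paths[start:start + batch_size]
            batch_labels = labels[start:start + batch_size]
            # Stand-in for loading and resizing images from disk:
            X = np.zeros((len(batch_paths), IMG_SIZE, IMG_SIZE, 1), dtype=np.float32)
            y = np.array(batch_labels)
            yield X, y

paths = [f"img_{i}.jpg" for i in range(10)]  # hypothetical file names
labels = [i % 2 for i in range(10)]
gen = batch_generator(paths, labels, batch_size=4)
X, y = next(gen)
print(X.shape, y.shape)  # (4, 50, 50, 1) (4,)
```

Only one batch is ever in memory at a time, so the dataset can be as large as the drive holding it.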
"the hand of a dog", so wise sentdex. forever indebted
I enjoyed this so much. I have a CS degree but never had a great time with programming, so I decided to get a job that isn't programming. But since I tried learning pyautogui and selenium from your videos, I got so excited to learn ML, and now here I am... following your Keras tutorial :D
Sentdex: understands neural nets
Also Sentdex: doesn't know what to call a dog's paw
More like neural pets lol
These vids always cheer me up :) You are by far my favourite instructor. :) When I feel depressed I just watch your videos.
Nice to hear :)
Sentdex, these are the best videos I have ever seen on Deep Learning. Amazing tutorials. You are the best at what you do. Why did it take me so long to find this channel?
Welcome here :D
@@sentdex can I have your email if you don't mind?
@@sentdex Hello. I got stuck when setting the directory in the file.
Kindly advise. Thank you.
11:54 that's a great imitation of a model trying to learn ^^
I like the teaching style, it's simple to understand
1:52 I was taking it seriously till the mug appeared
This video is amazing, I am so glad I found your channel. I have tried learning this stuff for quite a while now through other YouTube videos, but nobody could explain it this well.
@Ruben can you share the code
@RubenUribe through email if possible
Hey sentdex, please keep up with your videos. They are really helpful in so many ways. I'm just starting to get into ML and started studying Computer Science just because of ML, and your videos are so helpful. Thumbs up to you.
I've been looking for ways to upload 40k images to my Drive for 3 days. In one word: you are perfect.
You can download Google Drive on your machine and sync it with your drive
Thanks Snowden, nice tutorial.
Guys, the Kaggle dataset that he is referring to no longer has folders named cats and dogs... there are 2 folders, one for training and another for test. You've got to loop through the images in the training folder and assign the labels using the image names.
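A small sketch of that label-from-filename approach, assuming the files follow the cat.N.jpg / dog.N.jpg naming of the newer Kaggle layout (the file names below are hypothetical):

```python
CATEGORIES = ["Dog", "Cat"]  # same class order as in the tutorial

def label_from_filename(filename):
    """Map 'dog.123.jpg' / 'cat.456.jpg' to the tutorial's 0/1 class index."""
    name = filename.lower()
    if name.startswith("dog"):
        return CATEGORIES.index("Dog")  # 0
    if name.startswith("cat"):
        return CATEGORIES.index("Cat")  # 1
    raise ValueError(f"Unrecognized file name: {filename}")

# Hypothetical file names from the training folder:
print([label_from_filename(f) for f in ["dog.0.jpg", "cat.0.jpg", "cat.1.jpg"]])  # [0, 1, 1]
```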
Brother, you're amazing. This video has been a huge help. Thanks.
At 14:00, is there any reason why we don't just do:
training_data = np.array(training_data)
X = training_data.T[0]
y = training_data.T[1]
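One reason to be careful with the .T approach: np.array on a list of (image, label) pairs produces a dtype=object array (and can fail outright on inhomogeneous shapes in newer NumPy versions), so the transposed rows are object arrays rather than a numeric image tensor. Unpacking with comprehensions keeps the dtypes clean; a tiny sketch with placeholder images:

```python
import numpy as np

IMG_SIZE = 2
# Tiny stand-in for training_data: (image, label) pairs.
training_data = [(np.zeros((IMG_SIZE, IMG_SIZE), dtype=np.uint8), lbl) for lbl in [0, 1, 0]]

# Comprehension-based unpacking keeps the image dtype numeric:
X = np.array([features for features, _ in training_data]).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
y = np.array([label for _, label in training_data])

print(X.shape, X.dtype)  # (3, 2, 2, 1) uint8
print(y.tolist())        # [0, 1, 0]
```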
Great video, looking forward to seeing the next one.
Seriously, you teach better than my professors.
Thanks for teaching us. :)
Thanks @sentdex, just what I was looking for :)
This video was 2 years ago! Hey sentdex! THANKS :D
This was the most helpful video I've found. thank you!
Exactly the video I needed! Thx bro!
I like your way of teaching.
Amazing video man... Looking forward to the next one
plt.imshow(img_array, cmap="gray")
plt.show() -- shouldn't this show all images, not just the first one?
He had "break" in the for loops, so it looped once and then "broke" out of the for loop. That is why "img_array" has only 1 image's data.
@@venkuburagaddaacc Okay Thanks :)
@@venkuburagaddaacc ty
So far, so good! The first dog grayscale image was successfully displayed. I was getting nervous there for a minute! I got confused when you added all that space at the bottom; it threw off my Jupyter notebook. I followed you thereafter, but my output did not print. Well, on to other videos from other channels. I've got to keep moving on. It was good while it lasted.
It was a really useful video... thank you very much for your timely help.
Sir your videos are epic! You are an excellent teacher
Really interesting and objective explanation of the topic! LOL'd hard because of the sudden blue dog.
Great tutorial. Looking forward to the next part!
Expect it tomorrow!
Can't wait for the next video!! Congratulations!!
Finally, I got through this video without any errors.
What is your opinion on setting an aspect ratio and adding padding during resizing? I just feel like forcing an n x n dimension distorts images too much when we have the varied original resolutions.
Idk why I lmfao when you said “ha, a blue dog” hahahahahah
14:42 I had to change both X and y into numpy array to make it work. y as a list didn't work.
Did your model still work in the end? I'm having this issue now.
I found that you have to make the y labels a numpy array, or the model will not take them. Other than that, this is an amazing tutorial.
Thank you! Awesome video. Looking forward to the next one 👍👍👍
Very detailed and helpful and cute. Thank you
Thanks. On Windows, I had problems with your DATADIR="X:/Datasets/PetImages" @2:10. I had my own version of course, but the code said (in effect) my path was not valid, even though it was. I discovered Colab runs a Linux-style OS. There are several methods of doing it; I used the Google Drive mount and zipfile to extract (instead of the Windows File Explorer Extract), ending with DATADIR='/content/drive/MyDrive/datasets/training_set'. I finally got to see the gray dogs!
Thank you so much, this video helped out so much with an upcoming video of mine.
Sentdex, I didn't understand when you did the reshape what the -1 exactly meant... you glossed over it a little bit. What does it exactly mean? Thanks
i also want to know, could some1 explain please?
Same problem! I ve stuck on that!
I haven't made it before; I tried to reshape after, but it didn't work.
It basically tells numpy the following:
given all the other dimensions (IMG_SIZE, IMG_SIZE, 1), figure out the remaining one (in this case the number of images).
It's an automatic way of writing X.reshape(number_of_samples, IMG_SIZE, IMG_SIZE, 1).
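A tiny runnable illustration of that inference:

```python
import numpy as np

IMG_SIZE = 4
# Six flattened "images" worth of pixels in one buffer:
data = np.arange(6 * IMG_SIZE * IMG_SIZE)

# -1 asks numpy to infer the first dimension from the total element count:
X = data.reshape(-1, IMG_SIZE, IMG_SIZE, 1)
print(X.shape)  # (6, 4, 4, 1): the -1 was inferred as 96 / (4*4*1) = 6
```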
*sentdex* and *DeepLizard* have both been _VERY_ helpful with teaching me how to program.
Thanks.
holy shit these videos have helped me, thank you so much dude!
Man , you are the best !! Keep up doing like this !! :D
Waiting for the next video...thanx man you are an amazing teacher
Next one just released :)
Amazing! going through it now
Excellent work
That's an informative video. thank you so much
Great Video !!
10:22 what kinda cpu you have that could iterate over all of those images in few seconds :D
Thank you so much SentDex
This helps me a lot.
Great tutorial! However, I have a question: what if an image belongs to multiple categories? Say you are sorting images based on size and colour and you stumble on an image that is both red and big.
Hey man. What should I write instead of the "training_data.append" line if I want a multiclass dataset? Yours has two classes; imagine I have a 5-class dataset.
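One common approach (a sketch, not from the video): append the index of the category in your CATEGORIES list instead of just 0/1. Those integer labels work directly with sparse_categorical_crossentropy, or you can one-hot encode them for categorical_crossentropy. The five classes below are hypothetical:

```python
import numpy as np

# Hypothetical five-class setup:
CATEGORIES = ["cat", "dog", "bird", "fish", "horse"]

training_data = []
# Stand-ins for (image, category) pairs; real code would loop over folders:
for category in ["dog", "bird", "dog"]:
    img = np.zeros((4, 4), dtype=np.uint8)  # placeholder image
    class_num = CATEGORIES.index(category)  # 0..4 instead of just 0/1
    training_data.append([img, class_num])

y = np.array([label for _, label in training_data])
print(y.tolist())  # [1, 2, 1]

# One-hot version, if the loss expects it (e.g. categorical_crossentropy):
one_hot = np.eye(len(CATEGORIES))[y]
print(one_hot.shape)  # (3, 5)
```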
I suggest using context managers for file opening. Cleaner, and better for beginners since you don't have to remember to close the file.
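Agreed; here's a minimal sketch of the pickle save/load from the video rewritten with context managers (the temp path and placeholder array are illustrative):

```python
import os
import pickle
import tempfile

import numpy as np

X = np.zeros((2, 4, 4, 1), dtype=np.uint8)  # placeholder features
path = os.path.join(tempfile.mkdtemp(), "X.pickle")

# 'with' closes the file automatically, even if an exception is raised:
with open(path, "wb") as pickle_out:
    pickle.dump(X, pickle_out)

with open(path, "rb") as pickle_in:
    X_loaded = pickle.load(pickle_in)

print(np.array_equal(X, X_loaded))  # True
```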
Hi, great work!
I have a question, though, upon the "homework challenge" !
reshape(-1, IMG_SIZE, IMG_SIZE, 3) pops a ValueError: cannot reshape array of size 239640576 into shape (224,244,3).
What's your opinion and solution?
Thank you
If you're using Keras, you should use the flow_from_directory function; it's really the same thing without the hassle of running out of memory trying to load the entire dataset.
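For anyone curious what flow_from_directory relies on, here's a Keras-free sketch of the core idea: the label is inferred from the subfolder each file lives in. The tiny directory built below is synthetic:

```python
import os
import tempfile

# Build a tiny fake dataset directory: one subfolder per class.
root = tempfile.mkdtemp()
for cls, files in {"Cat": ["0.jpg", "1.jpg"], "Dog": ["0.jpg"]}.items():
    os.makedirs(os.path.join(root, cls))
    for f in files:
        open(os.path.join(root, cls, f), "w").close()  # empty placeholder files

# The label is simply the index of the subfolder the file lives in:
classes = sorted(os.listdir(root))  # ['Cat', 'Dog']
samples = []
for cls in classes:
    for fname in sorted(os.listdir(os.path.join(root, cls))):
        samples.append((os.path.join(root, cls, fname), classes.index(cls)))

print(len(samples), classes)  # 3 ['Cat', 'Dog']
```

The real Keras generator additionally loads, resizes, and batches the images lazily, so only one batch is in memory at a time.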
I'm not sure which ends up being better....the videos or the random (read: dope) coffee mugs you keep pulling out in them ;)
16:25 There isn't a problem with the kernel; I restarted it and still have the same issue. Don't know how to repair it.
Same here...
@@내능지어디감 I think I found a solution: just open the Anaconda prompt and install the missing library (e.g. pip install opencv-python). It helped me.
@@pawegoebiowski1641 Well, that didn't work for me. I think I should post it on Stack Overflow... but thanks anyway :D
Oh thank you! I've been looking for a way to load my own dataset and here you go! :З
Love getting taught ML by Snowden!
Hello, is it necessary to print all the images? It is printing only one dog image; what about the others?
I am doing ORB detection; do I need to loop through all the images?
Re: Changing the X list to an X array @16:35: X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 1). This combines conversion of the list to an np array with reshaping. (a) I cannot find an explanation of this syntax with 4 parameters. Any help? (b) The '-1' has various meanings in array reshape. What does it mean here? (c) edit: removed (d) The first array in the X list starts as: [102 104 62 ... 69 75 83] (50 elements in the dim). But the X array starts as: [[[[102] [104] [ 62] ... [ 69] [ 75] [ 83]], one element in each dim. Is that correct? (e) The last parameter is 1. Where does that show up in the X array? (f) To simplify these questions for debugging, I used an img size of (3, 2) (width, height), giving an array shape of 2r x 3c, and I process only two images, skipping random.
After changing the code to handle color, the "light" appears. (b) The '-1' tells numpy to infer that dimension (the number of images). With color, there are 3 elements per pixel. (e) The last parameter '1', as you mentioned, is for gray images (one value per pixel). When using color images, change this to 3. The first part of the X array then starts as [[[ 43 55 78]. Hope this helps.
Hey sentdex, thanks for the video! I can't see what you did to reshape the "y" list at the end of the video @16:20... Could you please clarify this? Thanks again!
Nice T-Rex mug. For anyone interested, it is the "3-D Shaped T-Rex Dinosaur Design Ceramic Mug" on amazon
Hi Sir,
I saw you created empty X and y lists to store the features and labels. However, I have no idea how you keep the features matched with the correct labels from the training dataset. Please kindly advise. Thanks in advance!
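The pairing comes from building X and y in the same loop over training_data, so index i of X and index i of y always refer to the same image. A minimal sketch with placeholder images:

```python
import numpy as np

IMG_SIZE = 2
# training_data holds [features, label] pairs created together in one loop,
# so their order encodes the pairing:
training_data = [
    [np.full((IMG_SIZE, IMG_SIZE), 1, dtype=np.uint8), 0],  # e.g. a dog image
    [np.full((IMG_SIZE, IMG_SIZE), 2, dtype=np.uint8), 1],  # e.g. a cat image
]

X, y = [], []
for features, label in training_data:
    X.append(features)
    y.append(label)

X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
print(X.shape, y)  # (2, 2, 2, 1) [0, 1]
```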
so good and so clear
At 3:33 you converted the data to grayscale, and at 14:45 you said you put 1 because it's grayscale. If it's already grayscale data, why do you have to put that 1? Could you explain? I'm kinda confused.
The playlist is amazing; however, I came across this issue after running part 2 and part 3 back to back:
the y also needs to be an array, so the model.fit in part 3 can run...
thank you once more :)
X.append(features)
AttributeError: 'numpy.ndarray' object has no attribute 'append'
I got this error while preparing the dataset @sentdex, how can I resolve it?
Hey, it's because your X variable is a numpy array; append is a list method.
X = []  # these are lists
y = []  # list
and lists have the "append" attribute.
You just need to move X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 1) outside of the for loop.
Hi, how do we use this method if we have 4 different categories?
I see no one has commented about the inline printing of images/plots in Jupyter Notebook so here it is:
%matplotlib inline
Add this line before or after importing libraries and you will not have to use plt.show() anymore.
How do I load the image dataset from my computer and split it into two datasets for training and testing?
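One simple way (a sketch, not from the video): shuffle indices once so images and labels stay aligned, then slice. The data below is synthetic; in practice X and y would come from the loading loop in the tutorial:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder dataset: 10 "images" and labels, kept aligned while shuffling.
X = np.arange(10 * 4 * 4).reshape(10, 4, 4, 1)
y = np.arange(10) % 2

idx = rng.permutation(len(X))  # one permutation applied to both arrays
X, y = X[idx], y[idx]

split = int(0.8 * len(X))  # 80/20 split
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

print(X_train.shape, X_test.shape)  # (8, 4, 4, 1) (2, 4, 4, 1)
```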
The first video was great, looking forward to watching this one through as well. Can you make a video about using the CPU vs the GPU for some of these training processes? I would like to learn more about forcing the script to use the GPU instead of the CPU. For instance, some of your older videos (like the Monte Carlo Simulation series) could benefit from this. Thanks!
To use the GPU, you just install the GPU version of TensorFlow. Depending on your OS this is slightly different, but:
Windows: th-cam.com/video/r7-WPbx8VuY/w-d-xo.html
Ubuntu: th-cam.com/video/io6Ajf5XkaM/w-d-xo.html
Obviously now you'd use the latest version of TF and the matching CuDNN and CUDA Toolkit versions. Currently CUDA Toolkit 9.0 and CuDNN 7.0.
It seems like all the videos and tutorials on this topic only deal with binary situations. Outside of the Keras docs on flowers there is a lack of variety on multiple classification approaches (> 2 classes). I have a feeling that might be where complexity and accuracy dive off a cliff.
awesome tutorial
Cool video as always, but as of TF 1.9 you can use tf.data with Keras to do what you did here, and it will make a much more efficient pipeline for training larger datasets. This will also work for converting to TFRecords if you want to change the format. This becomes important when using fast GPUs/TPUs, as they are no longer the bottleneck; loading data into the model is.
I did mention there are methods for larger datasets, and I plan to eventually cover that, but that gets far more complex to do. I find that, for most applications and what 99% of what people are doing with deep learning, they don't need to be concerned with that added complexity, which is why I didn't cover it here in part 2, but will be something to cover later.
sentdex, I understand TFRecords being too hard. But tf.data is now very easy to use with Keras and is what we are trying to teach people to use going forward. There are very simple examples here: www.tensorflow.org/guide/keras under tf.data datasets.
I must be looking in the wrong spots then. What I've seen from the data api doesn't look very beginner friendly. I'll poke around more and see what I can find.
i can't wait for the next video
Soon(tm)