I would like to thank you with all my heart. Not for only this video but for all of your videos.
You are welcome.
Very very excellent and understandable videos - and in my case very interesting how other disciplines use ML /AI /deep learning. Thank you so much for sharing !! EXCELLENT 🙂
Thank you.
i enjoy watching this video, thank you for sharing
Glad you enjoyed it
Sorry, I have one more question. I see that you apply the Gabor filters on img2, which has already been reshaped. Is it the same as applying them to the 2D img?
Excellent question. No, applying Gabor to a 1D array is not the same as applying it to a 2D array. Please try applying it to the array before reshaping, and then reshape the filtered array to add it to the data frame. I bet you'll get better results, as 2D retains the pixel context. 1D also works, but 2D may be better.
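To make the suggestion concrete, here is a minimal sketch of the order of operations: filter while the image is still 2D, then flatten the filtered result for the dataframe column. The conv2d helper is a plain-numpy stand-in for cv2.filter2D, and the names here are hypothetical, not from the video's code.

```python
import numpy as np

# Minimal stand-in for cv2.filter2D: correlate a 2D image with a kernel,
# using edge padding so the output has the same shape as the input.
def conv2d(img, kernel):
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)   # toy 2D "image"
kernel = np.ones((3, 3)) / 9.0                   # stand-in for a Gabor kernel

fimg = conv2d(img, kernel)      # filter in 2D, keeping pixel context
column = fimg.reshape(-1)       # flatten AFTER filtering, for the df column
```

Flattening first (the img2 route) would hand the filter a 1D signal, so the kernel could no longer see vertical neighbours.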
@@DigitalSreeni I just came to say this. If you have the time, it may be useful to change your code on GitHub from img2 to img, just in case other people download the code and encounter this problem without realising it. I should also point out that I'm finding that all kernels with lambda=0 within your Gabor loop are arrays of NaNs. I guess these kernels will just be skipped in the random forest as they won't produce any useful information, so it's not the end of the world, but I thought I would point it out. I can raise issues on GitHub if you prefer that method of reporting. Overall these courses are brilliant though, thank you!
@@thomascwilkes5 I would suggest the same. All Gabor filters were applied to his img2, which is a 1D array. The code needs to be fixed to use the correct 1024 x 996 image.
Hi Sreeni. Concerning the extracted features which are then to be used in the "Traditional Machine Learning" approach, do the features not need to be normalized? When playing around with my own image, the max pixel intensity is 255; however, the maximum value returned by the Sobel filter is 0.73. Does the machine learning model not need the features to be normalized to the same range as every other feature, including pixel intensity? And if not, when do we need to worry about normalizing features? Thanks
May I ask about the labeled image at 19:10? What do you mean by labeled image? Is it the name? Is it the same as the train image? Is it just showing the original image pixels? Could we use other ground truth, e.g. a writer's ID? Thank you sir.
I was asking the doctor about my father's MRI and he showed me photos. They suspected he had some cancer hanging off his appendix long ago, but they could only guess. I suppose this method could be used to make data for a prediction. OMG, what a ton of work it would be.
Thank you so much for this video series, they are incredibly informative. I was wondering why it is important to first convert the image to greyscale? I was thinking the original pixel values might be more informative? (eg if an index such as NDVI has been calculated per-pixel, and the resulting image used) Thanks for any help with this!
Hi, thank you for this great tutorial. I was wondering how you developed your labeled image in this video. Did you already apply some histogram segmentation to it beforehand?
I'm curious too on this!
I also guess some histogram segmentation was used, although I wouldn't bet it all on that, because in an earlier video (I guess the one where he developed the random forest) he said something about proximity of pixel values and suggested the entropy filter (if I remember well, it was clay and sandstone that had those conditions).
If histogram segmentation could do the job then why do you need machine learning? Histogram segmentation can work for some datasets but not if you have a lot of texture, for example mitochondria in an electron microscope image.
There are a few labeling tools out there but they all seem to be very complicated. We, at work, are developing an easy-to-use and free tool for image labeling. It should be out soon, so please stay tuned. In the meantime you can sign up for a free account at: www.apeer.com This is where we will deploy the machine learning labeling tool.
By the way, APEER is a free-to-use cloud image analysis platform for academic and individual use.
@@DigitalSreeni Is there any tutorial available on labelling image?
Great video! What do you use to create the image masks?
great tutorial. Can you make a video on how to combine hand-crafted features with CNN? I want to ask whether the CNN features can encode angles information similar to HOG or Gabor filters. It will be great if we can combine them
Great explanation. Could you share the training image file and label file images?
I'm trying to get my material onto GitHub; as soon as I do that I will share everything, code and images. Until then please bookmark this and keep an eye on it. github.com/bnsreenu/python_for_microscopists
How do I do this without having to convert the colours to grey, just keeping the original image?
"MemoryError: could not allocate 268435456 bytes"
Is the program meant to need 256 MB for one image?
If you have 2 images of 1k x 1k, how would you build your dataframe? Would just concatenating the second image's pixel values and other filter responses onto the first one work?
Thanks for your videos sir
I got an error like "length of values does not match length of index" while adding the color image's dataframe to the original df. I am using a brain MRI image.
While labeling the image I only colored half of it, not all of it as you did. Could you help me out with solving this error?
Thanks a lot. I want to know how we decide on the parameters for the Gabor filter, like kernel size, theta, lambda, etc. How does this depend on the dataset? Is there any standard that works for all types of video/image frames?
Parameters depend on the features in your image. The common method would be to create banks of filters and evaluate which ones worked.
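For anyone wondering what such a filter bank looks like in practice, here is a hedged sketch that sweeps theta and lambda. The gabor_kernel function is a plain-numpy version of the formula behind cv2.getGaborKernel (parameter names follow OpenCV's); the specific parameter values are just examples, not recommendations.

```python
import numpy as np

# A minimal numpy Gabor kernel: a Gaussian envelope modulating a cosine
# carrier along the rotated x axis. Mirrors cv2.getGaborKernel's formula.
def gabor_kernel(ksize, sigma, theta, lambd, gamma=0.5, psi=0.0):
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lambd + psi))

# Build a small bank by sweeping theta and lambda; in practice you would
# filter the image with each kernel and keep only the responses that
# separate your classes. Skipping lambd == 0 avoids division by zero
# (the NaN kernels mentioned in another comment here).
bank = []
for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
    for lambd in (np.pi / 4, np.pi / 2):
        bank.append(gabor_kernel(9, sigma=3.0, theta=theta, lambd=lambd))
```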
Hi, first of all thanks for your awesome work and clear explanations!
Quick question: you're working here with a single image and the dataframe is already quite big. What if you had to work with a full dataset? I currently have a dataset of 2500 original images. Wouldn't the dataframe be so big it would be too hard to handle? Or maybe use PCA + t-SNE on it?
How would you handle such a dataset for random forest (or SVM) predictions?
Thanks in advance
If you have a large training dataset then you will get much better results with a deep learning approach. This method (traditional machine learning) works great for small training data, for example a handful of 1k x 1k images and masks. With deep learning, you can load data in batches for training and update parameters, which is why you can work with very large datasets. You can find some tricks for traditional machine learning, but if you have lots of data you may as well use deep learning.
@@DigitalSreeni Thanks for your quick reply :)
I am required to use a "simple" ML model before using Deep Learning, I then used Unet thanks to your video too ^^
Keep up the great work, have a nice day :)
I'm trying to apply this example to a satellite image to identify citrus groves. I'm using a labeled image that contains citrus-grove pixels labeled as 1 and non-citrus labeled as 2, but I don't have the entire image labeled, so I have pixels labeled as 0 (not labeled in any category). How can I remove the samples that have df['Label']==0? Thank you very much!
I did this in my video 159b while explaining how to use a pre-trained deep learning model for semantic segmentation. Here is the link: th-cam.com/video/vgdFovAZUzM/w-d-xo.html
If you want to stick with fully traditional machine learning like I showed in this video then I did exactly that in my work channel. Here is the link to that playlist, watch videos 78-82: th-cam.com/play/PLHae9ggVvqPgyRQQOtENr6hK0m1UquGaG.html
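For the filtering asked about in the question, boolean indexing in pandas does the job. This is just a sketch; the column names are assumed to match the video's dataframe.

```python
import pandas as pd

# Toy dataframe mimicking the video's layout: one row per pixel,
# with Label == 0 meaning "not labeled in any category".
df = pd.DataFrame({
    "Pixel_Value": [10, 20, 30, 40],
    "Label":       [1,  0,  2,  0],
})

labeled = df[df["Label"] != 0]        # drop unlabeled samples
X = labeled.drop(columns="Label")     # features for the classifier
y = labeled["Label"]                  # targets
```

After this, X and y contain only pixels from the two real classes, so the unlabeled pixels never influence training.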
The above code is throwing this error: "Length of values (1019904) does not match length of index (3059712)". How do I fix this?
Hi, I hope this reply is still helpful five months later, but it seems like the conversion from RGB/BGR to gray is missing somewhere, judging from the fact that the length of the index is 3 times the length of the values.
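A quick way to check that theory: a 3-channel image flattens to exactly 3x as many values as its grayscale version. In the video the conversion would be cv2.cvtColor(img, cv2.COLOR_BGR2GRAY); the sketch below shows the same weighted-sum conversion by hand (BGR channel order assumed), using a toy array.

```python
import numpy as np

# Toy 4 x 5 color image in BGR order, like OpenCV would load it.
bgr = np.random.randint(0, 256, size=(4, 5, 3)).astype(np.uint8)

# Luminosity conversion with the standard BT.601 weights, which is
# what cv2.cvtColor applies for BGR -> gray.
gray = (0.114 * bgr[..., 0]      # blue
        + 0.587 * bgr[..., 1]    # green
        + 0.299 * bgr[..., 2])   # red
```

Flattening bgr gives 60 values while the grayscale index expects 20, exactly the 3x mismatch in the error message; converting before building the dataframe removes it.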
Is there any tool to do masking on images?
Yeah, you can use one of the many annotation tools, like LabelMe.
Sir, should I convert the image to gray? I need the RGB color features to be extracted. How would that work?
Thanks
Hey, sir. Is it possible that retraining with the random forest model will result in catastrophic forgetting? And, if such a thing exists, can you recommend a way to avoid it?
I'm curious why you're worried about catastrophic forgetting? Do you plan on branching your model into something that does multiple tasks? As far as I know, this concept is relevant for neural networks and I never bothered as I train my models to perform specific task. I try to generalize the model by working with diverse looking data but I always use the fully trained model for inference. I also save models after training on certain type of data and use that model for inference. You can continue training using new data, but your end model will be tuned towards new images.
Great videos, sir. How would I create labels for eye segmentation (I want to segment into pupil, iris, sclera)?
We (at work) are creating a tool for labeling images, please stay tuned. It will be free for personal use or for academic work. Please sign up for your free account at www.apeer.com. Again, it is a free online image analysis platform.
@@DigitalSreeni Have you created that tool for labelling images yet?
How do I get the label image? Is it really labeled one pixel at a time? That would be very time consuming.
Hi sir, thank you for this tutorial, but I have a problem after adding the Canny edges: "ValueError: Length of values does not match length of index". Is it because I'm using the classic Python IDE?
It has nothing to do with IDE. Please verify your code and track the variables to see if you are trying to match values and index with different lengths - as the error suggests. A quick google search will help find a good solution, it is tough for me or anyone to help without context or details.
@@DigitalSreeni thank you sir.
Would it work the same way with an image with just a few pixels labelled? What would the differences be?
With only a few pixels labeled you need to consider how you'd like to handle unlabeled pixels. You can assign a value 0 to unlabeled pixels and drop them from training. You can use this for labeling: www.apeer.com/annotate (it is free).
Please watch this video for more information on how to get from no labels to partial labels to full labels...
th-cam.com/video/-u8PHmHxJ5Q/w-d-xo.html
Hello sir, can you make some videos on GNNs for images? It would help.
Can you explain object feature extraction using Euclidean distance?
Hello sir,
thank you so much for sharing.
Can you please do videos related to ELM?
Thanks in advance
Hi DigitalSreeni,
Are you still active??
These videos are from a while back.
I have a few things to ask.
That's such a good tutorial; I love all of them. By the way, is there any way of extracting features from the 3 color channels?
Most functions in Python can handle 3 channels, so you can extract features from a color image directly. For example, I just tried generating Gabor kernels (32 features in total) from a grey image and a color image, both 1024 x 996 pixels. For the grey image, I got 32 features and 1019904 entries in total (1024x996 pixels). For the color image I got 3059712 entries, corresponding to 3 channels. In other words, I got features at every pixel in each of the color channels.
In summary, most of the time functions can handle color images. If not, you can split channels yourself and apply functions to each channel and then merge the channels again.
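The split-filter-merge route might look like this sketch, with np.gradient standing in for whatever 2D filter you actually use; the array shapes are toy values, not the 1024 x 996 image from the reply above.

```python
import numpy as np

# Toy H x W x 3 color image.
img = np.random.rand(8, 8, 3)

# Apply a 2D filter to each channel separately and keep each channel's
# response as its own flattened feature column.
features = []
for c in range(img.shape[2]):          # one pass per color channel
    gy, gx = np.gradient(img[..., c])  # per-channel gradient response
    features.append(gx.reshape(-1))    # one column per channel

# pixels x channels matrix, ready to concatenate into the dataframe.
feature_matrix = np.stack(features, axis=1)
```

This answers the 3x32 question in spirit: each filter yields one column per channel, so a 32-kernel bank on a 3-channel image gives 96 feature columns unless you deliberately average the channels.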
@@DigitalSreeni Thank you very much for your clear explanation. By the way, I followed your instructions (referred to in this video), but the accuracy is only 85% (different image). Please kindly help with how to improve the accuracy.
@@DigitalSreeni What is the best approach to classification with 3 channels? Do you now have 3x32=96 features to train the classifier, or do you merge the channels (sum the values and average), or something else?
@@DigitalSreeni You are a very kind and honorable human sir. Live long ....
Your replies are great, so helpful. Thanks a lot sir.
Hello sir, Any idea about LandSat image feature extraction?
Please stay tuned, I will record some videos on satellite image segmentation.
@@DigitalSreeni Have you uploaded it already, Sir? By the way, I just subscribed...
What software should I install to run this code? Please guide me.
I covered it in my first few videos. Please watch them in my playlist.
I tried manually highlighting my data minimally: equivalent to highlighting only the white spots as yellow on the photos, leaving the background blue, highlighting everything else as green, and then running the program. Let's say I just needed to be able to forecast where the white spots were. I got 40% accuracy and the segmented pictures in video 66 or 67 were pathetic. Just in case anyone wants to try the approach.
My data photos are also not nearly as crisp as these photos are.
How do I save an image for every Gabor filter?
Just add an imsave command inside the loop. Remember to add a string reflecting the filter number or something else each time it loops through. Otherwise, you will overwrite the image each time.
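A sketch of that pattern, with np.save standing in for plt.imsave or cv2.imwrite so it stays dependency-free; the filenames and loop variables are hypothetical.

```python
import os
import tempfile
import numpy as np

# Stand-in filter bank; in the video this would be the Gabor kernels.
kernels = [np.ones((3, 3)) * k for k in range(4)]

out_dir = tempfile.mkdtemp()

for num, kernel in enumerate(kernels, start=1):
    fimg = kernel  # pretend this is the filtered image for this kernel
    # Embed the filter number in the filename so each iteration writes
    # a NEW file instead of overwriting the previous one.
    path = os.path.join(out_dir, f"gabor_{num}.npy")
    np.save(path, fimg)

saved = sorted(os.listdir(out_dir))
```

Swap np.save for plt.imsave(path, fimg, cmap='gray') (or cv2.imwrite with a .png path) to get viewable images; the unique-name-per-iteration idea is the part that matters.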
Can you please share the image, I didn't find it on your github?
It is indeed on my github, just verified. Please have a look inside images/Train_images folder.
@@DigitalSreeni Thank you very much!!
Sir, first of all you're absolutely great. As I mentioned earlier on some of your videos, I'm working on brain tumor segmentation, and I've done manual segmentation through six different processes: first I applied to-zero thresholding to the image, then Otsu thresholding, then segmentation through the histogram, and then morphological operations like opening, closing and erosion. Sir, my question is: can I add these processes or features to this ML code, so that the whole process can be done through a machine-learned algorithm rather than by manually running the different processes? In the end, thank you again sir for providing such great informative videos.
Machine learning is ideal for tumor image segmentation. Please try this method, which uses the traditional approach. Also try a CNN or Unet. You won't be an expert overnight, but you may find code that is half ready to use, so you can customize it further for your needs.
@@DigitalSreeni thank you sir. Much appreciated.
@@waqarmirza390 can u help me in brain tumor segmentation?
@@amnasulman8267 Yeah I can. Sure
fimg = cv2.filter2D(img2, cv2.CV_8UC3, kernel): I don't understand why img2 is used here and not img.
I think img was a 2D matrix; if you run the code and look at the variables view, you'll see he then reshapes it to one column (reshape(-1)) as img2 to use in filter2D.
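A tiny demo of what reshape(-1) does, using toy values rather than the video's actual image:

```python
import numpy as np

# 2D image, shape (2, 3), like img in the video.
img = np.array([[1, 2, 3],
                [4, 5, 6]])

# reshape(-1) flattens the same pixels into a 1D array, like img2.
# This is what lets the values fit into a single dataframe column,
# but it also throws away the row/column neighbourhood a 2D filter needs.
img2 = img.reshape(-1)           # shape (6,)

# Flattening is reversible, which is why filtering in 2D and
# reshaping afterwards loses nothing.
back = img2.reshape(img.shape)
```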
First of all, thank you so much for the amazing course. I am following your channel and have watched every second of every tutorial. Dear Teacher, I am facing a problem converting .TIF images to grayscale using OpenCV; can you help me with this please?
Please watch my videos on reading images. Also, look into using tifffile library to read and write tiff files.
@@DigitalSreeni I get this error: save_handler = SAVE[format.upper()], KeyError: 'TIF'
thank you very much
Thank you
You're welcome
Thx
cool