This channel is a masterpiece. I'm doing research using image processing and analysis, and I can't describe how useful these videos are for my work, or how many skills I've learned in a short time. Thank you very much Sreeni, you are doing great work.
Wow, thank you! I am glad you are finding value in these videos.
Me too! I feel the same, and I'm thankful.
Not just one of the best, but the best explanation on machine learning, image classification, and segmentation. I can see that very soon this channel is going to be popular. I'm happy and proud to be a part of the first ten thousand subscribers!!!
Thanks, Sreeni. Your channel is helping me a lot with my master's-level research on deep learning.
Appreciate your hard work.
Huge Respect.
Hi Sreenivas,
Thanks a lot for this channel which is really helpful.
There is one thing I don't get with this video (and the previous one) about the feature extractor.
How are the weights set in the CNN model? If I understood correctly, you do not trigger any backpropagation here, so how do we know whether randomly set weights make a good feature extractor?
Thanks
There is no learning happening during feature extraction, so the filters are all based on their initial random weights. If you already know which features are appropriate, you do not follow this method; you just do your feature engineering, as I showed in videos 67/67b. But if you do not know which filters/features work, you have to start with random filters. After Random Forest training you can look at the feature ranking to find out which ones actually work for your images. You can also take filters from the first conv layer, then add filters from the 2nd and 3rd layers, to create a really broad set of filters. In my experience, about 64 random initial filters work for most problems.
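The workflow described above can be sketched as follows: the CNN is never trained, so the Conv2D kernels keep their random initial values, and the Random Forest's feature ranking then scores each filter. Image sizes, filter count, and the data here are placeholders:

```python
import numpy as np
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Conv2D
from sklearn.ensemble import RandomForestClassifier

# Untrained CNN: since fit() is never called, the 64 kernels stay random.
inp = Input(shape=(64, 64, 1))
out = Conv2D(64, (3, 3), activation='relu', padding='same')(inp)
feature_extractor = Model(inp, out)

# Placeholder images and per-pixel labels; substitute your real data.
images = np.random.rand(2, 64, 64, 1).astype('float32')
labels = np.random.randint(0, 2, size=2 * 64 * 64)

# Each pixel becomes one sample with 64 random-filter responses as features.
features = feature_extractor.predict(images)      # shape (2, 64, 64, 64)
X = features.reshape(-1, features.shape[-1])      # shape (8192, 64)

rf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, labels)
ranking = rf.feature_importances_                 # one score per filter
```

`ranking` then tells you which of the 64 random filters actually contributed, which is the feature ranking mentioned above.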
By the way, if you want to get features from a trained network, you need to train it first and then chop off the layers you do not want. You can even import weights from pretrained networks such as VGG16.
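A minimal sketch of that chopping step with VGG16. Passing `weights='imagenet'` would import the pretrained filters (downloaded on first use); `weights=None` is used here only to keep the sketch light:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model

# Load VGG16 without its dense classification head; use weights='imagenet'
# to import the pretrained filters instead of random ones.
base = VGG16(weights=None, include_top=False, input_shape=(128, 128, 3))

# Chop off everything after the second conv block, keeping the early
# layers that respond to low-level structure (edges, textures).
feature_extractor = Model(inputs=base.input,
                          outputs=base.get_layer('block2_conv2').output)
print(feature_extractor.output_shape)   # (None, 64, 64, 128)
```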
@@DigitalSreeni Thank you for your answers; it is interesting to know that a CNN can be used as a feature extractor even without training. I don't necessarily want to do transfer learning, but it seems more reproducible to me to force the weight values by some means than to rely on random instantiation.
Using VGG seems a good idea; in your use case, maybe at least the first layers, to capture low-level features like texture or colors.
On the other hand, with the custom CNN, I was wondering if doing a first training could be another way to initialize the weights. This would make a poor classifier, but at least the first layers would start to discriminate some low-level features that could be useful when using the CNN as a feature extractor upstream of the random forest.
Congratulations on your work! These videos are especially helpful. Thanks for your great work. I work in the field of remote sensing and geoprocessing. One year ago I had no coding knowledge; then I started working with R. Now I am trying to learn Python, and these videos are especially useful.
If you would allow me a suggestion, it would be great to find a video where you work with georeferenced raster data. I know these videos are dedicated to microscopists, but this could help a lot of people too.
Thank you very much and greetings.
I am not familiar with remote sensing datasets, hence I stay away from giving any advice. I guess most of my videos can be applied to many image formats once the problem is defined as an image analysis problem.
I think the SOTA framework for satellite image segmentation (for buildings and footprints) is TernausNet, a variant of U-Net. Satellite data is huge and these images need high GPU power, so SpaceNet has hosted their baseline model in an AWS Elastic Compute Cloud (EC2) instance. You can check the link below. medium.com/gsi-technology/a-beginners-guide-to-segmentation-in-satellite-images-9c00d2028d52
@@soorajs2474 Thank you for sharing this tool. In the next few months I will take a course on deep learning using Python, where I will work with U-Net architectures. I will try to use this tool. Thank you!
It might take some more months, or potentially years, but I would love to see traditional machine learning methods integrated into frameworks such as Keras. Just imagine how powerful this approach could be in an end-to-end training fashion.
Traditional machine learning doesn't get the love it deserves. Everyone is obsessed with deep learning. Whatever happened to Occam's Razor!!!
Great channel. It helps a lot with research work. Sir, could you please make a video on extracting features from pre-trained models and then using that feature extractor as input to traditional machine learning algorithms? And please explain how to calculate the confusion matrix, precision, accuracy, and classification report for multi-class image classification problems.
Please watch my latest video, 158b. It describes the process of using a pretrained model as a feature extractor and then Random Forest for image classification. It also goes through the process of printing the confusion matrix.
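For the metrics part of the question, scikit-learn covers all of it in a few calls. The labels here are made up for a 3-class problem:

```python
from sklearn.metrics import (confusion_matrix, classification_report,
                             accuracy_score, precision_score)

# Made-up true and predicted class labels for a 3-class problem.
y_true = [0, 0, 1, 1, 2, 2, 2, 1]
y_pred = [0, 1, 1, 1, 2, 2, 0, 1]

cm = confusion_matrix(y_true, y_pred)      # 3x3 matrix, rows = true class
acc = accuracy_score(y_true, y_pred)       # 6 of 8 correct -> 0.75
prec = precision_score(y_true, y_pred, average='macro')
print(cm)
print(classification_report(y_true, y_pred))   # per-class precision/recall/F1
```

`classification_report` prints precision, recall, and F1 per class in one go, which is usually what you want for multi-class problems.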
@@DigitalSreeni Thanks for your reply
This is a great channel Sreeni
Thank you :)
No doubt about the learning quality of the video, it's awesome. I'm just a little confused about the dataset: the dataset you provided in the link and the dataset you used in the video are not the same, or maybe they are the same but I cannot tell. Would you please help? Thanks a lot 🧘♂
Huge thanks to all the content on this channel, really helpful! Love it.
Glad you enjoy it!
Hi Sreeni, great videos! Does Apeer work with geotiff image labeling as well?
I am not sure but I would think it works as geotiff images are just tiffs with some metadata. Please try it out and see if it works.
@@DigitalSreeni thank you Sreeni! I will try it out and update you :)
This tutorial is amazing and very helpful! I am wondering if there's a video showing us how to verify the classification (e.g., accuracy)?
Thank you very much for your helpful videos. I had a question: can we use a random forest to label some images and then use those labeled images to train a CNN? I have access to thousands of images from different rocks, and I plan to train a CNN for each rock to be used by other researchers in the future. But manually labeling all the images would be impossible.
Thank you for another excellent tutorial. Does the performance improve if you use data augmentation?
Yes it does, but you cannot fully rely on augmentation. You need enough labels to begin with.
Thank you for the video. It was really helpful. I have one question though: what if we want to train at the time of feature extraction (i.e., in the convolution layers) before going into the random forest, without any transfer learning?
How do we cut off the dense layers of the CNN in that case? (I assume we would then have to train the entire CNN and cut off the dense layers, but how?)
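One way to do what the question asks, sketched with a toy model and random data: train the full CNN (conv layers + dense head) normally, then build a second `Model` that ends at the last conv layer. In Keras the new model shares the trained weights with the original; nothing is copied:

```python
import numpy as np
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Conv2D, Flatten, Dense

# Full CNN with a dense head, trained end to end on toy data.
inp = Input(shape=(16, 16, 1))
x = Conv2D(8, (3, 3), activation='relu', padding='same', name='conv1')(inp)
x = Conv2D(16, (3, 3), activation='relu', padding='same', name='conv2')(x)
out = Dense(2, activation='softmax')(Flatten()(x))
full_model = Model(inp, out)
full_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

X = np.random.rand(8, 16, 16, 1).astype('float32')
y = np.random.randint(0, 2, 8)
full_model.fit(X, y, epochs=1, verbose=0)

# "Cut off" the dense layers: a new Model ending at the last conv layer.
# It reuses (shares) the trained conv weights from full_model.
feature_extractor = Model(inputs=full_model.input,
                          outputs=full_model.get_layer('conv2').output)
features = feature_extractor.predict(X, verbose=0)   # (8, 16, 16, 16)
```

These conv features would then go into the random forest as before.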
Thank you for this amazing tutorial. It was really helpful.
Glad it was helpful!
This channel is really great. Can I find the mask corresponding to a predicted image?
Thanks for your great work. I watched parts 1 to 5, especially for ML; such nice work. I need some clarification: I used retinal data with Google Colab, but it is not enough compared to the Spyder IDE. Please suggest a system configuration to run the STARE dataset, or another large dataset, using traditional machine learning.
If Colab is not enough, then I don't know what to suggest other than paying to upgrade your Colab. Otherwise, try transfer learning, where you may not need as much data for training. Watch my video 158b.
@@DigitalSreeni Thank you for your kind information. Very kind of you.
Incredibly useful series of videos, thank you! I have one question regarding these few videos on convolutional filters + random forest and some previous videos on your channel (62-67), which seem similar to me. In the earlier videos you use Gabor filters for feature extraction, with slightly more code, but here you are using Keras Conv2D. Are these two methods essentially the same, or am I missing some significant differences? Should one be preferred over the other? Thanks again for creating this content.
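For what it's worth, both approaches boil down to 2-D convolution; the difference is only where the kernel values come from: hand-designed (Gabor) versus randomly initialized (untrained Conv2D). A simplified, isotropic Gabor kernel next to a random one, with arbitrary parameters:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lam=4.0):
    """Simplified (isotropic) Gabor: a cosine wave under a Gaussian window."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate by theta
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

img = np.random.rand(32, 32)                         # placeholder image
gabor_feat = convolve2d(img, gabor_kernel(), mode='same')       # videos 62-67
random_feat = convolve2d(img, np.random.randn(3, 3), mode='same')  # this video
```

Either feature map could be flattened and handed to the random forest; the Gabor route encodes prior knowledge about orientation/frequency, the random route does not.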
❣️❣️
I saved the machine learning model using the pickle library. However, when I tried to load the model again using the same library, it did not work as expected. I am unsure what the issue is. Can you help me troubleshoot the problem?
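A minimal round-trip sketch that usually isolates this kind of problem. The most common causes of a failed load are opening the file in text mode instead of binary, or loading with a different scikit-learn/Python version than the one that saved the model (the file name and data here are placeholders):

```python
import pickle
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder model and data.
X = np.random.rand(20, 5)
y = np.random.randint(0, 2, 20)
model = RandomForestClassifier(n_estimators=5, random_state=0).fit(X, y)

# Binary mode ('wb'/'rb') is required; text mode corrupts the pickle.
with open('model.pkl', 'wb') as f:
    pickle.dump(model, f)
with open('model.pkl', 'rb') as f:
    loaded = pickle.load(f)

same = (loaded.predict(X) == model.predict(X)).all()
```

Note that Keras models do not pickle reliably; for those, use `model.save()` / `load_model()` instead.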
Great video. Is it possible for you to make a video on a 3D CNN model for classification on a volumetric dataset? Thanks.
3D CNN is a bit challenging to explain. I will have to take time to write it in a way that is explainable. I will try.
Hi Sreeni, thank you for sharing. Can you please let me know how to extract the Region of Interest (ROI) using convolutional filters for classification? Looking forward to your reply. Thank you!
Not sure what you exactly mean. Do you mean identifying objects in images?
@@DigitalSreeni I want to select an ROI to detect objects using convolutional filters, and then classify them based on the radius of the object, in the case of RBC and WBC cell images.
@DigitalSreeni looking forward to your reply
Thanks for the great videos and code; I am actually using these for astronomical data processing :D I would like to ask you a question: can I use your *How to predict covid 19 using AI* code in my project? Of course I will credit you, your YouTube channel, and your GitHub.
Feel free to use my code for your project. I am trying to educate people on coding and I am sharing my code with no conditions. If you leave reference to this channel then others can also benefit.
Python for Microscopists by Sreeni Thanks a lot!
i want to say love you
Then say it :)
very helpful
Glad to hear that
Do we have to first annotate the image samples you shared in APEER? Because if I use the same image samples you shared, the unique labels of the mask come out as random numbers.
Sir, can you do a tutorial with satellite image data?
Hi Sreeni, is Histogram matching necessary for having consistent results with Random Forest?
Yes, if you only use similar looking images for training. If you have used diverse looking images during training then the model should be generalized enough to be able to segment images with varying contrasts. I do recommend deep learning if you are looking to train a model that is generalized.
Thanks Sreeni! The dataset is quite large, and I am trying to keep it within the Earth Engine framework. With DL such as U-Net, one of the issues I have is training the model within the Earth Engine framework: in order to use U-Net, I will have to download the imagery to run it with a PyTorch U-Net. Also, the pricing of the Earth Engine cloud AI makes me uncertain, since I am not clear on how many resources I might need, or what is feasible for a continent-wide geospatial extent.
Did you consider an image generator to directly load small batches from disk and still stay within the limits of your resources? With large training data you should be looking at the full deep learning approach.
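A sketch of that generator approach. The tiny on-disk dataset built here is only to make the snippet self-contained; the `demo_data/train/<class>/` layout is what `flow_from_directory` expects:

```python
import os
import numpy as np
from PIL import Image
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Throwaway two-class dataset on disk, only to make the sketch runnable.
for cls in ('class_a', 'class_b'):
    os.makedirs(f'demo_data/train/{cls}', exist_ok=True)
    for i in range(4):
        arr = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
        Image.fromarray(arr).save(f'demo_data/train/{cls}/{i}.png')

# The generator reads small batches from disk on demand, so the whole
# dataset never has to fit in memory.
datagen = ImageDataGenerator(rescale=1. / 255)
train_gen = datagen.flow_from_directory('demo_data/train/',
                                        target_size=(64, 64),
                                        batch_size=2,
                                        class_mode='categorical')
batch_x, batch_y = next(train_gen)   # images (2, 64, 64, 3), one-hot labels (2, 2)
```

`model.fit(train_gen, ...)` then trains straight from disk, batch by batch.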
Hi Sreeni, yes I did, but then the next part of the issue is the prediction results being generated. Since Earth Engine image objects cannot work directly with external models, I will have to first download the imagery, which also has a limit on image size (~18K x 18K). That results in roughly 600 images, followed by breaking the images down to the U-Net size of 512x512. This will result in millions of images, and the prediction results will be images as well. Is there a way to keep everything within the Earth Engine framework without the need to download imagery? Cost and time make me wonder whether it will take a long time to get the model set up and making predictions within Earth Engine, and whether Random Forest is the more efficient approach in that respect.
I am not familiar with Earth Engine; it sounds like some sort of image pyramid. With these large images, you have to extract smaller patches and make predictions on them.
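A plain-NumPy sketch of that patch extraction; 512 is the U-Net tile size mentioned above, and the 2048-pixel image is a stand-in for a real scene:

```python
import numpy as np

def extract_patches(img, patch=512):
    """Split an image into non-overlapping patch x patch tiles (row-major)."""
    h, w = img.shape[:2]
    tiles = [img[y:y + patch, x:x + patch]
             for y in range(0, h - patch + 1, patch)
             for x in range(0, w - patch + 1, patch)]
    return np.stack(tiles)

big = np.zeros((2048, 2048), dtype=np.uint8)   # stand-in for a huge scene
patches = extract_patches(big)                 # (16, 512, 512)
```

Per-tile predictions can then be re-assembled into the full image in the same row-major order.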
Hi Sreeni, I've labelled a number of images on Apeer as in the video, but when I export it my mask images are always completely black. I've checked the correct labels, but it doesn't seem to be working for me. Any ideas how I could fix this? Thank you in advance :)
What program did you use to open your labeled images? When you label your images, the pixel values are going to be 1, 2, 3, etc. Most programs open images and display them at full range (0-255), which means the image will look all black unless you adjust the range. Try opening the image in the APEER web viewer and adjusting the range, or load the image into Python and plot it with pyplot.
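A quick way to confirm the mask exported correctly, along the lines suggested above (the mask here is random and the file name is arbitrary):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')            # no display needed
import matplotlib.pyplot as plt

# A label mask with values 1, 2, 3 looks pure black in viewers that
# display the full 0-255 range; pyplot rescales to the data's min/max.
mask = np.random.randint(1, 4, (64, 64)).astype(np.uint8)

plt.imshow(mask)                 # classes now show as distinct colors
plt.colorbar()
plt.savefig('mask_check.png')
lo, hi = int(mask.min()), int(mask.max())   # 1 and 3, not 0 and 255
```

If `lo`/`hi` come back as small integers like this, the export worked and only the display range was the issue.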
@@DigitalSreeni Hi Sreeni, you're exactly right; I wasn't aware the pixel value was literally 1. When I thresholded it last night in ImageJ, I could see that it did in fact export properly! Thank you :)
Apeer is not working for me. How did you manage to use it?
It is possible to upload a GeoTiff and create the labels using a vector file such as an ESRI Shapefile, GeoJSON, or whatever...
Maybe this helps? rasterio.readthedocs.io/en/latest/index.html
Does this work with image classification?
Yes. Check the previous video. 158 - Convolutional filters + Random Forest for image classification.
th-cam.com/video/9GzfUzJeyi0/w-d-xo.html
@@DigitalSreeni
Okay, thanks, I will check it out.
I already have a model, but with only 54% accuracy, so I've been searching the internet for some way of building synthetic features to improve a model that doesn't involve using a different model.
It has been quite difficult to find someone who talks about convolution without using neural networks or TensorFlow or Keras, so I'm glad I found this and hope it works.
I've even tried, with no success, building my own synthetic features.
I knew they weren't going to work, but I was really desperate.
So if your video can help me out, then I have truly found a gem.
I’m glad you find these videos helpful.
The class option is not found in APEER.
Not sure what you mean, can you please explain? For semantic segmentation you define the classes manually; you provide them via masks. So you need to label your masks to reflect all the classes in your images.
@@DigitalSreeni When I upload my images, I have no pencil option. What I did instead: I found an Eye option, selected it, and from the left pane selected the annotation option. Am I right, sir?
Sir, please provide the dataset for sandstone images.
Sorry, I do not own the dataset so I cannot share. I only have permission to use them.
@@DigitalSreeni Ok, no problem. I wanted to know how to extract the circularity feature using a diameter, for feature extraction in a CNN.
A TIFF image is displayed as black in ImageJ. Can anyone help?
Adjust the range (Brightness/Contrast).