I am constantly learning by selecting the parts I need.
Always say thank you.
Good video. In the part where you crop the input image to be suited to a certain size I would also recommend trying padding. This is because the user normally does not expect any data loss in the output, if that makes sense. Thank you again for your work, Sreeni.
Can you please show that padding concept in code?
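A minimal sketch of the padding idea, assuming numpy and a 256-pixel patch size (`pad_to_multiple` is a hypothetical helper name, not from the video's code): pad height and width up to the next multiple of the patch size with reflected pixels so nothing is cropped away, then crop the prediction back afterwards.

```python
import numpy as np

def pad_to_multiple(image, patch_size):
    """Pad H and W up to the next multiple of patch_size with reflected
    pixels, so no data is cropped away. Returns the padded image and the
    original (h, w) so the prediction can be cropped back afterwards."""
    h, w = image.shape[:2]
    pad_h = (-h) % patch_size
    pad_w = (-w) % patch_size
    pad_spec = [(0, pad_h), (0, pad_w)] + [(0, 0)] * (image.ndim - 2)
    padded = np.pad(image, pad_spec, mode="reflect")
    return padded, (h, w)

# Example: a 1000x700 RGB image padded for 256x256 patches
img = np.zeros((1000, 700, 3), dtype=np.uint8)
padded, orig_shape = pad_to_multiple(img, 256)
print(padded.shape)  # (1024, 768, 3)

# After predicting on the padded image, crop back to the original size:
# prediction = prediction[:orig_shape[0], :orig_shape[1]]
```

Reflection padding is usually preferred over zero padding here, because the model then sees plausible image content at the borders instead of a black band.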
Thanks a lot. We really appreciate your efforts. Keep going!
Thank you, I will
Thanks a lot! This just saved my project and has taken me only 10 minutes to implement.
Great to hear!
Sorry, but where can I find the file "satellite_standard_unet_100epochs.hdf5" in the models folder? I can't find it in your GitHub link.
Thanks a lot Sreeni! Great video :)
My pleasure 😊
Superb Sreeni :)
Is it possible to convert the numpy array with the prediction result to a raster or vector format that can be read into a GIS?
Thank you for that video. I've watched many of your videos now and they've helped me a lot so far.
I'm wondering here how this is different from using patchify with a larger overlap (smaller step size)? So, divide the large image using patchify with a small step size (and so a large overlap), predict for each patch, and rebuild the large image with the combined information of the overlaps.
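For reference, the plain overlap-and-average approach described here can be sketched in a few lines of numpy (`predict_with_overlap` and `model_fn` are hypothetical names, not from the video's code). The smooth-blending library additionally weights each patch with a spline window before averaging, so straight averaging like this is the unweighted special case:

```python
import numpy as np

def predict_with_overlap(image, model_fn, patch=64, step=32):
    """Slide a window with overlap, accumulate predictions and a count
    map, then divide: every pixel gets the average of all patches that
    covered it. model_fn is any callable mapping a patch to a same-size
    prediction."""
    h, w = image.shape
    acc = np.zeros((h, w), dtype=np.float64)
    cnt = np.zeros((h, w), dtype=np.float64)
    for y in range(0, h - patch + 1, step):
        for x in range(0, w - patch + 1, step):
            acc[y:y+patch, x:x+patch] += model_fn(image[y:y+patch, x:x+patch])
            cnt[y:y+patch, x:x+patch] += 1
    return acc / np.maximum(cnt, 1)

# Identity "model": the averaged reconstruction equals the input
img = np.random.rand(128, 128)
out = predict_with_overlap(img, lambda p: p)
print(np.allclose(out, img))  # True
```

The difference from the smooth blending shown in the video is only the weighting: here a patch's edge pixels count as much as its centre pixels, which is why seams can still show with plain averaging.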
I got this error: TypeError: Invalid shape (256, 256, 5) for image data. Can you help me?
I got the same error with shape (256, 256, 18). Did you find out what was wrong?
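A likely cause, assuming the error comes from matplotlib's `imshow`: it only accepts (M, N), (M, N, 3), or (M, N, 4) arrays, so a (256, 256, 5) per-class prediction needs an argmax over the channel axis before plotting:

```python
import numpy as np

# Hypothetical model output: per-pixel scores for 5 classes
pred = np.random.rand(256, 256, 5)

# Collapse the class axis to a single label map that imshow can draw
label_map = np.argmax(pred, axis=-1)
print(label_map.shape)  # (256, 256)

# plt.imshow(label_map)   # now a valid (M, N) image
```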
Dear Dr. Sreenivas, your videos have always been helpful. I wonder: if I am trying to blend my coordinated rasters that only contain probabilities from 0 to 1 instead of RGB, is it possible to use rasterio instead of OpenCV? Thank you.
Does this work for 3D patches of irregular shape? I am trying to feed images of 256x256x128 in patches of 256x256x32 and I am getting errors. What would the window size be in this case?
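If no overlap is needed, a 256x256x128 volume splits evenly into four 256x256x32 chunks along the last axis; a minimal numpy sketch of that is below. Note the smooth-blending code shown in the video is written for square 2D windows, so it would likely need adapting before it can blend overlapping 3D patches like these:

```python
import numpy as np

volume = np.zeros((256, 256, 128), dtype=np.float32)

# Non-overlapping 256x256x32 chunks along the last axis: 128 / 32 = 4
chunks = np.split(volume, 128 // 32, axis=2)
print(len(chunks), chunks[0].shape)  # 4 (256, 256, 32)

# Reassemble after predicting on each chunk
rebuilt = np.concatenate(chunks, axis=2)
print(rebuilt.shape == volume.shape)  # True
```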
Hi, thanks for this very useful repo.
Two things I don't understand:
- Can we use this repo with a PyTorch model that predicts a mask rather than one-hot output?
- What is the input dimension of pred_function?
Thank you so much. I have learned a lot from your videos.
Hi Sreeni, great effort. I had some issues with predicting on a large image. I got an error like: "im = np.array(im)[:, ::-1]
MemoryError: Unable to allocate 10.5 GiB for an array with shape (22564, 20810, 3) and data type float64". How can I solve this issue? Please help.
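For context, the 10.5 GiB comes from the image being promoted to float64 (8 bytes per value); keeping it as uint8 is eight times smaller, and only each small patch needs to be cast to float. A quick check of the arithmetic:

```python
import numpy as np

shape = (22564, 20810, 3)
n_values = shape[0] * shape[1] * shape[2]

# float64 needs 8 bytes per value, uint8 only 1
gib_float64 = n_values * 8 / 1024**3
gib_uint8 = n_values * 1 / 1024**3
print(f"float64: {gib_float64:.1f} GiB, uint8: {gib_uint8:.1f} GiB")
# float64: 10.5 GiB, uint8: 1.3 GiB

# So: keep the full image as uint8 and only cast each small patch to
# float just before feeding it to the model, e.g.
#   patch = big_image_uint8[y:y+256, x:x+256].astype(np.float32) / 255.0
```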
Thank you, keep producing valuable content!
Thanks, will do!
Hi, I'm using patchify in Colab to cut a UAV image into patches to make labels for a training dataset, but when I open a patch in QGIS it doesn't have georeferenced coordinates. Which step am I doing wrong?
You can use GDAL to break the tile into smaller parts in order to prevent the loss of metadata.
I am trying to dehaze an image using DL, and I am also using patches, but when I combine the patches I can visually see boundaries around them.
How can I use this method? Since I am using PyTorch, everything is a tensor, and the available code is for numpy arrays.
Can you help me solve this problem?
Thanks for a very informative video. Could you tell me how to measure a large image where a few objects span into another patch? In that case, object metrics are not accurate. I am using a Mask R-CNN model.
I got better results without smooth blending for 2 classes. What do you think could be the reason for this?
Hello @DigitalSreeni: what about imagery with different spatial resolutions? How do you deal with these issues? I assume pooling would be a solution, but I am unsure. Thanks.
Thanks that is going to be hugely helpful
Hope so!
Can you please explain how the blending of two neighboring patches is done? Are you averaging the values or taking the maximum?
Please look at the documentation for the library; they provide a very good explanation and references.
@DigitalSreeni thank you!
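For reference, the combining in the smooth-blending approach is a weighted average, not a maximum: each patch prediction is multiplied by a window that is high in the centre and falls to zero at the edges before accumulation. A minimal numpy sketch of the idea, using a Hann window as a stand-in for the library's squared spline (`blend` and `model_fn` are hypothetical names):

```python
import numpy as np

def make_window(patch):
    """2D taper: near 1 in the centre, near 0 at the edges. A Hann
    window gives the same qualitative shape as the library's spline."""
    w1d = np.hanning(patch)
    return np.outer(w1d, w1d)

def blend(image, model_fn, patch=64, step=32):
    """Weighted average of overlapping patch predictions: each patch is
    multiplied by the window before accumulation, and the sum of windows
    is divided out at the end. Edge pixels of a patch therefore
    contribute little, which removes visible seams."""
    h, w = image.shape
    win = make_window(patch)
    acc = np.zeros((h, w))
    wsum = np.zeros((h, w))
    for y in range(0, h - patch + 1, step):
        for x in range(0, w - patch + 1, step):
            acc[y:y+patch, x:x+patch] += win * model_fn(image[y:y+patch, x:x+patch])
            wsum[y:y+patch, x:x+patch] += win
    return acc / np.maximum(wsum, 1e-8)
```

Note the actual implementation also pads the image by reflection so border pixels are fully covered; in this bare sketch the outermost row and column of pixels receive near-zero total weight.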
I get right to the part where I need to unpatchify; however, it keeps saying "The patches dimension is not equal to the original image size". Somehow you are able to unpatchify without the 3 from the RGB channel. That is the only thing I am missing that is preventing my unpatchify. Am I missing something? Is it a different version of patchify or something?
You are a really good teacher! I want to study a masters degree in DL and specialize in computer vision. Do you have any suggestions?
Sorry, I do not have any insights into master's degrees in deep learning, as it depends on many factors, primarily your location. I am based in the San Francisco Bay Area and I can definitely tell you that Stanford, UC Berkeley, and UC Davis are all good universities for deep learning. In general, doing a master's in this field is a good idea, as more and more jobs are opening up in it.
Thank you very much for the effort. I wonder about the following: you put quite some effort in illustrating the advantage of smooth blending. And to be clear, it shows. However, why don't you calculate metrics such as IoU, Dice or overall accuracy on both the non-smoothly blended and the smoothly blended result? These should show the advantage *quantitatively* rather than qualitatively, right?
Yes, you need to calculate IoU metrics to make sure you understand the accuracy of your final result. In this case I omitted that from my video as I try not to jam too many things in every video. Good point though... In general, you need to check all metrics when you are putting together a solution to an image analysis challenge.
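A minimal sketch of such a quantitative comparison, assuming integer label maps for the ground truth and for each blended result (`iou_and_dice` is a hypothetical helper name):

```python
import numpy as np

def iou_and_dice(y_true, y_pred, n_classes):
    """Per-class IoU and Dice from integer label maps."""
    ious, dices = [], []
    for c in range(n_classes):
        t = (y_true == c)
        p = (y_pred == c)
        inter = np.logical_and(t, p).sum()
        union = np.logical_or(t, p).sum()
        denom = t.sum() + p.sum()
        ious.append(float(inter / union) if union else float("nan"))
        dices.append(float(2 * inter / denom) if denom else float("nan"))
    return ious, dices

# Toy check: a perfect prediction gives IoU = Dice = 1 for every class
gt = np.array([[0, 1], [1, 0]])
ious, dices = iou_and_dice(gt, gt, 2)
print(ious, dices)  # [1.0, 1.0] [1.0, 1.0]
```

Running it once on the non-smooth and once on the smooth prediction against the same ground truth gives exactly the quantitative comparison suggested above.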
Very good point.
Could you provide the trained model file?
I have learnt a lot from your videos; thank you for teaching on this platform. Can you make videos on self-supervised denoising of image datasets?
Please look at Noise2Void approach, you may find it useful.
Thanks a lot. You are the best!
THANK YOU! Thank you.
I can't find the file simple_multi_unet_model.py, but the explanation is really good. Thank you.
It was covered in the previous lecture: github.com/bnsreenu/python_for_microscopists/tree/master/228_semantic_segmentation_of_aerial_imagery_using_unet
Thank you!!!
You're welcome!