These from scratch videos & paper implementations take a lot of time for me to do, if you want to see me make more of these types of videos: please crush that *like* button and *subscribe* and I'll do it :)
Support the channel ❤️:
th-cam.com/channels/kzW5JSFwvKRjXABI-UTAkQ.htmljoin
Original paper: arxiv.org/abs/1505.04597
Paper review: th-cam.com/video/oLvmLJkmXuc/w-d-xo.html
⌚️ Timestamps:
0:00 - Introduction
1:03 - Model from scratch
22:20 - Dataset from scratch
29:50 - Training from scratch
39:48 - Utils (almost) from scratch
50:10 - Evaluation and Ending
Sure! I will click every like, subscribe and pinned comment thumbs up button! 👍
How can we download this dataset at the low resolution you use in the video, so we can learn from it and train the network?
Please do more of these.
Thanks for this Aladdin. I was able to train using my own data. Do you have an idea how I can deploy U-net model to my web app? Can't seem to find any resource on it. cheers
I am training on a satellite image dataset. My dice score is 0.0 and the predicted mask is empty. Am I doing something wrong here?
You are amazing! I have been struggling with this for 2 weeks and your video is so helpful. I can only imagine the amount of work you put into this. Thank you so much.
I'm writing this comment, because I want more of these types of videos.
I reply to this comment for the same reason 😊
I reply for the same reason
Thanks! Great work. Useful practical information
You are the only one who does from scratch this good. Please keep up the good work man!
Hey bro, I know this video is from a long time ago. But thank you for teaching me and, most importantly, being an inspiration. I have now learned how to do the dataset, training loop, and Unet model, all from scratch in my head, just like you. I have also written a thesis on the subject as part of my bachelor's project at my university. Again, thank you, and I hope to learn more from you in the future.
I literally read it as "UNIX from scratch" and I was like, oh boy, who is this legend 🤣🤣
Thanks for the video idea, maybe next video 😉
I was listening and following along like a Bob Ross show. Admittedly, I've already implemented a UNet, but the implementation here was much cleaner and nicer. Thanks for making this.
@2K19/EP/050 MANU GAUR
To answer that it can help to explain _why_ we split into training, test, and validation sets.
Think of taking a test in school. You have a workbook with a bunch of problems and a test coming up. Your workbook has the answers in the back. Making a validation set is like taking a bunch of the problems in your workbook and putting them aside for a practice exam. You study all the problems in the workbook except the ones in your practice exam. If you fail the practice exam, maybe you aren't learning the right things from the book. The test is, well, the test.
In the case of this dataset, you could use the test as the validation. That would be fine. You won't know how well you did after all of your work, but if you intend to put it in production that's okay.
In more ML terms: the validation set lets us know if we are overfitting or underfitting on our data before the final test run.
Many thanks for writing this from scratch specifically in PyTorch. I love your from-scratch videos, you are awesome.
Hi, the dataset he used has a much smaller size and lower image dimensions than the original dataset. Do you know where I can download the one he used in the video?
@@mohammadasadpour9339 I don't know why this happens; I've written to you here and posted the link several times, but YouTube keeps deleting it. What is going on!!!!!! So weird 😕
I am new to machine learning and would like to ask:
1) How can I train the model with a COCO-format dataset?
2) How can I train the model with more than one label class?
3) How do I apply the trained model?
Thank you for the in-depth explanation of how to implement UNET. I would love to see you update GitHub to save the model and a separate display.py showing how to load the model and display the image segmentation predictions.
Great topic! Can't wait to watch it in my spare time.
The Carvana Kaggle dataset does not seem to have val_images and val_mask folders.
Man that was amazing! It was pure quality content. Keep it up!
Hi there, this content is gold. I am a huge supporter of writing things from scratch, so many thanks for doing it. I do have one suggestion though. Would you consider also implementing the loss function they used in the UNET paper?
They use cross-entropy modified with a weight map, so the network is forced to segment the very thin borders between cells. I think this would also be very useful.
I think this is application-oriented; they use this trick to solve the touching-border issue between cells, e.g. when two cells overlap.
Thanks a ton!!!!! Learnt a hell lot of new things from this video other than image segmentation.
Your lectures are pure gem!!!!
Thank you for the nice video! I think this will help a lot of people that are trying to learn how to develop models and also people like me that have experience but need to expand their knowledge in PyTorch.
Big data, please remember I like this video.
not a single confusion in this video, thanks
Thank you for these detailed tutorials, they are very informative
Keep them coming!
48:00 man you killed it , wow
Gold, bro; keep up the good work. A deep love from India.
Hi! Great video, congratulations. I have a question:
when the U-Net needs to do multi-class classification and the loss function changes from BCEWithLogitsLoss to CrossEntropyLoss, do I need to change the sigmoid on the final conv of the model too?
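For what it's worth, a minimal sketch of the usual answer (my own example, not the video's code): with CrossEntropyLoss you drop the sigmoid entirely, since the loss applies log-softmax to the raw logits internally; the final conv just outputs one channel per class. Shapes below are hypothetical stand-ins for the model output:

import torch
import torch.nn as nn

num_classes = 5                                        # hypothetical
loss_fn = nn.CrossEntropyLoss()                        # applies log-softmax internally

logits = torch.randn(2, num_classes, 160, 240)         # stand-in for model(x): raw (N, C, H, W)
target = torch.randint(0, num_classes, (2, 160, 240))  # per-pixel class indices, (N, H, W)

loss = loss_fn(logits, target)   # no sigmoid/softmax on the final conv output
preds = logits.argmax(dim=1)     # predicted class map, (N, H, W)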
I feel like I want to say I love you for this tutorial
learnt soo much from this thank you! love the proper structure instead of line by line commands in colab or sth
Awesome video, stayed all day to make this work because I changed some stuff myself :D
I'm very thankful for the video and great implementation too but I wish you could go into details of why you do certain things and perhaps explain stuff a bit more.
Would be super helpful !
I followed your tutorial step by step and used the same dataset, and it did an amazing job. The first dataset (Carvana) I used worked fine, but once I changed it, the results went downhill. I tried it on CASIAv2, but my dice score is always 0.0 and my predicted masks are just black... I don't know how to fix this; if anyone has any ideas, I beg you, do let me know!
I had the same issues
Facing the same issue
Thank you for this video! After watching a handful of times, I've managed to get it predicting on my own custom dataset, thanks entirely to your instruction.
Curious though - any advice on where to start with getting a successful model to make a prediction on a single image, calling it from a script?
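One way to do that, sketched under assumptions (a checkpoint saved the way the tutorial's save_checkpoint does, with a "state_dict" key, and the UNET class importable from the tutorial's model.py; "some_car.jpg" is a hypothetical file):

import numpy as np
import torch
import torchvision
from PIL import Image
from model import UNET  # assumption: the tutorial's model.py

device = "cuda" if torch.cuda.is_available() else "cpu"
model = UNET(in_channels=3, out_channels=1).to(device)
checkpoint = torch.load("my_checkpoint.pth.tar", map_location=device)
model.load_state_dict(checkpoint["state_dict"])
model.eval()

# load and scale the image; optionally resize to the training resolution first
img = np.array(Image.open("some_car.jpg").convert("RGB"), dtype=np.float32) / 255.0
x = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).to(device)  # (1, 3, H, W)

with torch.no_grad():
    mask = (torch.sigmoid(model(x)) > 0.5).float()

torchvision.utils.save_image(mask, "pred_mask.png")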
Very nice video. I'm trying to figure out how to change this for instance segmentation; there are many tutorials for TensorFlow but not so many for PyTorch.
At 46:28, what is the code behind his face? Please someone help me!
+1! were you able to solve it?
20:46 I don't understand why you chose to resize x instead of the skip_connection, which would be more similar to the structure the UNET paper provides. Can you explain it? Thanks.
Thanks for creating this education video. Every concept is very clearly explained.
Thank you a million, I've been waiting for this. Yaaay
Bug report: due to an update to PyTorch, the latest version removes support for inputs other than PIL Images in the resize function. So you can use the resize function from torch.nn.functional; line 62 could be x = torch.nn.functional.interpolate(x, size=skip_connection.shape[2:])
Oh, the latest version of PyTorch does not change the resize function. It's my mistake; my version is the old one. But it is still an alternative solution XD
Thank you so much for your video! BUT I've got a question: in the network structure shown in the picture (e.g. 3:09), after each double convolution the image size is reduced by 2 (e.g. 572x572 -> 570x570 -> 568x568 for the first DoubleConv), therefore it is not a 'same' convolution as you say at 4:00. Please correct me if I'm wrong. Thank you in advance.
That's right; I think the padding should be kept at zero to match the paper.
Very nice and complete tutorial on U-Nets. I have a question: can we (and how do we) use the same code for multiclass segmentation? For example, if there is more than one mask in the output images, rather than only "Salt" and "Not Salt".
Could you please make another video on how to apply the trained model to a test dataset?
Hi there! I have a question; what is the last line at 46:24?
Thank you so much my guy. I hope one day I can also do this with my own knowledge and understanding
thanks for making this video. It really helped me get started with segmentation tasks
Thanks for this lovely video
Could you please make a video on 3D U-Net for medical image (MRI) segmentation?
Did you just crop your tensors from the upConv? I thought the paper crops the skip connection tensor... Or am I a Dumb Dumb?
Hi Aladdin,
Thanks for the UNET tutorial; I have learned a lot from this video. I am using this model on a dataset of pavement cracks for binary segmentation. However, during training the dice score decreases and eventually becomes 0.0 after a few epochs. May I know what possible problem causes this to happen?
I also have the same problem. Did you find the solution for this?
Hi, I have the same problem( dice score becomes zero). Have you figured out what was the problem? if yes, could you please write it? I would appreciate your reply
Let me also join; I had the same problem, so I came to the comment section in hopes of finding a solution.
Yep, I got the dice score as zero; the loss = nan is the problem.
Hey @Aladdin Persson, here for binary classification you applied sigmoid to the outputs of the model and then just separated them into two classes with a threshold of 0.5. Can you suggest anything similar for multiclass classification? Can softmax be used there? If yes, how can I separate the classes further?
yes you can use the softmax
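A minimal sketch of what that looks like (hypothetical shapes, my own example): softmax over the channel dimension plus argmax replaces the sigmoid + 0.5 threshold, and the resulting class map can then be split into one boolean mask per class:

import torch

logits = torch.randn(1, 4, 6, 6)      # hypothetical model output, (N, C, H, W)
probs = torch.softmax(logits, dim=1)  # per-pixel class probabilities
pred = probs.argmax(dim=1)            # (N, H, W) class indices
class_masks = [(pred == c) for c in range(logits.shape[1])]  # one boolean mask per class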
Hello! Thank you for your video. Can you do a tutorial for multi-class semantic segmentation if you have the time?
Very good explanation using PyTorch and U-Net. I was able to use it on 1024x1024 images, but with 416x416 your DICE formula always shows 0.0, even though I have 99% accuracy. I don't know why... please, one suggestion, thanks.
I'm having the same issue; did you happen to find a solution?
@@almag4810 I was able to fix it by modifying the data preprocessing. When we read the images and convert them to arrays, we need a single consistent baseline: either the labels converted to grayscale values between 0 and 255, or binary masks converted to 0s and 1s, with the sigmoid function applied to the predicted image (when you have only 1 class).
@@johnorozco4895 I didn't understand your solution; I beg you to explain it again. Thank you!!
Hello! Great tutorial! I was just wondering: at 34:39 you make use of torch.cuda.amp.autocast(). Would this work if you are using CPU processing only, given that you're calling a CUDA method, or is there a CPU-based alternative? I'm trying to experiment with this on my Mac. I'm relatively new to this, and this is one of the steps I don't fully understand yet. Any help would be appreciated :)
Hi! Did you figure out if this code would work on CPU?
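A hedged sketch of one way to make the loop device-agnostic (my own example): GradScaler can simply be disabled on CPU, since its calls become pass-throughs, and CPU autocast exists from roughly PyTorch 1.10 onward using bfloat16:

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# disabled scaler: scale()/step()/update() become pass-throughs on CPU
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

if device == "cuda":
    ctx = torch.cuda.amp.autocast()
else:
    ctx = torch.autocast(device_type="cpu", dtype=torch.bfloat16)  # PyTorch >= 1.10

with ctx:
    conv = torch.nn.Conv2d(3, 8, 3, padding=1).to(device)
    out = conv(torch.randn(2, 3, 32, 32, device=device))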
Bro, this slaps fr. Thanks!
Thank you so much for this informative and detailed tutorial.
Thanks for the video. Why did you use the scaler for backward? I didn't totally understand that.
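Roughly the reason, with the standard pattern sketched (a toy example, not the video's exact loop): in float16, small gradient values can underflow to zero, so GradScaler multiplies the loss by a large factor before backward, then unscales the gradients before the optimizer step:

import torch

model = torch.nn.Conv2d(3, 1, 3, padding=1).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.BCEWithLogitsLoss()
scaler = torch.cuda.amp.GradScaler()

x = torch.randn(2, 3, 64, 64, device="cuda")
y = torch.randint(0, 2, (2, 1, 64, 64), device="cuda").float()

optimizer.zero_grad()
with torch.cuda.amp.autocast():
    loss = loss_fn(model(x), y)
scaler.scale(loss).backward()  # backward on the scaled loss so float16 grads don't underflow
scaler.step(optimizer)         # unscales grads first; skips the step if they contain inf/nan
scaler.update()                # adapts the scale factor for the next iteration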
I'm in love with this because, for some reason, although I am not adept yet with deep learning...it answers the crucial part of seeing the architecture being engineered. The only thing I can't get past is how do we create the training datasets? I'm interested in satellite image classification but do you have any idea how to create these training datasets? I've seen people suggesting LabelMe and all but since this is pixel-based classification, what's the anatomy of the input into U-Net?
This is a very well done tutorial
First off:
Aladdin thank you so much for your contributions. I hope your channel continues to grow and grow. You deserve it!
Lastly:
Which version of PyTorch are you using? When I run the test function with the randn tensor of shape 161, 161, it raises a TypeError saying the object has to be a PIL Image.
This happens at lines 61-62: if .shape != .shape: TF.resize()
I appreciate the kind words! I am using PyTorch nightly version (1.8.0.dev) in the video. Are you using 1.7 and it's not working? Have you tried the code on Github too?
Great work Aladdin,
Thank you for these awesome tutorials
will there be a video about Panoptic segmentation ?
Hi, I enjoyed your video; even though I had already implemented UNet, your intuition is superb. I have one question about how to run inference after training with UNet. I don't know what I am doing wrong, but when I make a prediction it shows a black image with little dots, and I have tried to understand what I am doing wrong but have no clue yet.
Thank you for the video man.
Will you do something on U-Net++? Like just a paper walkthrough maybe. I'm trying to find out how many channels they used in their dense skip connection layers but I can't find more details on how exactly they structured them.
Hi, I would like to understand the reason for not applying transformations to the mask data.
Hey Aladdin! Thanks a ton for the video; it's very clear if you know the basics. However, I'd like to know how I would go about segmenting a new car image, one which is outside of my dataset.
please make more videos like this. thank you omg
@AladdinPersson
What kind of PyCharm theme do you use? Looks awesome!
Thank you so much man, keep up the good work
Simple and clear expression, thank you so much Aladdin Persson
Hello :) I just followed your code up to building the model,
but got an error saying
TypeError: img should be PIL Image. Got ... (on TF.resize).
Even when I copy your code from your GitHub, it causes the same error. Does anyone know how to solve this?
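That error usually means an older torchvision (< 0.8), where TF.resize only accepted PIL Images; the video was recorded on a nightly build where tensor input works. Upgrading torchvision fixes it, or, as the bug report above suggests, interpolate sidesteps it entirely. A minimal sketch with hypothetical shapes:

import torch
import torch.nn.functional as F

x = torch.randn(1, 512, 80, 80)     # upsampled feature map
skip = torch.randn(1, 512, 81, 81)  # skip connection, one pixel larger

if x.shape[2:] != skip.shape[2:]:
    x = F.interpolate(x, size=skip.shape[2:])  # accepts tensors on any version

out = torch.cat((skip, x), dim=1)   # (1, 1024, 81, 81)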
Hi, what would the check_accuracy function in utils look like if one wants multiclass segmentation? Many thanks!
Great video, man. You are working with RGB images (3 bands or channels). Do you think it is possible to use this architecture for images with more than 3 channels or bands? I'm thinking of hyperspectral cameras, for example.
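In principle yes: nothing in the architecture assumes 3 channels; only the first conv's in_channels changes. A two-line sketch, assuming the tutorial's UNET constructor from model.py:

from model import UNET  # assumption: the tutorial's model.py

model = UNET(in_channels=10, out_channels=1)  # e.g. a hypothetical 10-band hyperspectral input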
This was awesome! I was looking to implement some of this for my work, for some microscopy images I have taken, but I think I need to start a little simpler, e.g. I am not familiar with some of the classes and their variables - any ideas where to start?
Great video, man!
Yey! I am here first :) Excited to go through this
Thank you so much, the explanations made it very clear 🙌💯
Thank you for the video. I was wondering if anyone knows why I would be getting "can't find file" errors?
How did you do the masking in the dataset? How did you create the dataset, and where can I find a detailed explanation?
Hello, I am using your code to do image segmentation and I got a dice score of more than 1 (1.3). Do you know what the issue could be? Many thanks.
Could you make a beginner-friendly version? Nice vid btw!
Hi. Thank you for your video. It helped me a lot
Thank you bro so much!
Can you please make another video on how to do semantic segmentation by training a U-Net model from scratch?
In order to avoid the confusion of striding by 2 in the ModuleList, I would separate it into three different ModuleLists:
self.downs, self.ups, and self.deconvs.
What do you think?
I think I tried it but didn't end up as nice as I thought it would. Share code? Maybe I'm wrong
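For the record, a hedged sketch of what the three-list version might look like (not the video's code; DoubleConv simplified, the video's version also has BatchNorm2d):

import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(x)

features = [64, 128, 256, 512]
deconvs = nn.ModuleList(
    nn.ConvTranspose2d(f * 2, f, kernel_size=2, stride=2) for f in reversed(features)
)
ups = nn.ModuleList(DoubleConv(f * 2, f) for f in reversed(features))

# the decoder loop then pairs them one-to-one instead of striding by 2:
# for deconv, up, skip in zip(deconvs, ups, reversed(skips)):
#     x = deconv(x)
#     x = up(torch.cat((skip, x), dim=1))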
savior of the day
The mirroring part that you mentioned in the beginning, which we would have to do if we used valid convolutions... I can't understand that. I saw the paper walkthrough video too, but that part is still very unclear. Could you please help with that?
Thank you, it was great🥰
So, what changes do I need to make if I want to perform multi-class segmentation here? Can you help me?
Thank you so much, I learnt a lot from this video. You are awesome!!!
Very good video, good explanations
I have a problem. In the diagram of the UNET, it shows an UpSampling2D together with a Conv2D, but you create a ConvTranspose2d. Am I missing something??
what a great tutorial
Where should I make adjustments in the code to make the U-Net fit my PNG images?
Thank you for the video, great job!
I have a problem: import torchvision.transforms.functional gives a module error and says it is not a library.
Why did you pass 1 to unsqueeze on the targets?
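The usual reason, sketched (hedged, my own example): BCEWithLogitsLoss needs the target shape to match the model output (N, 1, H, W), while the dataloader yields masks of shape (N, H, W):

import torch

targets = torch.randint(0, 2, (8, 160, 240)).float()  # (N, H, W) masks from the loader
targets = targets.unsqueeze(1)                        # (N, 1, H, W), matching the model output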
Hi everybody, why is my accuracy 78.20...? And why are the saved_images/pred_x.png files all black pictures?
Dear professor,
I am very interested in your program, and I have two questions:
(1) How can I use the code to map between irregular images, complete training through the U-Net model, and then run testing?
(2) Is the mask used for preprocessing the data? Is there any special software available for preprocessing?
Hey, thanks for your video! Got one question: how can you use your trained model for single-image segmentation?
Can we only use this if we have the masks in the training dataset?
Great video, thanks for the guidance. But during training, as the number of epochs increases, my loss also grows more and more negative. I have tried changing the loss function to cross-entropy, but the issue won't get resolved. Would appreciate some help here. Thanks anyway ❤️
Hey, I have the same problem. Did you manage to solve it?
@@Tabea-ef8xe Yeah, I performed thresholding on my images so that the mask contains only 0 and 1 pixel intensities (1 == 255 here).
Also, use PNG format; JPG doesn't support what I just said.
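For anyone hitting this: BCEWithLogitsLoss expects targets in [0, 1], so masks stored as 0/255 grayscale need remapping when loaded. A minimal sketch along the lines described above ("some_mask.png" is a hypothetical path):

import numpy as np
from PIL import Image

mask = np.array(Image.open("some_mask.png").convert("L"), dtype=np.float32)
mask[mask == 255.0] = 1.0  # the video's Carvana dataset does this exact remap
# for masks with arbitrary nonzero values, an explicit threshold is safer:
# mask = (mask > 127).astype(np.float32)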
Thanks for the tutorial.
Hmm, that trick you added to avoid the requirement of having the input perfectly divisible by 16 might lead to big issues depending on the type of imagery being processed by the network. Imagine satellite imagery with a GSD (ground sampling distance) of 100m: a single pixel is literally 100x100m, and skipping one means skipping multiple houses. :D Just saying this in case people come across your tutorial and blindly copy-paste the code.
NOTE: Kaggle requires phone number for verifying your account. For those of you (like me), who do not want to hand out such private information, find another set. In the end U-Net is used in many fields with different types of images (e.g. medical ones) and the chances are you will not be doing segmentation on cars. :D
Which part are you talking about ?
Fantastic contribution. I just have one question: any reason why you didn't use MONAI's sliding_window_inference to evaluate the validation data?
Hi, I noticed that in the paper the sizes change from (572,572) to (570,570) and (568,568). Why do you still keep padding and stride equal to 1 (which means you keep the same H and W after each conv)?
I have the same question. Do you know the answer?
@@HazelNut-qn3gu @nhioanhoai6147
With "same" padding we actually pad with zeros, so the input and output image sizes remain the same.
In the original paper's implementation they use "valid" padding, which means no padding at all (padding = 0),
so that's why the size decreases after convolving.
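A quick check of the arithmetic with the standard output-size formula, out = (in + 2*padding - kernel) / stride + 1:

import torch
import torch.nn as nn

x = torch.randn(1, 1, 572, 572)
valid = nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=0)  # paper: (572 - 3) + 1 = 570
same = nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1)   # video: (572 + 2 - 3) + 1 = 572

print(valid(x).shape)  # torch.Size([1, 64, 570, 570])
print(same(x).shape)   # torch.Size([1, 64, 572, 572])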
Hi, I get the following error
def pad(img, min_height, min_width, border_mode=cv2.BORDER_REFLECT_101, value=None):
AttributeError: module 'cv2' has no attribute 'BORDER_REFLECT_101'
Any help would be appreciated.
Hi, may I ask you something:
do I have to change IMAGE_HEIGHT and IMAGE_WIDTH if I use my own dataset?