3D Image Segmentation (CT/MRI) with a 2D UNET - Part 1: Data preparation

  • Published on 7 Sep 2024
  • Video series on how to perform volumetric (3D) image segmentation using deep learning with the popular 2D UNET architecture and TensorFlow 2.
    In medical imaging, typical image volume types are MRI or CT images. In this video, I show how a simple 2D neural network can be trained to perform 3D image volume segmentation.
    This video is the first in a series of 3:
    1: Dataset preparation: • 3D Image Segmentation ...
    2: Configuring and training a deep neural network: • 3D Image Segmentation ...
    3: Using the trained model to perform image segmentation: • 3D Image Segmentation ...
    Unfortunately, it has not been possible to make the images used in these videos available online, but all code is available on my GitHub repository: github.com/mad...
    The UNET implementation is inspired by this CNN basics course: github.com/fmi...

Comments • 128

  • @youtubecommenter5122
    @youtubecommenter5122 4 years ago +10

    I love the style and chill this guy has when making this video. Exactly what you need to know, no mucking about, delivered well

  • @vikramrs4191
    @vikramrs4191 3 years ago

    Dear Dennis, thanks very much for this and for teaching DL to solve image segmentation problems. Amazing stuff, mate.

  • @dilaraisaogullar9993
    @dilaraisaogullar9993 2 years ago

    Hello, I need to understand your video for my article homework, but there is no automatic translation of the 1st and 3rd videos of this series. Could you please add it? I only know Turkish :(

  • @roblubenow7052
    @roblubenow7052 2 years ago +1

    Why do you use Hounsfield units to normalize? Couldn't you just normalize using np.min and np.max, like so:
    def normalizeImage(img):
        return (img - np.min(img)) / (np.max(img) - np.min(img))

    • @DennisMadsen
      @DennisMadsen  2 years ago

      Hi Rob. That is also a possible way to normalize. Especially with HU, I like to set the boundaries to a range that I'm interested in, i.e. the HU range of the teeth in this case. If you can already throw away information that you are not interested in by normalizing, that is a huge benefit. E.g. if the segment you are interested in is generally in the range of 100-500, then there is no need to set the upper limit to 10,000 (maybe caused by some artefacts); you can just as well manually set the upper limit to 1000 or so.
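      A minimal sketch of this clip-then-normalize approach, assuming numpy (the HU window values and the function name are illustrative, not taken from the repository):

      import numpy as np

      # Clip to the Hounsfield-unit window of interest, then scale to [0, 1].
      # Pick the window to bracket the structure you want to segment.
      HU_MIN, HU_MAX = -1000, 1000

      def normalizeHU(img, hu_min=HU_MIN, hu_max=HU_MAX):
          img = np.clip(img, hu_min, hu_max)
          return (img - hu_min) / (hu_max - hu_min)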

  • @hard2k2008
    @hard2k2008 3 years ago +1

    Amazing stuff Dennis!

  • @ahxmeds
    @ahxmeds 3 years ago +1

    Very useful implementation-based tutorial. Thanks for making this. Could you also make videos on detection (lesion/organ) in PET images using various object detection algorithms? That would be wonderful. Thanks again.

  • @muhammadzubairbaloch3224
    @muhammadzubairbaloch3224 3 years ago +1

    It is my humble request: please make more research-oriented tutorials related to medical imaging, like brain image analysis and segmentation.

    • @DennisMadsen
      @DennisMadsen  3 years ago

      Thanks a lot for your suggestion Muhammad. It is hereby put on my list of possible new videos :)

  • @moeinhasani8718
    @moeinhasani8718 3 years ago +1

    Hey, I think you only saved the x axis of the images; [i,:,:] was what was written in all three if statements.

    • @DennisMadsen
      @DennisMadsen  2 years ago

      Thanks for pointing this out :) It has already been fixed in the notebooks on GitHub.

  • @talha_anwar
    @talha_anwar 3 years ago +1

    It is not clear to me why you loop in the x, y, and z directions while saving images. As I understand it, if the volume size is 224,224,100, these are actually 100 images stacked together. So why isn't it enough to loop in the z direction only?

    • @DennisMadsen
      @DennisMadsen  3 years ago +2

      Hi Talha. Slicing in all directions can be seen as a tradeoff between performing 3D convolutions on the full image volume and pure 2D analysis in a single slicing direction. By additionally adding images from the x and y slicing directions, the network also learns something about the contextual information of neighboring pixels in those directions. Even better would be to use 3D convolutions, but depending on your volume size and the available hardware, this might not be feasible.
      Empirically, I found that it gave a better classification score, especially when only a few training examples are available.

    • @talha_anwar
      @talha_anwar 3 years ago

      @@DennisMadsen As you are slicing in the x and y directions (z is set to false), the number of images produced is (224+224) if the image size is (224,224,100).

    • @DennisMadsen
      @DennisMadsen  3 years ago

      @@talha_anwar For the tooth example shown here, I found that adding the z dimension didn't add value with the dataset I was using. That's why it defaults to false in the code. It should probably be adapted depending on the domain you are working in. For other structures such as the liver or kidneys, I often set all 3 dimensions to true.

    • @talha_anwar
      @talha_anwar 3 years ago

      @@DennisMadsen Thanks, Dennis. I will try.
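      To make the tradeoff discussed in this thread concrete, here is a minimal sketch of saving a volume as 2D slices along each axis, assuming imageio is available (the function and file names are illustrative, not the repository's code, and the volume is assumed to be pre-scaled to 0-255):

      import numpy as np
      import imageio.v2 as imageio

      def save_slices(vol, out_prefix, slice_x=True, slice_y=True, slice_z=False):
          # vol is a 3D numpy array; one PNG is written per slice of each selected axis.
          if slice_x:
              for i in range(vol.shape[0]):
                  imageio.imwrite(f'{out_prefix}_x{i:03d}.png', vol[i, :, :].astype(np.uint8))
          if slice_y:
              for j in range(vol.shape[1]):
                  imageio.imwrite(f'{out_prefix}_y{j:03d}.png', vol[:, j, :].astype(np.uint8))
          if slice_z:
              for k in range(vol.shape[2]):
                  imageio.imwrite(f'{out_prefix}_z{k:03d}.png', vol[:, :, k].astype(np.uint8))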

  • @jiang-bz2ut
    @jiang-bz2ut 2 months ago

    Can you make a tutorial about how to label 3D medical images?

  • @usmanalibaig6016
    @usmanalibaig6016 3 years ago

    If you create a playlist, I will spend 2 days sharing it on social media. It would be very helpful for beginners, just like this playlist on tooth segmentation.

  • @redina5257
    @redina5257 1 year ago +1

    Great video!
    I want to ask: why do we only slice the volume in the X and Y directions? What about the Z direction?

    • @monteirodelprete6627
      @monteirodelprete6627 1 year ago

      I thought the same thing. Consequently, it seems that the training is done only on x-slices and y-slices.

  • @pseudounknow5559
    @pseudounknow5559 3 years ago

    Wow, this is gold content.

  • @user-ql1im6ew7i
    @user-ql1im6ew7i 1 year ago

    We need more videos

  • @JohnSmith-zh2yn
    @JohnSmith-zh2yn 2 years ago

    Can anyone please recommend a video tutorial that trains a neural network on 3D images?

  • @rehabhashim8485
    @rehabhashim8485 1 year ago

    Thank you, it's a very good and understandable explanation.

  • @sitanizampatnam8441
    @sitanizampatnam8441 3 years ago +1

    Hi Dennis, you've done a great job; it's useful for beginners. My project is on COVID-19 CT image segmentation. How can this tutorial help me? Thanks... waiting for your reply.

  • @oscarramirez2562
    @oscarramirez2562 3 years ago +2

    Hi,
    thanks for sharing,
    could you make a demo using 3D UNET?

    • @DennisMadsen
      @DennisMadsen  3 years ago

      Hi Oscar. Will try to do this as soon as possible. As such, it is really just a matter of using 3D convolutions and poolings instead of 2D, so the u-net function itself is easily converted.
      The more difficult part is the dataloader, which would now have to support 3D instead of 2D images, with possible augmentation.

    • @017_itmohamedmufassalsulta8
      @017_itmohamedmufassalsulta8 3 years ago

      @@DennisMadsen Can you please show a demo of how to do it? It would be really helpful. And there are few resources when I search about it on Google; all I see is papers, not tutorials.
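      A minimal sketch of the 2D-to-3D swap Dennis describes above, assuming a TF2/Keras u-net (the filter count and kernel size are illustrative):

      from tensorflow.keras import layers

      # 2D building block, as used in the video's u-net:
      conv2d = layers.Conv2D(32, 3, padding='same', activation='relu')
      pool2d = layers.MaxPooling2D(2)

      # The 3D equivalent just swaps the layer types; the inputs then become
      # (depth, height, width, channels) volumes instead of 2D images.
      conv3d = layers.Conv3D(32, 3, padding='same', activation='relu')
      pool3d = layers.MaxPooling3D(2)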

  • @liittl3sk4t3rb0y
    @liittl3sk4t3rb0y 3 years ago +1

    Very, very good video series!
    As I am a beginner, I'm just curious how Keras knows which mask belongs to which image. Do they have to have the exact same name for that purpose?
    How would one handle MRI data with more than one modality? Still name them all the same?

    • @DennisMadsen
      @DennisMadsen  3 years ago +1

      Hi Florian. It just looks at the naming, yes. If you have images with more modalities, you can add all of those to your training set and have the naming hint at the modality.
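      A minimal sketch of pairing images and masks by filename, as described above (the folder layout is illustrative, not the repository's):

      import os

      img_dir, mask_dir = 'slices/img', 'slices/mask'
      # Pair each image with the identically named mask in a parallel folder.
      pairs = [(os.path.join(img_dir, f), os.path.join(mask_dir, f))
               for f in sorted(os.listdir(img_dir))
               if os.path.exists(os.path.join(mask_dir, f))]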

  • @Aishwarya_Varma21
    @Aishwarya_Varma21 2 years ago

    Can you help me resolve an issue immediately? I get "No such file or no access: '/content/drive/MyDrive/mask-1.png/lidc-idri'" when I try this:
    maskPath = os.path.join(maskPathInput, 'lidc-idri')
    mask = nib.load(maskPath).get_fdata()
    np.min(mask), np.max(mask), mask.shape, type(mask)

  • @shivammishra7306
    @shivammishra7306 3 years ago +1

    Hey, quick question: what is the use of normalizing the image when ultimately we are denormalizing it prior to saving the 2D slice?

    • @DennisMadsen
      @DennisMadsen  2 years ago

      Hi Shivam. We are not fully "denormalizing" the images. What is done in the tutorial here is to resize every volume to a standard image volume size. This resizing should also be done at inference time. In part 3 I then show how to "undo" this step to get back to the original image dimensions.
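      A minimal sketch of resizing a volume to a standard size, assuming scipy is available (the target shape is illustrative):

      from scipy import ndimage

      def resize_volume(vol, target_shape=(128, 128, 128)):
          # Per-axis zoom factors; order=1 is linear interpolation.
          # Use order=0 for masks so the labels stay binary.
          factors = [t / s for t, s in zip(target_shape, vol.shape)]
          return ndimage.zoom(vol, factors, order=1)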

  • @ramadhuvishana3092
    @ramadhuvishana3092 3 years ago +1

    Hey Dennis, I have a doubt: since we're converting the 3D to 2D, don't you think the properties of the 3D images will be lost, and are those losses negligible?

    • @DennisMadsen
      @DennisMadsen  3 years ago +1

      Hey Ramadhuvishan. The short answer is: it depends. For the specific application of teeth, I also found that using the full 3D volume and performing 3D convolutions increases performance. For other domains such as cells, it might not have that big of an influence. Sometimes it is also not possible to use full volumes due to memory, so either you have to downsample your volumes, do patch-based analysis like in the 3D u-net paper, or do analysis on 2D slices like in this video series.
      I find that a 2D u-net is very fast to train and test compared to a 3D u-net - and works well as a baseline setup.
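      A minimal sketch of the patch-based alternative mentioned above: sliding a 3D window over the volume (patch and stride sizes are illustrative):

      import numpy as np

      def extract_patches(vol, patch=(64, 64, 64), stride=(32, 32, 32)):
          # Collect overlapping 3D patches; memory use grows with the overlap.
          patches = []
          for x in range(0, vol.shape[0] - patch[0] + 1, stride[0]):
              for y in range(0, vol.shape[1] - patch[1] + 1, stride[1]):
                  for z in range(0, vol.shape[2] - patch[2] + 1, stride[2]):
                      patches.append(vol[x:x + patch[0], y:y + patch[1], z:z + patch[2]])
          return np.stack(patches)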

  • @thodoristziolas
    @thodoristziolas 3 years ago +1

    Hello Dennis, I'm working on a project for my degree in 3D. I'm new to this topic and stuck on converting point clouds to a depth map. I haven't found anything on YouTube. If you know about this topic, why don't you make something similar? Any advice would be helpful. Thanks.

    • @DennisMadsen
      @DennisMadsen  3 years ago +1

      Hi Theo, thanks a lot for the suggestion. I am indeed working with point clouds myself, so this would be a good idea :) Haven't done anything with depth maps so far, so it's a good opportunity to explore that a bit.

  • @marcusbranch2100
    @marcusbranch2100 3 years ago +1

    Hi Dennis, great video, congratulations!! When you create some PNG files from the training dataset to use as a test dataset, but don't remove these files from the training dataset, doesn't a data leak occur? That directly influences the accuracy values, making them very high, as the network already knew this data. Forgive me if I got it wrong; I would appreciate an answer from you.

    • @DennisMadsen
      @DennisMadsen  3 years ago

      Hi Marcus. Thanks! Yes, mixing validation and training data is indeed something that should be avoided. In the video I just dump all my volumes in subfolders and then do a manual separation afterward, such that there are 9 volumes for training and 1 for validation. In the "step 2" video I briefly show the file structure of how the files are stored. None of the validation data was used for training - one should always keep some data out of the training to see how well the network generalises to unseen data.

  • @tchiyasadeghikurdistani4776
    @tchiyasadeghikurdistani4776 3 years ago +2

    Hello Dennis, thank you so much for your tutorial. I would like to implement your code line by line, but I have a problem: I cannot find the tooth data (tooth1.nii) which you used in this tutorial. Could you please tell me what I should do and where I can find this dataset?
    Best,
    Tchiya

    • @DennisMadsen
      @DennisMadsen  3 years ago

      Hi Tchiya, unfortunately I haven't been able to publish the data. So you'll have to find another volume dataset on the internet for the code to work. Sorry if this was not clear in the video.

  • @niloufarrahimizadeh1480
    @niloufarrahimizadeh1480 2 years ago

    Thank you for sharing this tutorial. How can I find your dataset?

  • @akshyareghunath6998
    @akshyareghunath6998 3 years ago +1

    Is it possible to train on DICOM CT images using the 2D UNET model as discussed in this video, without converting them to NIfTI format? If yes, could you please tell me how?

    • @DennisMadsen
      @DennisMadsen  3 years ago

      Hi Akshya, I think the easiest is still to have it in NIfTI format. You can just have a script that automatically converts all your DICOM folders into NIfTI files, e.g. with the dicom2nifti library in Python.
      Alternatively, the pydicom package might be able to help you out github.com/pydicom/pydicom , but I haven't worked with this library myself yet.
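      A minimal sketch of such a conversion script with the dicom2nifti library (the paths are placeholders):

      import dicom2nifti

      # Convert every DICOM series found under the input folder into
      # NIfTI files in the output folder (one file per series).
      dicom2nifti.convert_directory('path/to/dicom_root', 'path/to/nifti_out')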

  • @abhijeet6989
    @abhijeet6989 3 years ago +1

    Dear Dennis,
    Greetings!!
    Thank you very much for your tutorial. I want to perform image segmentation for Frank's sign on MRI images. How can I do it using the above method?

    • @DennisMadsen
      @DennisMadsen  3 years ago

      I am not familiar with Frank's sign and how it can be seen in MRI images. But the main requirement, as with any segmentation method, is that you need some ground-truth segmentations to train the deep-learning network with.

  • @sakshamkumarsharma2309
    @sakshamkumarsharma2309 2 years ago

    Hey Dennis, just one question: why have you used just one volume instead of all the volumes you have?

  • @kenanmorani9204
    @kenanmorani9204 3 years ago +1

    Very helpful videos; I hope you will be able to make more like them. I have a request, if possible: would you recommend a website where I can find .nii images to experiment with your code? Manually labeled 3D images, if possible. Thank you!

    • @DennisMadsen
      @DennisMadsen  3 years ago +1

      Kaggle has some segmentation challenges: www.kaggle.com/data?search=segmentation , or see grand-challenge.org/challenges/ . Besides that, look for datasets from different segmentation challenges, e.g. at the medical imaging conference MICCAI, such as BRATS.

  • @aineljaras6891
    @aineljaras6891 3 years ago +1

    Hello, Dennis! Thank you for such an amazing tutorial! Could you please explain why you applied windowing (min = -1000, max = 2000) without converting pixel brightness to Hounsfield units? Or is it because the NIfTI format is already in HU? Thank you!

    • @DennisMadsen
      @DennisMadsen  3 years ago

      Hi Ainel. The NIfTI format is already in HU. HU values as such have no upper and lower boundary, so by applying the clipping, we fix the scale of the possible HU intensities. This has to be done specifically for the domain you are working in.
      Then I do a normalisation by bringing all the pixels into the 0-1 range. This makes it easier for the network to learn the weights. If we keep the large HU values, then the weights we learn will also eventually become very large, which takes a long time with small gradient steps.
      Instead of the fixed-range normalisation, you can also normalise all the pixels to a standard normal distribution (subtract the mean and divide by the standard deviation) - this could also easily be done without clipping if you would like to avoid setting the min/max range.

    • @aineljaras6891
      @aineljaras6891 3 years ago

      @@DennisMadsen Thank you very much!!! It helped me a lot =)
      Dennis, could you please advise me whether I should resize the voxel volumes? In my training dataset I have a big range of slice counts per CT scan (75-960 slices in one scan). Or does it not affect the training at all? Thank you in advance!
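      A minimal sketch of the standard-normal alternative Dennis mentions above, assuming numpy (the epsilon guard is an addition for numerical safety):

      import numpy as np

      def normalize_zscore(img):
          # Subtract the mean and divide by the standard deviation;
          # the small epsilon avoids division by zero on constant inputs.
          return (img - np.mean(img)) / (np.std(img) + 1e-8)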

  • @Ahmetkumas
    @Ahmetkumas 3 years ago +1

    Hey Dennis, thanks for the video. I was wondering how you deal with false-positive examples with U-Net? How can we create a probability value for the predicted mask?

    • @DennisMadsen
      @DennisMadsen  3 years ago

      Hi Ahmet. What do you mean by a probability mask? U-Net itself assigns a probability value from 0-1 to each pixel.

    • @Ahmetkumas
      @Ahmetkumas 3 years ago +1

      @@DennisMadsen Hello again Dennis. Yes, you are right: I have perfect TP results for detecting the class I look for, but at the same time, the FP rate is very high. I have enough data for the class and I can find it very well in the image, but I don't know what to put as background; other things are detected as my class. What should be added as background to reduce the FP rate? Thanks for your suggestions.

    • @DennisMadsen
      @DennisMadsen  3 years ago

      Is it possible that your dataset consists of very small structures to segment? E.g. tumors or similar, where there is a large class imbalance in each image. For such a case, the weighted cross-entropy loss could maybe help: www.tensorflow.org/api_docs/python/tf/nn/weighted_cross_entropy_with_logits .
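      A minimal sketch of that weighted loss in TF2 (the pos_weight value is illustrative and should reflect your class imbalance; the model output must be raw logits, i.e. no final sigmoid):

      import tensorflow as tf

      # pos_weight > 1 penalises missed foreground pixels more strongly, which
      # helps when the structure covers only a small fraction of the image.
      def weighted_bce(y_true, y_pred_logits, pos_weight=10.0):
          return tf.reduce_mean(tf.nn.weighted_cross_entropy_with_logits(
              labels=y_true, logits=y_pred_logits, pos_weight=pos_weight))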

  • @usmanalibaig6016
    @usmanalibaig6016 3 years ago

    Dear Dennis,
    Could you please create a YouTube playlist on 3D brain tumor segmentation? It does not yet exist on YouTube.

  • @rashidabbasi6035
    @rashidabbasi6035 3 years ago +1

    Hello Dennis, can you share the complete source code of this lecture?

    • @DennisMadsen
      @DennisMadsen  3 years ago

      Hi Rashid. Link in the description :) github.com/madsendennis/notebooks

  • @socialgenerationmarketing2070
    @socialgenerationmarketing2070 4 years ago +1

    Fantastic video Dennis! How do you recommend making the ground truth mask before using it in this way? What program do you use for the manual creation of ground truth?

    • @DennisMadsen
      @DennisMadsen  4 years ago +1

      Hi, good point. I'll see if I can get a labelling video recorded later in the week. I use the free software 3D Slicer, which has a segmentation toolbox.

    • @socialgenerationmarketing2070
      @socialgenerationmarketing2070 4 years ago

      @@DennisMadsen Hi Dennis, I managed to follow your great tutorial and I appreciate the open-source code library immensely. I've tried to process some masks of a 512x512x123 (X,Y,Z) DICOM; however, the X & Y masks are 123 wide and 512 high when converted to images. Shouldn't these be 512 wide and 123 high? Where am I going wrong? :/

    • @DennisMadsen
      @DennisMadsen  4 years ago +2

      Hi, @@socialgenerationmarketing2070 . I think you just need to define the correct slicing direction and set that you only want to slice in the z direction, in case you only want the 512x512 images.
      So:
      SLICE_X = False
      SLICE_Y = False
      SLICE_Z = True

    • @reenergisedigitalmarketing8034
      @reenergisedigitalmarketing8034 4 years ago

      @@DennisMadsen Thanks for the quick reply. It wasn't the SLICE_X etc.; I actually had to rotate the image by 90 degrees in the SaveSlice function, which then applied it to all the saved images. Great tutorials and support :D

    • @DennisMadsen
      @DennisMadsen  4 years ago

      A bit delayed compared to my original schedule, but the video on volume image segmentation in Slicer 3D is now online: th-cam.com/video/A5inlUEq_Uw/w-d-xo.html

  • @remibadel803
    @remibadel803 3 years ago

    Dear Dennis, congrats on sharing your experience! I would like to try the RIBFrac Challenge. Do you think I can "follow" your advice before using UNET?
    Thanks

    • @DennisMadsen
      @DennisMadsen  3 years ago

      Maybe part of the code can be used for the detection part of the challenge. It could potentially serve as the baseline that you have to beat; I don't think it'll produce great results as-is. You will also need some extension to allow for the classification.

  • @MrMGA06
    @MrMGA06 3 years ago

    Nicely organized video! I watched a few other videos, but this gives a more in-depth explanation. Thank you very much! I have a question. I work with CT scan data. There are images for which the corresponding ground truth is black (quite a lot of images). For example, if I go in the Z direction in the segmented data, only after some depth do I observe white pixels (the segmented region). That implies there is no desired object in those images. How should I deal with this situation? Any suggestions would be helpful.

    • @DennisMadsen
      @DennisMadsen  3 years ago +1

      Hi MrMGA, thanks a lot for your feedback. Your setting with some slices without content is the same as in the tooth example. The network learns the structures that you highlight in images - and for that, it is also fine not to highlight anything in some slices.
      If you have a large difference between the % of pixels that are background and the % belonging to the structure you are segmenting, you might however want to use a weighted loss function to account for this. But to start with, you should be good with the standard U-Net.

    • @MrMGA06
      @MrMGA06 3 years ago +1

      @@DennisMadsen Thank you for the suggestion. I will look into it.

  • @sohnnijor3366
    @sohnnijor3366 3 years ago +1

    Hey Dennis, do you have any suggestions for good software for annotating MRI slices before training the segmentation model?

    • @DennisMadsen
      @DennisMadsen  3 years ago

      Hi Sohn, I am only using Slicer 3D myself. I already did a small video on this, though not explicitly focused on MRI: th-cam.com/video/A5inlUEq_Uw/w-d-xo.html

  • @zhenjing94
    @zhenjing94 4 years ago +2

    @Dennis Madsen Hi Dennis! Thanks for the video, it's a life saver! I tried masking using Slicer 3D. I was able to segment the tooth in 3D but am not sure how to export it as a list of binary masks across all slices. Can you share some insight?

    • @DennisMadsen
      @DennisMadsen  4 years ago

      Hi Heng, I'm currently working on a video about image labelling with Slicer. It should hopefully be out tomorrow.

    • @zhenjing94
      @zhenjing94 4 years ago

      @@DennisMadsen Thanks Dennis, that would be really helpful! May I ask why in the video we are not slicing in the Z direction and saving those images?

    • @DennisMadsen
      @DennisMadsen  4 years ago

      @@zhenjing94 It was a bit of trial and error. I just found that with the teeth, the dimensions were so different that it would require a lot of resizing. E.g. with x,y,z = 20,20,80, slicing in the z dimension will give you images of size 20x20, but then you would want to resize them to the same dimensions as in the x and y directions (20x80). I found that the resizing here just introduced too much noise to add any performance gain.

    • @DennisMadsen
      @DennisMadsen  4 years ago

      Video on ground truth creation now online: th-cam.com/video/A5inlUEq_Uw/w-d-xo.html

    • @zhenjing94
      @zhenjing94 4 years ago

      Thanks for the reply and the video, very helpful!

  • @user-tf9hn6ln1i
    @user-tf9hn6ln1i 3 years ago

    Hello Dennis, thank you so much for your tutorial. Could you please tell me what properties a dataset must have for me to use it with this code?

    • @DennisMadsen
      @DennisMadsen  3 years ago

      Hi, what properties do you mean? I use image volumes in .nii format, and then you need a "ground-truth" segmentation as well for the training volumes. This can be created in Slicer 3D, as shown in one of my other videos.

  • @nomannosher8928
    @nomannosher8928 4 years ago +1

    Nice, Dennis. I have to make a project on brain MRI images to detect brain tumors. How can this series help me out?

    • @DennisMadsen
      @DennisMadsen  4 years ago

      Hi Noman. Hopefully, it can help you to get started with a very basic setup, which you can then extend. Especially if you want to use TF2, it could be useful to see how data can easily be handled, which is quite different from e.g. TF1. The first and third videos are more about how to pre-process data and how to use an already trained model to do a prediction on a new image.
      I usually use this simple setup as a baseline - to get a feeling of how much it helps to use e.g. 3D convolutions.

    • @nomannosher8928
      @nomannosher8928 4 years ago

      @@DennisMadsen Nice explanation. Can you help me out with my project beyond this series?

    • @DennisMadsen
      @DennisMadsen  4 years ago

      @@nomannosher8928 At the moment I do not have much time for additional projects. But you can always send me a message (YouTube, Twitter) if you have a specific question, e.g. about data handling or network configuration.

    • @nomannosher8928
      @nomannosher8928 4 years ago

      @@DennisMadsen ok

  • @ibrahelsheikh
    @ibrahelsheikh 11 months ago

    very clear

  • @ibrahelsheikh
    @ibrahelsheikh 11 months ago

    very useful

  • @aleenasuhail4309
    @aleenasuhail4309 2 years ago

    Is it the same as maximum intensity projection?

    • @DennisMadsen
      @DennisMadsen  2 years ago

      Hi Aleena. I'm not doing any projection, but instead slicing the volume along different axes.

  • @sevimcengiz9815
    @sevimcengiz9815 3 years ago

    Hi Dennis, I checked your GitHub repo for this tutorial; the results of the predicted mask aren't so good. What is your approach to making the segmentation better?

    • @DennisMadsen
      @DennisMadsen  3 years ago

      Hi Sevim, there are a lot of parameters to change. The most obvious one is to add more data. From experiments, I also found it better to use a 3D u-net that processes the whole volume, but this requires hardware with more memory. Then the loss function could be changed to weighted cross-entropy or dice loss - or, for the teeth, something that emphasises the boundary.
      A lot of stuff can also be tried with the u-net itself: using drop-out, batch normalization, more filters in each layer, or more layers (a deeper network).
      All of the above really depends on the specific domain you are working in.

    • @sevimcengiz9815
      @sevimcengiz9815 3 years ago

      @@DennisMadsen Starting with the loss function might be an easy and quick solution. Thank you so much for the reply.
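      A minimal sketch of the dice loss mentioned above, assuming TF2 and probability outputs in [0, 1] (the epsilon is an added smoothing term):

      import tensorflow as tf

      def dice_loss(y_true, y_pred, eps=1e-6):
          # Soft dice: 1 minus the overlap ratio between prediction and ground truth.
          inter = tf.reduce_sum(y_true * y_pred)
          union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
          return 1.0 - (2.0 * inter + eps) / (union + eps)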

  • @hellenabeje
    @hellenabeje 3 years ago +1

    Hello, does your linked code also include how to convert my DICOM images to NIfTI like yours?

    • @DennisMadsen
      @DennisMadsen  3 years ago

      Hi Hellen. This I have not included in my scripts. So either you can use a Python library like pydicom.github.io/ , or you can manually convert the DICOM images to NIfTI using e.g. Slicer (download.slicer.org/)

    • @hellenabeje
      @hellenabeje 3 years ago +1

      @@DennisMadsen thank you 🙏

  • @vikramrs4191
    @vikramrs4191 3 years ago

    Hi Dennis. The test slice images of the mask and image in PNG format were not saved to my output folder at all. Is there any issue?

    • @vikramrs4191
      @vikramrs4191 3 years ago +1

      Thanks. I had not created subdirectories under the output folder; it is working now. Also, one more piece of feedback: my image data is reversed with (z,y,x), and slice x does not hold any information while slice y has few images. Hence, in the constants I defined SLICE_Z as true and SLICE_Y and SLICE_X as false. Images were created as PNG in the output path, but they are vertically oriented instead of horizontal. I will be adding some augmentation algorithms to orient them horizontally and also to harmonize the contrast in the images.

    • @DennisMadsen
      @DennisMadsen  3 years ago

      Hi @@vikramrs4191 . It is difficult to know exactly what causes the image mirroring without having the images and code. Hope you already found a solution :)

  • @kevalsharma1865
    @kevalsharma1865 3 years ago +1

    Hi, can you help me with generating segmentation masks from an image? I have CT images of the brain in nii.gz format, and I need to generate the segmentation masks from those images.

    • @DennisMadsen
      @DennisMadsen  3 years ago

      Hi Keval. Have a look at my Slicer3D video on segmentation. Hopefully you’ll find some tips in there on how to generate the segmentation masks.

    • @DennisMadsen
      @DennisMadsen  3 years ago

      th-cam.com/video/A5inlUEq_Uw/w-d-xo.html

    • @kevalsharma1865
      @kevalsharma1865 3 years ago

      Actually, I want to generate the masks by means of an algorithm, not manually, and I don't know how to proceed with it; that's the problem.

    • @DennisMadsen
      @DennisMadsen  3 years ago

      At first you need some semi-manual method to obtain your ground-truth masks. Once you have your final UNET model to perform the segmentation, you can use STEP 3 of this small video series to create the masks for new images.
      I see that I forgot some code on how to save the output from UNET as an image volume:
      import nibabel as nib
      img = nib.load('your_cropped_img.nii')
      segmentedImage = UNET(img.get_fdata())  # segment the image data using unet - the output is the segmented volume
      segmentedVol = nib.Nifti1Image(segmentedImage, img.affine, img.header)  # reuse the affine and header from the original image
      nib.save(segmentedVol, 'your_segmented_img.nii')
      There might be some image scaling you'll have to do as well.

    • @kevalsharma1865
      @kevalsharma1865 3 years ago

      Isn't there any algorithm to do that?

  • @kevalsharma1865
    @kevalsharma1865 3 years ago

    Hi again. Can I use color images for training UNET? I have colored images and their segmentation masks.

    • @kevalsharma1865
      @kevalsharma1865 3 years ago

      If yes, do I need to make any changes to your code?

    • @DennisMadsen
      @DennisMadsen  3 years ago

      Hi Keval. You can also train UNET on colored images. The input channels just need to be changed to 3; this should be the main change for the model itself. But a few additional changes will probably be needed in the slicing script to save the images as RGB instead of gray-scale.
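      A minimal sketch of that input-channel change in TF2/Keras (the spatial size is illustrative):

      from tensorflow.keras import Input, layers

      # Grayscale input as in the video: shape (height, width, 1).
      inputs_gray = Input(shape=(256, 256, 1))
      # RGB input: change the channel dimension to 3.
      inputs_rgb = Input(shape=(256, 256, 3))
      # The first convolution adapts automatically to the channel count.
      x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs_rgb)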

  • @MrPopocatepetl
    @MrPopocatepetl 3 years ago +1

    Dennis, please up your video quality...
    You're coming through at like 12 FPS, like from a space shuttle.

    • @DennisMadsen
      @DennisMadsen  3 years ago +1

      Indeed an old webcam I was using for this video :)

    • @DennisMadsen
      @DennisMadsen  3 years ago

      Hopefully the quality in my most recent videos is more pleasant to watch :)

  • @martymcfly695
    @martymcfly695 3 years ago

    Can you add subtitles, please?

  • @user-ql1im6ew7i
    @user-ql1im6ew7i 1 year ago

    Why did you stop?