215 - 3D U-Net for semantic segmentation

  • Published 26 Oct 2024

Comments • 144

  • @DigitalSreeni
    @DigitalSreeni  3 years ago +5

    If you want to work with TensorFlow 2.x, you will get an error while loading the segmentation models library. Please follow these steps to fix the issue: th-cam.com/video/syJZxDtLujs/w-d-xo.html

    • @vassilistanislav
      @vassilistanislav 3 years ago

      Do you know of any other software, such as APEER, for wound imaging? Is APEER only useful for microscopic imaging, or can we also do labeling for chronic wounds? Kindly let me know.

    • @mahyasadatebrahimi3164
      @mahyasadatebrahimi3164 2 years ago

      @@vassilistanislav Try using ilastik for this purpose. It might help you.

  • @yjoliiyki706
    @yjoliiyki706 3 years ago +3

    Thank you for the video. I think that instead of semantic segmentation, using a 3D U-Net to generate an instance segmentation would be even more interesting!

  • @aishashahnawaz9898
    @aishashahnawaz9898 2 years ago +2

    Thank you for the great and explicit lessons. Your way of explaining is amazing and can be easily understood by beginners as well. Thank you very much for your efforts!

    • @DigitalSreeni
      @DigitalSreeni  2 years ago +2

      I was a beginner once and I know the pain, so my explanation comes out of my empathy towards beginners.

  • @caiyu538
    @caiyu538 2 years ago

    Great lectures. I have been following your series and have learned a lot. Excellent teacher. I am doing a 3D U-Net segmentation, and your tutorial is very helpful.

  • @kavithashagadevan7698
    @kavithashagadevan7698 3 years ago +2

    This is great. Thank you very much for your wonderful videos

  • @venkatesanr9455
    @venkatesanr9455 3 years ago +1

    Thanks for your efforts and sharing knowledge

  • @rajeevgupta4058
    @rajeevgupta4058 6 months ago

    Hi sir,
    First of all, thanks for posting these videos; they are really helpful. I have a pressing doubt, though. In a previous video (159b), you taught how to slice the blocks of the VGG16 model using SM, but that was all for 2D images. Now, VGG16 is made with 2D kernels, so that makes sense. However, here you are feeding a 3D shape to sm.Unet with the VGG16 backbone, and this works. I looked at SM's code, and there they have declared input_shape as (None, None, 3). Additionally, for the Unet function in the models folder, they have simply fed the args list of the Model factory from the Classification 3D library, which then directly picks the model from Keras.
    What I want to ask is how I can get sliced blocks for 3D conv layers with VGG16 weights, as we did for 2D in the 159b video? It should be possible, since they have a 3D U-Net built upon VGG16 without the hassle of slice-by-slice methods to create the 3D shape blocks (even though there are papers available on that technique).
    Lots of thanks.
    Hoping for a quick response from you. :)

  • @MariemMakni-jg6un
    @MariemMakni-jg6un 4 months ago

    Thank you so much this is really helpful!! Bless you ^^

  • @farhanshadiquearronno7453
    @farhanshadiquearronno7453 3 years ago +3

    Your tutorials and explanations are on point.

  • @caiyu538
    @caiyu538 2 years ago

    Thumbs up! Thank you, Dr. Sreeni, for your excellent tutorials.

  • @vassilistanislav
    @vassilistanislav 3 years ago +1

    Is there a way to create a tutorial for 3D reconstruction using multiple 2D images, if such a tutorial is possible?

  • @qaw54
    @qaw54 2 years ago +1

    Hi, could you suggest what tweak to make if the image cube dimensions are not equal, e.g. z is not equal to x and y? Thank you.

  • @kavithashagadevan7698
    @kavithashagadevan7698 3 years ago +1

    Thank you for this wonderful video. How would I be able to view my segmented multi-channel image in 3D in Apeer? I am unable to see any button to obtain the 3D view.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      If you do not see a button for the 3D view, that means it does not recognize your data as 3D. Please verify that the image indeed has 3 channels and that they are in the right order. You can open the image in ImageJ to see if it recognizes the z direction correctly.

    • @kavithashagadevan7698
      @kavithashagadevan7698 3 years ago

      @@DigitalSreeni Thank you for your advice

  • @nouhamejri1698
    @nouhamejri1698 3 years ago +2

    Good job! You can find the BraTS dataset on Kaggle.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      I cannot find the dataset on Kaggle; can you please provide the direct link? Everyone refers to going to www.smir.ch/ and making a request, which I tried, but I never heard back.

    • @nouhamejri1698
      @nouhamejri1698 3 years ago

      @@DigitalSreeni This is the link: www.kaggle.com/awsaf49/brats20-dataset-training-validation. Sorry for being late.

    • @Xiaoxiaoxiaomao
      @Xiaoxiaoxiaomao 3 years ago +1

      @@DigitalSreeni I have sent you the link to the BraTS dataset. Please have a look at your email. Thanks.

  • @hartree.y
    @hartree.y 2 years ago

    Marvellous work! Thank you.

  • @nohinlab
    @nohinlab 3 years ago

    Thank you for sharing your knowledge.
    I have XCT scan images of a cylindrical part manufactured using an additive manufacturing process. The parts have porosity defects that I want to segment.
    Can you please tell me if I should remove the background before labeling my images in APEER? (In your case the part is cubic, but mine are cylindrical.)

  • @neginpirannanekaran1236
    @neginpirannanekaran1236 1 year ago

    Thanks for the nice video. Just one question: you are using train_test_split, which randomly picks slices. This seems to defeat the whole purpose of using a 3D U-Net, which is meant to learn the geometry of the third dimension (image slices are not independent; they have spatial structure).

  • @ChristianRichardson-i5f
    @ChristianRichardson-i5f 1 year ago

    I'm a bit confused about why we use patchify. Don't we need our images to have certain dimensions to break them into the specified patch sizes? This requires me to resize my images to fit the patch dimensions, but I thought the purpose was that we don't need to resize anything or have images with the same dimensions.

  • @abderrahimhaddadi4023
    @abderrahimhaddadi4023 3 years ago +1

    Hello doctor .. Can you give me some guidelines/tips&tricks and resources to read to achieve better results/metrics for semantic segmentation for medical images?
    Thanks a lot for all the videos ^^ !

  • @BernardoSaab
    @BernardoSaab 1 year ago

    Thank you for the great presentation! Would you recommend using a 3D U-Net for abdominal image segmentation? And if so, is there a strong reason to use this architecture over a 2D U-Net?

  • @user-maomao-tsai
    @user-maomao-tsai 6 months ago

    Excuse me, Sir, can we still use APEER on arivis Cloud to view multi-channel segmentation in 3D like in this task?

  • @umairsabir3519
    @umairsabir3519 3 years ago +1

    Can we please have a tutorial on YOLO v3 or v4 using Keras?

  • @georgevonfloydmann1797
    @georgevonfloydmann1797 5 months ago

    Can I use this workflow if I want to segment liver tumors? I have a dataset of NIfTI files with different depths; their dimensions vary. How can I divide every NIfTI file into equal patches?

  • @clueless1550
    @clueless1550 2 years ago

    If I don't want to use the concept of patchify, can I use the whole volume as input to the 3D CNN?

    • @DigitalSreeni
      @DigitalSreeni  2 years ago

      If your system memory can handle working with the entire 3D volume, then you do not need patchify. The 3D CNN itself has no such limitation; it relies on your system memory to access the data.
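
      For anyone who wants to try that, a minimal sketch (assuming the whole volume fits in memory and the model expects 3-channel input) just stacks the channels and adds a batch axis:

      import numpy as np

      # Hypothetical single grayscale volume of shape (64, 64, 64).
      volume = np.random.rand(64, 64, 64).astype(np.float32)

      # Repeat to 3 channels and add a batch axis -> (1, 64, 64, 64, 3).
      batch = np.expand_dims(np.stack((volume,) * 3, axis=-1), axis=0)

      # prediction = model.predict(batch)  # model: a 3D U-Net built elsewhere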

  • @mbq215
    @mbq215 8 months ago

    Hi @DigitalSreeni ... can you tell me if the backbone model only works for a symmetric volume and sub-volume in this case? I have a volume (128x128x51) and it is throwing an error. Please help me. Thank you.

  • @dhaferalhajim
    @dhaferalhajim 1 year ago

    Thanks a lot... I have 3D CT scan medical image data and I want to segment it with a 3D U-Net. I don't understand how I can get the mask image as input.

    • @houdahassouane636
      @houdahassouane636 1 year ago

      In order to train your model, you need to provide masks too and then test it on unseen data without labels

  • @shabinaa6407
    @shabinaa6407 6 months ago

    Can you share some videos on how to make multi-label mask data ready for semantic segmentation? Specifically, I have 2 binary images (with pixel values of 0 and 1). Now, how do I prepare the mask data and how do I label without any tools (because we already have binary images)?

  • @rajeshwarsehdev2318
    @rajeshwarsehdev2318 3 years ago

    What about loading batches of images, e.g. 125 images? Below are the steps I am trying to perform.
    Data -> imgHeight = 256, width = 256 & channels = 196
    1. Stored images into NumPy & resized; the result is -> (125, 128, 128, 192)
    But how would I use patchify and restructuring here? As I try to preprocess these, Google Colab memory keeps crashing.

  • @srinivasanvenkatramanan171
    @srinivasanvenkatramanan171 3 years ago

    I have image data of size (240,240,155,4), where height = 240, width = 240, depth (layers) = 155, and channels = 4. How can we use patchify for this?

  • @surajneelakantan6625
    @surajneelakantan6625 3 years ago

    Hello sir,
    This is a wonderful video. Can I use this for n-d NumPy array data stored in .npy files (which are data of 3D images)? The masks are also in .npy files, and they are basically 0 for uninteresting regions and 1 for regions of interest.

  • @linachato5817
    @linachato5817 3 years ago

    Great video, thank you so much for the great explanation! I just have a few questions: did you use the weight values from VGG16 and start training your U-Net from those weights, or did you use the VGG16 layers in the encoder instead of the usual U-Net layers? Also, is the U-Net model that you downloaded a 3D U-Net? For which application was the downloaded model trained, and which dataset was used to train it?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      I loaded imagenet-trained weights for VGG16 to start the training, as explained at 30:15 in the video. And yes, the U-Net is 3D, from the segmentation-models-3D library. Not sure what you mean by which application the model got trained on; I was using a tomography image of sandstone, commonly used in oil and gas exploration.
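
      For reference, constructing that model looks roughly like this (a sketch; the patch size, number of classes and loss should be adjusted to your own data):

      import segmentation_models_3D as sm

      # 3D U-Net with a VGG16 encoder initialized from imagenet weights.
      # 64x64x64 patches with 3 channels are assumed here.
      model = sm.Unet('vgg16',
                      input_shape=(64, 64, 64, 3),
                      encoder_weights='imagenet',
                      classes=4,
                      activation='softmax')
      model.compile(optimizer='adam', loss='categorical_crossentropy',
                    metrics=['accuracy'])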

  • @matthewavaylon196
    @matthewavaylon196 2 years ago

    Have you tried training from scratch? Any recommendations for doing so?

  • @syedsajid7823
    @syedsajid7823 3 years ago

    You are amazing.
    My question is: suppose we are using the BraTS dataset for segmentation purposes, how is the following statement going to change:
    encoder_weights='imagenet'

  • @mehnaztabassum1878
    @mehnaztabassum1878 1 year ago

    I appreciate your effort! In my case, my training images (3D, T1 modality) are in NIfTI (.nii.gz) format. How can I convert them into a .tif stack? Please help me in this regard.

    • @DigitalSreeni
      @DigitalSreeni  1 year ago +1

      Please check this playlist about BraTS2020 data segmentation. Your questions may be answered. th-cam.com/play/PLZsOBAyNTZwYgF8O1bTdV-lBdN55wLHDr.html
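
      If it helps, one common way to do the conversion in Python is with nibabel and tifffile (an illustrative sketch with hypothetical file names, not code from the video):

      import nibabel as nib
      import numpy as np
      import tifffile

      nii = nib.load('image_t1.nii.gz')
      volume = nii.get_fdata()                       # typically (H, W, Z)

      # tif stacks are usually slice-first, so move the z axis to the front.
      volume = np.transpose(volume, (2, 0, 1)).astype(np.float32)
      tifffile.imwrite('image_t1.tif', volume)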

    • @mehnaztabassum1878
      @mehnaztabassum1878 1 year ago

      @@DigitalSreeni Thanks for the reply. I will definitely follow your advice.

  • @talha_anwar
    @talha_anwar 3 years ago

    Much needed tutorial. If the data's z-axis is different in every image, then what should we do?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      What do you mean by an axis being different in different datasets? If you mean that the z-axis scale is different, then it doesn't matter much for training. In fact, it may help generalize the model a bit. You care about scale when you segment images and report object measurement parameters. Until then, a pixel or a voxel is measured in pixels or voxels and not real units.

    • @talha_anwar
      @talha_anwar 3 years ago

      @@DigitalSreeni Some images have shape (512,512,110), some have (512,512,103), (512,512,117), etc. Do I need to bring them to one scale?
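
      One common way to handle this (a hedged sketch of one possible approach, not something shown in the video) is to crop or zero-pad every volume along z to a fixed depth that is a multiple of your patch size:

      import numpy as np

      def fix_depth(volume, target_z=96):
          """Crop or zero-pad a (H, W, Z) volume along z to target_z slices."""
          z = volume.shape[2]
          if z >= target_z:
              return volume[:, :, :target_z]                      # crop extra slices
          return np.pad(volume, ((0, 0), (0, 0), (0, target_z - z)))  # pad with zeros

      # e.g. depths of 110, 103 and 117 all become (512, 512, 96)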

  • @johannesschmidt8611
    @johannesschmidt8611 3 years ago

    What are the 3D models trained on? What datasets were used?

  • @王松晨
    @王松晨 1 year ago

    Hello, I want to ask: I used the APEER website to open your tif file, but there is no 2D-to-3D option in the lower right corner. Is this feature paid?

  • @faheem5191
    @faheem5191 2 years ago

    How can I handle a different number of slices in a dataset, such as VerSe2020?

  • @saadiaazeroual8857
    @saadiaazeroual8857 3 years ago

    Hello Mr. Sreeni, thank you for this video!
    I have one question: I want to know how to choose the best model to segment multiple organs. Are they all free and open access like U-Net? Please answer me, I am very confused! Thank you.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago +1

      If you want to put together your own code, all useful Python libraries are free. Also, you will find a lot of useful code in the public domain. If you don't want to write your own code, you can try www.apeer.com where you can annotate, train, and segment your images; it is free. I am sure you will find other online and offline platforms that offer these services.

    • @saadiaazeroual8857
      @saadiaazeroual8857 3 years ago

      Thank you a lot for this information!

  • @hamadyounis1840
    @hamadyounis1840 2 years ago

    How can I apply this to my dataset? My data shape is 190. When I run it, I am getting:
    IndexError: index 255 is out of bounds for axis 1 with size 1

  • @matthewavaylon196
    @matthewavaylon196 2 years ago

    Pip installing those packages changes the TensorFlow version to 2.x instead of keeping the 1.x defined at first.

  • @rijotom8839
    @rijotom8839 3 years ago

    wonderful presentation

  • @gabrielmonacoribeirodasilv8643
    @gabrielmonacoribeirodasilv8643 1 year ago

    Please do some videos on working with this pore model using the porespy and openpnm libraries.

  • @johnyang5440
    @johnyang5440 2 years ago

    Thank you for the fantastic video. I am establishing 3D cardiac muscle cell segmentation. I don't know if it will also work on that. Hope it will.

  • @kibetwalter8528
    @kibetwalter8528 2 years ago

    You are just the best

  • @cim5410
    @cim5410 3 years ago +2

    Your video is very helpful to me. I would like to ask whether you have published a paper?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago +1

      I am not into academic research, my day job is in marketing so no opportunities to publish papers. I do have a few patents related to machine learning. Of course, you will find many of my previous publications online, just google search for my name Sreenivas Bhattiprolu :)

  • @dev834
    @dev834 3 years ago

    it would be nice if you put out a video about reading, preprocessing and segmentation of color images

  • @sophiez7952
    @sophiez7952 1 year ago

    Thank you for your great work!

  • @rameshwarsingh5859
    @rameshwarsingh5859 3 years ago

    Thank you Sreeni Sir

  • @monaallaam8652
    @monaallaam8652 1 year ago

    Can we find a nice tutorial/code like this in PyTorch?

  • @ahhhhhhhh6947
    @ahhhhhhhh6947 2 years ago

    Amazing explanation

  • @sophiez7952
    @sophiez7952 1 year ago

    Do the three dimensions have to be the same, like 64*64*64? If the last dimension is smaller, is it still okay?

    • @georgevonfloydmann1797
      @georgevonfloydmann1797 5 months ago

      Hello, did you find the answer to your question? I have a similar concern.

  • @nouhinchannel
    @nouhinchannel 3 years ago

    Hello, can you please show us how you annotated your dataset?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      I used www.apeer.com. I annotated the images and downloaded the masks. You can watch the video on how to do the annotation on APEER. Of course, there are many other annotation tools out there but for my purposes APEER is the easiest.
      Disclaimer: APEER is developed by my team at work. It is free so you can check out if it fits your needs.

  • @carlotarivera9754
    @carlotarivera9754 3 years ago

    Hello, thanks for the video. I have a question: I have a DICOM file of a cerebral angiography. I opened it in ImageJ and there are 384 images. Could I segment them and convert them into 3D with your tutorial? If not, how could I?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago +1

      Yes, you can follow this method to segment your 3D dataset. Please convert your DICOM image into a 3D tiff stack, for example using ImageJ. This will give you a volume you can work with. You can use www.apeer.com for image annotation if you do not have an existing solution.

    • @carlotarivera9754
      @carlotarivera9754 3 years ago

      @@DigitalSreeni Thanks, I have a question... I don't know how to convert to 3D from ImageJ, but with 3D Slicer I do. Do you know if a lot of information is lost from an AVM with that software?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago +1

      DICOM is a tricky format and I do not have a lot of knowledge about all types of DICOM. Normally, you should be able to open the image in ImageJ using one of the plugins and then save the opened image as a tiff stack. You just need to find the right plugin that can handle DICOM files.
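
      As an alternative to the ImageJ route, a small Python sketch using pydicom and tifffile (an assumption-laden example for an ordinary single-frame DICOM series in one folder, with hypothetical file names) could look like this:

      import glob
      import numpy as np
      import pydicom
      import tifffile

      # Read every slice in the series and sort by slice position along z.
      slices = [pydicom.dcmread(f) for f in glob.glob('dicom_series/*.dcm')]
      slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))

      # Stack into a (Z, H, W) volume and save as a multi-page tiff.
      volume = np.stack([ds.pixel_array for ds in slices])
      tifffile.imwrite('volume.tif', volume)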

    • @carlotarivera9754
      @carlotarivera9754 2 years ago

      @@DigitalSreeni thaaaanks :D

  • @anitakhanna2766
    @anitakhanna2766 3 years ago

    Superbly explained, sir. Sir, can I use the same procedure for a 2-class problem as well?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago +1

      Yes, of course.

    • @anitadhawan0308
      @anitadhawan0308 3 years ago

      @@DigitalSreeni Thanks a lot, sir. Sir, what should I do if I want to train the models with different patient volumes one by one? A single .tif with a huge number of images (adding the volumes together), given all at once, is making the system crash, and if we want the model to learn, we have to provide many test volumes. Please suggest something, I am stuck.

  • @kalhormh2883
    @kalhormh2883 2 years ago

    How can we convert the EX.DCM file format to TIFF? There is no such option in the Cirrus software. Please help!

    • @DigitalSreeni
      @DigitalSreeni  2 years ago +1

      I am not familiar with DCM format, sounds like some sort of DICOM file format. You need to look for libraries that can read these files.

    • @kalhormh2883
      @kalhormh2883 2 years ago

      @@DigitalSreeni Thanks for the quick response. DCM is the file format for the Zeiss OCT device. I've been trying different ways, like Python or other software, to open it, but with no success. After some searching, I noticed that this is not just a regular DICOM format that you can handle easily; it's locked by the company and not public. I sent emails to the Zeiss team as well to see if there is any way to unlock and convert these files, and I am now waiting for a response.

  • @salmahayani2710
    @salmahayani2710 3 years ago

    Hello, firstly thanks for this useful video. I want to ask something about the Dice coefficient loss. I'm doing semantic segmentation on 3D CT scans (the LUNA16 database) using a 3D U-Net, and I have a problem: my Dice loss gets stuck at 50% and doesn't decrease anymore, for both training and validation. Do you have any idea what the problem could be?
    Waiting for your answer :)

    • @houdahassouane4018
      @houdahassouane4018 1 year ago

      Salam Salma, I hope you're doing well. We're doing semantic segmentation on 3D XCT images too, but we have a problem with large data and insufficient RAM; it crashes at the training stage (we're using a 3D U-Net too). Did you face the same problem, and if so, how did you manage it? Even 32 GB won't do the work :/

    • @salmahayani2710
      @salmahayani2710 1 year ago

      @@houdahassouane4018 Hi Houda. For me, the first solution was to use the TorchIO library, which helped me load data on the fly during training, so you don't need to load all the data into RAM, and with the same library you can even do data augmentation on the fly. The second solution was to switch to a machine with an NVIDIA GPU.
      I hope this helps you deal with the problem.
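
      For reference, the TorchIO patch-based loading mentioned above looks roughly like this (a sketch with hypothetical file names; see the TorchIO docs for the exact options):

      import torchio as tio
      from torch.utils.data import DataLoader

      # One subject = one image/mask pair; add one per scan in your dataset.
      subjects = [tio.Subject(image=tio.ScalarImage('ct_001.nii.gz'),
                              mask=tio.LabelMap('mask_001.nii.gz'))]
      dataset = tio.SubjectsDataset(subjects)

      # The queue samples 64^3 patches on the fly, so whole volumes never
      # need to sit in RAM at once.
      queue = tio.Queue(dataset, max_length=100, samples_per_volume=8,
                        sampler=tio.UniformSampler(patch_size=64))
      loader = DataLoader(queue, batch_size=2)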

    • @houdahassouane4018
      @houdahassouane4018 1 year ago

      @@salmahayani2710 Thanks a lot for replying, I really appreciate it. I'll try it. I have another question, if you don't mind. Did you use his Colab notebook? If so, didn't you get a problem with the output label and ground truth not showing anything, just a plain purple screen?

    • @salmahayani2710
      @salmahayani2710 1 year ago

      @@houdahassouane4018 Do you mean Google Colab?

  • @elnaz8202
    @elnaz8202 3 years ago

    Very nice, thank you. I want to use different datasets where the image size is (512,512); how can I use them without errors?

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      Crop them to a smaller size, as you may not be able to fit 512x512x512 volumes in memory.

  • @talllankywhiteboy
    @talllankywhiteboy 2 years ago

    Really enjoyed the video and have been trying to use the code, but the step at 23:36 is a huge waste of resources. Tripling the size of my already fairly large images basically eats up all the RAM Colab offers. Really would have liked to see a more efficient approach.

    • @houdahassouane4018
      @houdahassouane4018 1 year ago +1

      Hello sir. Indeed, the RAM provided by Colab is insufficient. I'm facing the same problem; did you find a way around it?

    • @talllankywhiteboy
      @talllankywhiteboy 1 year ago

      @@houdahassouane4018 I sadly never managed to actually solve the issue of needing the three channels. One strategy I used to make things better, though, was to convert the data type of the arrays to be as small as possible. For me that meant initializing some of my arrays with a uint8 datatype.
      Example:
      train_lbls = np.zeros(train_dims, dtype='uint8')
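
      Building on that, the 3-channel step itself can also be kept in uint8 so the stacked array stays as small as possible (a sketch, assuming the grayscale volume is already uint8):

      import numpy as np

      volume = np.random.randint(0, 256, (64, 64, 64), dtype=np.uint8)

      # Repeat the grayscale volume into 3 channels without promoting the dtype;
      # the result stays uint8 until it is fed to the network.
      volume_3ch = np.stack((volume,) * 3, axis=-1)   # (64, 64, 64, 3), uint8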

  • @karthikp7291
    @karthikp7291 3 years ago

    Sir, I have 140 3D NIfTI files, and I need to extract patches on the fly and use a data generator. In your case, you have loaded one volume; how do I scale this to my case and load data such that it does not run out of memory? Can we work on this together and make a flexible pipeline for everyone to use?

    • @karthikp7291
      @karthikp7291 3 years ago

      Right now, I save every patch for all patients and then use a dataloader to load the data. But this is not very flexible if I want to change the patch size during fine tuning.

    • @saifeddinebarkia7186
      @saifeddinebarkia7186 3 years ago

      @@karthikp7291 I ran out of memory trying to run the 3D U-Net on the BraTS2020 dataset despite using batch_size = 1 and having 6 GB of memory. I think 3D segmentation needs a lot of memory :/

    • @karthikp7291
      @karthikp7291 3 years ago

      @@saifeddinebarkia7186 yes, 3D segmentation requires a lot of memory. You need to create patches of mxnxt size and then train.

    • @houdahassouane636
      @houdahassouane636 1 year ago

      @@saifeddinebarkia7186 Hello sir, did you find a way to train your model? My Colab crashes and the RAM space doesn't help. I'd like to know whether Colab Pro would be of great help or not. Thanks in advance.

  • @kibetwalter8528
    @kibetwalter8528 2 years ago

    Can you combine the 3D U-Net with a GNN/GCN at the base layer?

    • @kibetwalter8528
      @kibetwalter8528 2 years ago

      Like in this paper
      A joint 3D UNet-Graph Neural Network-based method for Airway Segmentation from chest CTs

  • @pandian1537
    @pandian1537 1 year ago

    Hi brother, could you give me any idea or method for how to get patches from a 3D MRI image?

    • @DigitalSreeni
      @DigitalSreeni  1 year ago

      You can use the patchify library or, of course, write your own code.
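
      A minimal patchify sketch for a 3D volume (assuming the volume dimensions are divisible by the patch size, which non-overlapping patching expects):

      import numpy as np
      from patchify import patchify

      volume = np.random.rand(128, 128, 128)          # e.g. a 3D MRI volume

      # Non-overlapping 64x64x64 patches: result shape is (2, 2, 2, 64, 64, 64).
      patches = patchify(volume, (64, 64, 64), step=64)

      # Flatten the patch grid into a list for training: (8, 64, 64, 64).
      patches = patches.reshape(-1, 64, 64, 64)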

  • @ritikaagarwal112
    @ritikaagarwal112 3 years ago

    Thank you sir for sharing the knowledge. Any plans to cover UNet++ architecture in upcoming lectures?

  • @FDXMSAIF
    @FDXMSAIF 1 year ago

    How do you create the mask image set? Please help.

  • @gurinderjeetkaur8087
    @gurinderjeetkaur8087 3 years ago

    Please create a video on a 3D U-Net for the BraTS dataset too, if possible.

  • @a96yonan
    @a96yonan 3 years ago

    Can this work with 2D images too?

  • @wrtxubaid9114
    @wrtxubaid9114 1 year ago

    Can we use .nii files with this?

  • @gabrielcerono306
    @gabrielcerono306 3 years ago

    Amazing work!!

  • @moumitamoitra1829
    @moumitamoitra1829 3 years ago

    Could you please make a video on the classification of 3D images using deep learning?

    • @moumitamoitra1829
      @moumitamoitra1829 3 years ago

      I want to learn 3D image-based classification using different pretrained deep CNN models. Please help us.

  • @AlexanderFIOsman
    @AlexanderFIOsman 3 years ago +1

    Very nice! Thanks. It would be great if you could make videos on the adaptive domain using transfer learning (VGG16, Inception, etc.) and GANs.

  • @mohamedomar-rp3kz
    @mohamedomar-rp3kz 2 years ago

    I'm not able to open Apeer for the first time! Could you help?

    • @DigitalSreeni
      @DigitalSreeni  2 years ago

      Please post it on the APEER discord server: discord.gg/xffrNwm78e

    • @mohamedomar-rp3kz
      @mohamedomar-rp3kz 2 years ago

      @@DigitalSreeni I did and I got no reply yet!

  • @olubukolaishola4840
    @olubukolaishola4840 3 years ago

    Thank you 🙏🏿

  • @cirobrosa
    @cirobrosa 1 year ago

    Keep it up!

  • @МатвейБрюшков
    @МатвейБрюшков 3 years ago

    Let's make a video about semantic segmentation of satellite images (buildings, roads, forests, rivers) using U-Net.

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      What dataset do you recommend for semantic segmentation of satellite images?

    • @МатвейБрюшков
      @МатвейБрюшков 3 years ago

      @@DigitalSreeni For example, this www.kaggle.com/humansintheloop/semantic-segmentation-of-aerial-imagery

    • @DigitalSreeni
      @DigitalSreeni  3 years ago

      @@МатвейБрюшков Thanks. This looks like a small dataset but fun to work with. I will try to record a video.

    • @МатвейБрюшков
      @МатвейБрюшков 3 years ago

      @@DigitalSreeni Can you please give me a link to the code that divides large training images into 256x256 parts? Thanks.

    • @МатвейБрюшков
      @МатвейБрюшков 3 years ago

      @@DigitalSreeni Oh, I forgot to send a link to another dataset: zenodo.org/record/1154821#.YImFqaGEaUk

  • @Suman-zm7wx
    @Suman-zm7wx 3 years ago

    Sir, could you please provide a video regarding "Super Resolution" using SRGANs or any other algorithm, because I need an explanation just from you 😇

    • @DigitalSreeni
      @DigitalSreeni  3 years ago +1

      Sure.

    • @Suman-zm7wx
      @Suman-zm7wx 3 years ago

      @@DigitalSreeni thank you sir, much appreciated ❤

  • @abbasagha9661
    @abbasagha9661 11 months ago

    Thanks!

    • @DigitalSreeni
      @DigitalSreeni  11 months ago

      Thank you very much.

  • @morniang3845
    @morniang3845 3 years ago

    Thank you.

  • @manishnarnaware6507
    @manishnarnaware6507 2 years ago

    Dear Sir, can you help me with my project?

    • @manishnarnaware6507
      @manishnarnaware6507 2 years ago

      I can pay you for that

    • @DigitalSreeni
      @DigitalSreeni  2 years ago +1

      Sorry, I have no time to help with individual projects. I wish I had the time, but I have a full-time job that requires my full attention. I am sure you will find some freelancers if you are willing to pay.

  • @fuegopuro5933
    @fuegopuro5933 3 years ago

    LiTS is also on Kaggle!

    • @nouhamejri1698
      @nouhamejri1698 3 years ago

      Hello, have you ever worked with the BraTS dataset?

    • @fuegopuro5933
      @fuegopuro5933 3 years ago

      @@nouhamejri1698 No, but I'm looking forward to it.