87 - Applications of Autoencoders - Denoising using custom images

  • Published on Oct 26, 2024

Comments • 51

  • @kaokuntai · 2 years ago +1

    Could you provide the link for downloading your dataset?
    Like Google Drive.

  • @eylulmood4830 · 2 years ago +1

    Where did you get the noisy and clean images? Please explain.

  • @computer-qz2uz · 3 years ago +2

    Where did you get the noisy and clean images from? Please explain.

    • @DigitalSreeni · 3 years ago +1

      I artificially added noise. You need to be careful when you do this because the trained model may be good only at cleaning images with artificial noise.
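
      A minimal sketch of creating such pairs by adding artificial Gaussian noise to clean images (the folder names and sigma value below are placeholders, not values from the video):

      import os
      import cv2
      import numpy as np

      clean_dir = "clean_images"   # hypothetical folder of clean images
      noisy_dir = "noisy_images"   # output folder for the noisy copies
      os.makedirs(noisy_dir, exist_ok=True)

      sigma = 25  # noise strength; tune for your data

      for fname in os.listdir(clean_dir):
          img = cv2.imread(os.path.join(clean_dir, fname), cv2.IMREAD_GRAYSCALE)
          if img is None:
              continue  # skip non-image files
          noise = np.random.normal(0, sigma, img.shape)             # zero-mean Gaussian noise
          noisy = np.clip(img.astype(np.float32) + noise, 0, 255)   # keep pixels in the valid range
          cv2.imwrite(os.path.join(noisy_dir, fname), noisy.astype(np.uint8))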

  • @sriramsvk · 2 years ago

    Thanks, Sreeni, for the video series on autoencoders. Regarding the accuracy metric, I think the size of the images also matters, apart from the sample size. Your thoughts on this, please?

  • @adityanjsg99 · 2 years ago

    For any CV problem, Sreeni has a video/solution! Thanks.
    Image segmentation, instance segmentation, and now autoencoders.

  • @ruhidave5826 · 2 years ago

    Hi, can you please provide some details about the autoencoder you are using here?
    There are several types of autoencoders, so which one are you using?

  • @MainakGhosh7 · 4 years ago +1

    Excellent video!

  • @ellisiverdavid7978 · 4 years ago

    Hi, Sir Sreeni! :)
    I'm just wondering: after we obtain the most important features from the bottleneck of our trained neural network, is it possible to apply the denoising capability of the autoencoder to a live video feed that is highly correlated with the training images?
    Would this be better, or even recommended, compared with the traditional denoising filters in OpenCV for real-time video?
    I'd love to learn more from your expertise and advice as I explore this topic further. Thank you for the insightful explanation, by the way! Subscribed! :)

    • @DigitalSreeni · 4 years ago

      Once you have a trained denoising model, built with autoencoders, Noise2Void, or other networks, you can use it to denoise future images. The future images can be a z-stack or a time series (video). Live video is basically processed frame by frame as images.
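
      A rough sketch of what applying a trained model frame by frame might look like (the model file name, the 256x256 frame size, and the grayscale input are assumptions for illustration, not details from the video):

      import cv2
      from tensorflow.keras.models import load_model

      model = load_model("denoise_autoencoder.h5")   # hypothetical trained denoising model
      cap = cv2.VideoCapture("input_video.mp4")      # or 0 for a live camera feed

      while True:
          ret, frame = cap.read()
          if not ret:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          x = cv2.resize(gray, (256, 256)).astype("float32") / 255.0  # match the training size
          x = x.reshape(1, 256, 256, 1)                               # add batch and channel axes
          denoised = model.predict(x, verbose=0)[0, :, :, 0]          # predicted clean frame in [0, 1]
          cv2.imshow("denoised", denoised)
          if cv2.waitKey(1) & 0xFF == ord("q"):
              break

      cap.release()
      cv2.destroyAllWindows()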

  • @jothegamechanger · 4 years ago

    thank you

  • @shayanmesdaghi7587 · 2 years ago

    Dear Dr. Sreenivas,
    Thanks for your tutorial. I have two questions:
    1. In your code there are two paths, clean and noisy, but on GitHub there is only one series of images. Do the noisy and clean images necessarily have to be the same, or not?
    2. I use echocardiography images. Unfortunately, whatever I do, the accuracy does not rise above 30%, although the loss decreases below 2%. The number of images is 60,000 and the size is 112. I have changed optimizers and loss functions. What is your suggestion?

    • @rbhambriiit · 1 year ago

      I have a similar query. I could not find the data files on GitHub.

  • @wiemrachman4788 · 9 months ago

    Hi Sir, where is "Save the Model"?

  • @limchoonchen341 · 2 years ago

    Hi, your video is so good and everything is explained in detail, in terms of both the coding and the slide presentation. However, could you please make a video about low-light image enhancement using CNNs? It is extremely important for my study. Thank you.

  • @krishnakumars5947 · 4 years ago

    Hello sir, it's really good to watch your videos. I have a doubt: can we remove skin hairs in images using a denoising autoencoder?

    • @DigitalSreeni · 4 years ago +1

      Yes. Image restoration is one of the major applications for autoencoders. It does take a lot of data to train but once you have a trained model you can use it confidently. First search online to see if you can find any pre-trained models.

  • @unamattina6023 · 2 years ago

    How can I download the dataset you use in this video? Other datasets I found on Google are about 6 GB or so. Could you please share the dataset on Google Drive or a similar site?

  • @LakshyaIIITD · 6 months ago

    What if the images are from MNIST-back-image? How do we denoise them?

  • @ruhidave5826 · 2 years ago

    numpy.core._exceptions.MemoryError: Unable to allocate 524. MiB for an array with shape (1341, 320, 320, 1) and data type float32
    Can you please help me with this error?

  • @shibilit188 · 3 years ago

    Thanks for the video, sir. I tried to implement the same approach to denoise DICOM CT images in Colab, but I am not getting results. Can you suggest the changes to be made for DICOM CT images?

    • @DigitalSreeni · 3 years ago +1

      Denoising using autoencoders can take a lot of training data. For CT images, try BM3D (a traditional approach). If you want deep learning, please look at a paper called Noise2Void. We implemented that on our image analysis platform APEER; you can use it to denoise your images. It is free: www.apeer.com.
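
      For reference, a small sketch of the BM3D route mentioned above, using the pip-installable bm3d package (the file name and sigma_psd value are placeholders you would tune, not values from the video):

      import numpy as np
      import bm3d
      from skimage import io, img_as_float

      noisy = img_as_float(io.imread("ct_slice.png", as_gray=True))   # hypothetical CT slice, scaled to [0, 1]
      denoised = bm3d.bm3d(noisy, sigma_psd=0.1)                      # sigma_psd: estimated noise standard deviation
      io.imsave("ct_slice_denoised.png", (np.clip(denoised, 0, 1) * 255).astype(np.uint8))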

    • @shibilit188 · 3 years ago

      Thank you

  • @rubisharma2916 · 3 years ago

    Sir, in your GitHub code for autoencoding a single image, how is your model learning from a single image and not a large dataset? An explanation would be fruitful for my research paper.

    • @DigitalSreeni · 3 years ago

      The model is learning from only a single image because with an autoencoder I am trying to reconstruct the same image. The trained model will only reproduce the input image, as that's all it was ever trained on. This model is not generalized to work in other situations; it was used to demonstrate the meaning of an autoencoder. In real life you'd be using an autoencoder as part of some application, such as denoising or anomaly detection, where you train it on larger datasets.
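
      A bare-bones sketch of that idea: a tiny convolutional autoencoder fit on a single image so it learns to reproduce only that image (the architecture, image size, and file name here are illustrative, not the exact code from the repository):

      import numpy as np
      from skimage import io, img_as_float
      from skimage.transform import resize
      from tensorflow.keras.models import Sequential
      from tensorflow.keras.layers import Conv2D, MaxPooling2D, UpSampling2D

      img = img_as_float(io.imread("single_image.png", as_gray=True))   # hypothetical input image
      x = resize(img, (256, 256)).reshape(1, 256, 256, 1)               # a "dataset" of one sample

      model = Sequential([
          Conv2D(32, 3, activation="relu", padding="same", input_shape=(256, 256, 1)),
          MaxPooling2D(2, padding="same"),     # encoder: compress to the bottleneck
          Conv2D(32, 3, activation="relu", padding="same"),
          UpSampling2D(2),                     # decoder: expand back to the input size
          Conv2D(1, 3, activation="sigmoid", padding="same"),
      ])
      model.compile(optimizer="adam", loss="mean_squared_error")
      model.fit(x, x, epochs=500, verbose=0)   # input = target: learn to reconstruct this one image

      reconstructed = model.predict(x)[0, :, :, 0]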

  • @HaiderAli-pd4wd · 4 years ago

    Thanks for your outstanding explanation. I am getting this error; would you give me some suggestions?
    IndexError: index 2 is out of bounds for axis 0 with size 2

    • @DigitalSreeni · 4 years ago +2

      You seem to have an array with shape 2 whereas it may be expecting an array of shape 3. Please trace the variable array shapes and pin down the issue. If needed, expand dimensions to make sure the inputs comply with the expected array shape.
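
      For example, if an image loads as (rows, cols) but the model expects a channel or batch axis, something like this fixes the shape (a generic illustration, not tied to the exact line that failed):

      import numpy as np

      img = np.zeros((320, 320))             # stand-in for a grayscale image loaded as 2-D
      img = np.expand_dims(img, axis=-1)     # add a channel axis -> (320, 320, 1)
      batch = np.expand_dims(img, axis=0)    # add a batch axis   -> (1, 320, 320, 1)
      print(batch.shape)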

    • @HaiderAli-pd4wd · 4 years ago

      @DigitalSreeni I got it.

  • @adithiajovandy8572 · 4 years ago +1

    What if I just use noisy images? How do I train it?

    • @adithiajovandy8572 · 4 years ago +1

      Sorry, I mean you make two paths, clean images and noisy images. I just need to denoise noisy images, so I don't need clean images, but the training code needs both the noisy and clean parameters used in your code. How do I do this just for denoising? Thanks in advance.

    • @DigitalSreeni · 4 years ago +2

      You need both noisy and clean images only for training purposes, where you create a model. Then you can apply it on noisy images only. You can take an existing model if you can find something online and enhance it with your own data; this process is called transfer learning. In other words, you need some model that is trained on noisy and clean images so you can perform denoising. If you don't have such a model, then try classical approaches like non-local means denoising or anisotropic diffusion; they work well for different types of images, including microscopy, MRI, and CT.
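
      A small sketch of both points: the clean/noisy pairs are only needed at training time, and a classical filter such as non-local means is a reasonable fallback when no clean images exist (the model and variable names here are assumptions):

      import numpy as np
      from skimage.restoration import denoise_nl_means, estimate_sigma

      # Deep-learning route: pairs are needed only for fitting, not for inference.
      # autoencoder.fit(noisy_train, clean_train, epochs=50, batch_size=16)   # needs clean targets
      # denoised = autoencoder.predict(noisy_only)                            # inference uses noisy input only

      # Classical fallback when no clean images are available at all:
      noisy = np.random.rand(256, 256)        # stand-in for a real noisy image in [0, 1]
      sigma = np.mean(estimate_sigma(noisy))  # rough estimate of the noise level
      denoised = denoise_nl_means(noisy, h=1.15 * sigma, fast_mode=True,
                                  patch_size=5, patch_distance=6)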

  • @deepakkumarjyani1648 · 4 years ago

    We use: from keras.preprocessing.image import img_to_array
    This import gives an error like: module 'tensorflow' has no attribute 'name_scope'
    What should I do?

    • @DigitalSreeni · 4 years ago

      It depends on the version of TensorFlow. If you are using:
      import keras.xxx
      change it to: import tensorflow.keras.xxx
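
      For the specific import in the question, that change would look like this (module paths as in TensorFlow 2.x):

      # old standalone-Keras style that can clash with newer TensorFlow:
      # from keras.preprocessing.image import img_to_array

      # TensorFlow 2.x style:
      from tensorflow.keras.preprocessing.image import img_to_array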

    • @rohanthorat4282 · 4 years ago

      If you're using TensorFlow's Keras, make sure to use tensorflow.keras globally. Mixing standalone Keras and tf.keras will cause errors.

  • @kaluleramanzani9212 · 4 years ago

    Thank you. I would like to know how you can slice these images.

    • @DigitalSreeni · 4 years ago

      What do you mean by ‘slice’?

  • @mtahirrasheed5538 · 4 years ago

    The data used in this video is not available at your GitHub link. Where can we download this dataset from?

    • @DigitalSreeni · 4 years ago

      Sorry, GitHub is not letting me upload that many images. Please find some images and add noise yourself.

    • @frankchieng · 11 months ago

      @DigitalSreeni Can you upload your noisy and clean images as a Hugging Face dataset? As far as I know, if you have multiple images, you can upload them to HF for free.

  • @adithiajovandy8572 · 4 years ago

    nice material

  • @mtahirrasheed5538 · 4 years ago +1

    Can you please make a video on feeding patches of high-resolution images into the network and then combining the patches to reconstruct the full image? Thanks a lot in advance.

  • @ashwinig5160 · 3 years ago

    Sir, please explain the Noise2Void (N2V) denoising method code.

  • @maishamahboob7423 · 3 years ago

    Does this work if the images are in tif format?

    • @DigitalSreeni · 3 years ago

      Yes of course. You just need to be able to read the files.
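
      A quick sketch of reading .tif files, for example with the tifffile package or OpenCV (the file name is a placeholder):

      import tifffile
      import cv2

      img = tifffile.imread("sample.tif")                      # handles multi-page and 16-bit TIFFs
      img2 = cv2.imread("sample.tif", cv2.IMREAD_UNCHANGED)    # alternative for simple single-page TIFFs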

  • @rohanthorat4282 · 4 years ago

    Great videos. Subscribed

  • @vatsalshingala3225 · 1 year ago

    ❤❤❤❤❤

  • @Ruhgtfo · 4 years ago

    Oof, but I did install it on Win 10 using Anaconda Navigator:
    D:\z_fyp\extra test\clahe\python_for_microscopists-master>python 086--auto_denoise_mnist.py
    Traceback (most recent call last):
      File "086--auto_denoise_mnist.py", line 9, in <module>
        from tensorflow.keras.datasets import mnist
    ModuleNotFoundError: No module named 'tensorflow'

    • @DigitalSreeni · 4 years ago

      Not sure what your question is; obviously it cannot find TensorFlow in your Python environment, so please do pip install tensorflow.