Transfer Learning (C3W2L07)

  • Published 29 Jan 2025

Comments • 48

  • @gaidna4387
    @gaidna4387 15 days ago

    This lecture makes transfer learning very easy to learn. Initially, I didn't understand transfer learning, but now I understand it perfectly! Thank you!

  • @rogerab1792
    @rogerab1792 5 years ago +7

    Transfer learning is the key to AGI: once a neural network learns the patterns of logical relationships and is able to transfer that learning and apply it to new problems, an AI will be able to draw intelligent conclusions. All that is needed is an AI that picks out patterns in logical problems and learns from its conclusions; once it has learned, it needs to pick out similarities between new problems and the old, already-solved ones to transfer its neural pathways, but also COMBINE them to deduce new conclusions (combining them taking into consideration the problem being solved), conclusions which will be useful for solving new problems, and so on... (The key concept here is to find a way to 'COMBINE' trained neural nets to build newer, smarter and more general ones. An AI should be trained to learn to combine specific neural nets to solve new problems related to the ones already solved, then that AI that combines pathways should assist in the combination of neural nets for different problems, combining not only the neural nets themselves but also the neural nets that were used to combine nets in the past, to create better, more general combinations of nets.) All this will keep building on itself, and AGI will become more capable faster and faster as time passes.

  • @sehaba9531
    @sehaba9531 5 years ago +11

    Very clear and simple explanation, thank you so much

  • @SaniaSinghania
    @SaniaSinghania 1 year ago +3

    Don't know why people appreciate him. He does not break down complex concepts in simpler terms at all.

    • @TragicGFuel
      @TragicGFuel 1 year ago +3

      Are you being sarcastic?

    • @Ash-bc8vw
      @Ash-bc8vw 3 months ago

      You can input his explanation into ChatGPT and ask it to explain it to you as if you were a 3-year-old.
      Because he has already made it simple.

  • @kpagrawal2306
    @kpagrawal2306 3 months ago +2

    Very Nice Explanations - Dr Andrew

  • @salehjamali6716
    @salehjamali6716 2 years ago

    Thank you a lot for summarizing the whole concept in one small video. ❤️

  • @hannahJane300
    @hannahJane300 5 months ago

    I am reading a paper on GNNs. There were terms I did not understand. Thank you so much.

  • @spacecapitalism7152
    @spacecapitalism7152 6 years ago +19

    Video is done at 1:25. Lol!! He explained it so simply.

    • @trexmidnite
      @trexmidnite 4 years ago

      Maybe your brain is full at that point

    • @MrZouzan
      @MrZouzan 3 years ago +1

      @@trexmidnite rude

  • @waliullahmahir869
    @waliullahmahir869 1 year ago

    Nice explanation 😍

  • @hazema.6150
    @hazema.6150 1 year ago

    Wonderful, thanks for uploading this video

  • @johncsheath5037
    @johncsheath5037 2 years ago

    Brilliant, Andrew, thank you

  • @arpege3618
    @arpege3618 4 years ago

    Man, thanks for the info. I like your explanations and manner.
    Thank you again, mister.

  • @arnabmondal601
    @arnabmondal601 4 years ago +1

    Very clear explanation.

  • @HanhTangE
    @HanhTangE 4 years ago +2

    Pretty intuitive. I luv it :)

  • @nadirshah8600
    @nadirshah8600 3 months ago

    In the case of a time-series problem, can we do transfer learning with the exact same dataset as the pre-trained model's dataset?

  • @mdasadullahturja1481
    @mdasadullahturja1481 6 years ago +3

    Great explanation !!

  • @xDevoneyx
    @xDevoneyx 4 years ago +1

    Assuming in this example, because it is about images, we are talking about neural networks with convolution layers, right? Then I think of the visualizations of the filters in the convolution layers. And I do not understand how images of cats have similar structures to images of cells/tissue/bones in radiology. I can imagine that a network which is trained on lots of pictures of pebbles could help pre-training for images of cell tissue, because of the somewhat similar circular/elliptical structure. Could you comment on this?
    Another thing I am confused about is that you mention you could retrain only the last layer. Typically in a convnet this is a dense layer. Does that mean there are cases in which no convolution layers are retrained, yet the network is effective at predicting types of images it has never seen, just by retraining the dense layer?
    Thanks for the video, much appreciated!
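A minimal PyTorch sketch of the "retrain only the last layer" idea from this question. The tiny convnet and layer sizes are made up for illustration; in practice the frozen part would be a real pre-trained backbone:

```python
import torch
import torch.nn as nn

# A tiny convnet standing in for a pre-trained image model (hypothetical).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),                 # final dense layer: 2 output classes
)

# Freeze every layer except the last; only the Linear head stays trainable.
for layer in model[:-1]:
    for p in layer.parameters():
        p.requires_grad = False

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only the weight/bias of the final Linear layer
```

So yes: only the dense head's parameters receive gradient updates, while the frozen convolution filters keep acting as generic feature detectors (edges, textures, simple shapes) that often transfer across image domains.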

  • @nickbelanger5225
    @nickbelanger5225 3 years ago

    Very nice explanation

  • @shigstsuru765
    @shigstsuru765 4 years ago

    Got a question:
    Does transfer learning work if tasks A and B have the same input but different column varieties?
    So let's say A and B's task is to detect emotion (say, whether the person likes or dislikes something).
    A has a better detection rate than B, and I'm trying to transfer the high detection rate of A to B.
    Data A has anger, sorrow, joy, excitement, and Data B has anger, joy, excitement.
    I am a super-amateur in this field, so I'm not sure if I'm saying anything plausible, but it would be a great help to know whether the scenario is plausible or not.
    Many thanks

    • @nitishjaiswal6610
      @nitishjaiswal6610 4 years ago

      I guess that's exactly what transfer learning is about! From his example as well, the image recognition dataset will have low level features which are used during radiology diagnosis. Similarly in your example, if you train using 5 emotions, there'll be low level features which will help you to detect a different dataset (with 3 emotions) as well. We just might need to retrain the network as the new dataset doesn't contain the 2 dropped emotions so we need to make adjustments to the previous training to accommodate all the test data in the new classes

    • @andrewkreisher689
      @andrewkreisher689 4 years ago

      yep thats pretty much what i would use it for

  • @kabokbl2412
    @kabokbl2412 2 years ago

    Doesn't retraining on the dataset simply preserve the model architecture (i.e. the sequence and types of layers), since the weights and biases are retrained/fine-tuned?

  • @StefanBrock_PL
    @StefanBrock_PL 2 years ago

    Very useful background.

  • @MOHSINALI-bk2qo
    @MOHSINALI-bk2qo 5 years ago +1

    thank you sir

  • @nands4410
    @nands4410 6 years ago +2

    amazing!

  • @miketsui3a
    @miketsui3a 4 years ago +2

    now we are in 2020, but this vid is still in 360p

    • @poojakabra1479
      @poojakabra1479 2 years ago

      Ikr, at first I suspected this wasn’t the original channel

  • @raulmaldonado3477
    @raulmaldonado3477 6 years ago +7

    Is this video part of a bigger course?

    • @dtienloi
      @dtienloi 6 years ago

      Raul Maldonado yes

    • @AnshumanKumar007
      @AnshumanKumar007 6 years ago +1

      It's a part of the course on Coursera

  • @Fatima-kj9ws
    @Fatima-kj9ws 4 years ago

    Great Thanks

  • @littletiger1228
    @littletiger1228 1 year ago

    beautiful

  • @shashanksharma7202
    @shashanksharma7202 5 years ago

    How do you handle input data if the input size of the pre-trained model is different from the input images? For example, say the input size for Task A image recognition is 224 x 224 and the size for Task B diagnosis is 250 x 125.

    • @linachato5817
      @linachato5817 5 years ago +1

      By resizing the dataset (scaling or cropping). In the opposite situation, you can add a border to each image!
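A quick PyTorch sketch of both options from this reply, using a fake 250x125 image like the one in the question (the sizes are just the ones the commenters mention):

```python
import torch
import torch.nn.functional as F

# A fake batch of one 3-channel 250x125 image, as in the question.
img = torch.rand(1, 3, 250, 125)

# Option 1: resize straight to the 224x224 the pre-trained net expects
# (this distorts the aspect ratio).
resized = F.interpolate(img, size=(224, 224), mode="bilinear", align_corners=False)

# Option 2: pad the short side first (the "add a border" suggestion),
# then resize, so the aspect ratio is roughly preserved.
pad_w = (250 - 125) // 2                     # border width on each side
padded = F.pad(img, (pad_w, pad_w, 0, 0))    # (left, right, top, bottom)
square = F.interpolate(padded, size=(224, 224), mode="bilinear", align_corners=False)

print(resized.shape, square.shape)  # both torch.Size([1, 3, 224, 224])
```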

  • @shahi_gautam
    @shahi_gautam 5 years ago

    Can we use the concept of transfer learning with an SVM?

    • @linachato5817
      @linachato5817 5 years ago +2

      Yes, you can use the pre-trained model for feature extraction and then use the feature matrix to train an SVM, NN, ....
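A sketch of that feature-extraction recipe in PyTorch. The tiny network is a hypothetical stand-in for a real pre-trained model; the resulting feature matrix can then be fed to any classic classifier (e.g. `sklearn.svm.SVC`):

```python
import torch
import torch.nn as nn

# Stand-in for a pre-trained network (random weights here for brevity).
pretrained = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # everything up to here: features
    nn.Linear(16, 10),                       # original classification head
)

# Chop off the head and use the rest as a fixed feature extractor.
extractor = pretrained[:-1]
extractor.eval()

with torch.no_grad():
    X = extractor(torch.rand(8, 3, 64, 64))  # 8 images -> 8 feature vectors

print(X.shape)  # torch.Size([8, 16])
# X.numpy() is now a plain feature matrix, ready for e.g. an SVM:
#   sklearn.svm.SVC().fit(X.numpy(), y)
```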

  • @manojsriramula2355
    @manojsriramula2355 4 years ago

    A big thanks

  • @THEGASDRIP
    @THEGASDRIP 2 years ago

    this is cool

  • @swagatggautam6630
    @swagatggautam6630 2 years ago

    Time to reshoot the video with higher quality camera...

  • @saurabhagarwal8970
    @saurabhagarwal8970 4 years ago +2

    Code for Transfer Learning anyone ??
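A minimal end-to-end sketch of the transfer-learning recipe from the lecture, in PyTorch. The network and sizes are hypothetical; in practice the "pre-trained" part would come from loading real saved weights:

```python
import torch
import torch.nn as nn

# Hypothetical "pre-trained" Task A model (weights random here for brevity;
# in practice load saved weights, e.g. a torchvision model).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1000),                  # Task A head: 1000 classes
)

# Swap the Task A head for a fresh Task B head (say, 2 radiology classes).
model[-1] = nn.Linear(8, 2)

# Freeze everything except the new head.
for p in model.parameters():
    p.requires_grad = False
for p in model[-1].parameters():
    p.requires_grad = True

opt = torch.optim.Adam(model[-1].parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One fine-tuning step on a fake Task B batch.
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 2, (4,))
loss = loss_fn(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
print(loss.item())
```

With lots of Task B data you could instead unfreeze all layers and fine-tune the whole network, as the lecture notes.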

  • @rogerab1792
    @rogerab1792 4 years ago

    What if you trained the network on both sets at once, keeping the two different output layers? This way the common net doesn't get biased towards one set or the other and maybe even generalises better to new data for both sets. It would be like a regularization technique where you update the weights to both fit one set and regularize on the other. If this technique already exists, someone please reply; I'd like to see the results and whether it is useful for regularization.

    • @rogerab1792
      @rogerab1792 4 years ago

      multi-task learning
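Right, that's multi-task learning (covered elsewhere in this course). A minimal PyTorch sketch of the shared-trunk, two-head setup described in the question, with made-up sizes:

```python
import torch
import torch.nn as nn


class TwoHeadNet(nn.Module):
    """Shared trunk with one output head per task (multi-task learning)."""

    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
        self.head_a = nn.Linear(32, 3)   # Task A outputs
        self.head_b = nn.Linear(32, 5)   # Task B outputs

    def forward(self, x):
        h = self.trunk(x)
        return self.head_a(h), self.head_b(h)


net = TwoHeadNet()
xa, ya = torch.rand(4, 10), torch.randint(0, 3, (4,))
xb, yb = torch.rand(4, 10), torch.randint(0, 5, (4,))
out_a, _ = net(xa)
_, out_b = net(xb)

# The summed loss updates the shared trunk on both tasks at once,
# which is exactly the "fit one set, regularize on the other" effect.
loss = nn.functional.cross_entropy(out_a, ya) + nn.functional.cross_entropy(out_b, yb)
loss.backward()
print(loss.item())
```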

  • @736939
    @736939 2 years ago

    OK, now how do you program it in PyTorch? Update the architecture of the NN and train it on a different dataset. HOW?
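One common PyTorch mechanic for "update the architecture, keep the trained weights": copy every parameter whose name and shape match via `state_dict`, and let the new head keep its fresh initialization. The networks and sizes below are hypothetical:

```python
import torch
import torch.nn as nn

# "Pre-trained" Task A network (random weights here; in practice you would
# torch.load() a state dict saved after training on Task A).
task_a = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 10),               # Task A head: 10 classes
)

# New architecture for Task B: same trunk, different head (3 classes).
task_b = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 3),
)

# Copy every weight whose name AND shape match; the mismatched head
# (layer "2") is skipped and keeps its fresh random initialization.
src = task_a.state_dict()
dst = task_b.state_dict()
state = {k: v for k, v in src.items() if k in dst and v.shape == dst[k].shape}
task_b.load_state_dict(state, strict=False)

# Now train task_b as usual on the Task B dataset (optionally freezing the trunk).
print(torch.equal(task_b[0].weight, task_a[0].weight))  # True: trunk transferred
```

For standard image backbones the same idea is usually one line: load a torchvision model and replace its final layer, e.g. `net.fc = nn.Linear(net.fc.in_features, num_classes)` for a ResNet.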