Soheil Feizi

Videos

Deep Learning Foundations by Soheil Feizi : Large Language Models
11K views · 9 months ago
0:00 Basics of language models 2:30 Word2vec 16:27 Transfer Learning 19:23 BERT 1:00:39 T5 1:31:14 GPT1-3 1:53:05 ChatGPT 2:20:03 LLMs as Deep RL 2:53:00 Policy Gradient 3:32:50 Train your own LLM
Deep Learning Foundations by Soheil Feizi : Latent Text-to-Image Diffusion Models
1K views · 9 months ago
Deep Learning Foundations by Soheil Feizi : Diffusion Models: A Score Matching Perspective
1.2K views · 9 months ago
Deep Learning Foundations by Soheil Feizi : Diffusion Models
5K views · 9 months ago
Deep Learning Foundations by Soheil Feizi : Hierarchical Vision Transformers
950 views · 10 months ago
Course Webpage: www.cs.umd.edu/class/spring2024/cmsc720/
Deep Learning Foundations by Soheil Feizi : Vision Transformers
1.4K views · 10 months ago
Course Webpage: www.cs.umd.edu/class/spring2024/cmsc720/
Deep Learning Foundations by Soheil Feizi : Linear Attention
1.4K views · 10 months ago
Course webpage: www.cs.umd.edu/class/spring2024/cmsc720/
Deep Learning Foundations by Soheil Feizi : Transformers
2.9K views · 10 months ago
Explaining transformers: encoders and decoders
Deep Learning Foundations: Arash Vahdat's talk on "Denoising Diffusion Models"
2.3K views · 2 years ago
Course webpage: www.cs.umd.edu/class/fall2022/cmsc828W/ For a long time, the generative learning field especially around image generation was divided into two schools of thought: (1) generative adversarial networks (GANs) that generate high-quality samples at the cost of poor mode coverage and unstable training, and (2) likelihood-based models including variational autoencoders (VAEs), normaliz...
Deep Learning Foundations: Dorsa Sadigh 's talk on "Learning Robot Policies"
754 views · 2 years ago
Course webpage: www.cs.umd.edu/class/fall2022/cmsc828W/ ABSTRACT: A common paradigm of learning robot policies is to rely on expert demonstrations. However, we often have limited access to expert demonstrations and collecting such data on robots with high degrees of freedom can be quite challenging. In practice, there are many other sources of human data that allow for learning robust robot pol...
Deep Learning Foundations: Xinyun Chen 's talk on "Learning-Based Program Synthesis"
718 views · 2 years ago
Course webpage: www.cs.umd.edu/class/fall2022/cmsc828W/ With the advancement of modern technologies, programming becomes ubiquitous not only among professional software developers, but also for general computer users. However, gaining programming expertise is time-consuming and challenging. Therefore, program synthesis has many applications, where the computer automatically synthesizes programs...
Deep Learning Foundations: Constantinos Daskalakis's talk on Equilibrium Complexity & Deep Learning
739 views · 2 years ago
Course webpage: www.cs.umd.edu/class/fall2022/cmsc828W/ ABSTRACT: Deep learning has recently made significant progress in learning challenges such as speech and image recognition, automatic translation, and text generation. Much of that progress is being fueled by the success of gradient descent-based optimization methods in computing local optima of non-convex objectives. From robustifying mac...
Deep Learning Foundations: Mahdi Soltanolkotabi's talk on Feature learning & inverse problems
776 views · 2 years ago
Course webpage: www.cs.umd.edu/class/fall2022/cmsc828W/ In the first part of the talk, I will focus on demystifying the generalization and feature learning capability of modern overparameterized neural networks. Our result is based on an intriguing spiking phenomena for gradient descent, that puts the iterations on a particular trajectory towards solutions that are not only globally optimal but...
Deep Learning Foundations: Jonathan Frankle talk on Faster Neural Network Training, Algorithmically
1.1K views · 2 years ago
Course webpage: www.cs.umd.edu/class/fall2022/cmsc828W/ Abstract: Training modern neural networks is time-consuming, expensive, and energy-intensive. As neural network architectures double in size every few months, it is difficult for researchers and businesses without immense budgets to keep up. In this talk, I will describe one approach for managing this challenge: changing the training algor...
Deep Learning Foundations: Daniel Roy and Mufan Li's Talk on "Neural Covariance SDE"
664 views · 2 years ago
Deep Learning Foundations: Balaji Lakshminarayanan's Talk on Reliability via Pretrained Large Models
508 views · 2 years ago
Deep Learning Foundations: Simon Du's Talk on Passive and Active Multi-Task Representation Learning
895 views · 2 years ago
Deep Learning Foundations: Andrew Wilson's Talk on How Do We Build Models That Learn and Generalize?
2.5K views · 2 years ago
Deep Learning Foundations: Misha Belkin's Talk on deep learning through the prism of interpolation
2.2K views · 2 years ago
Deep Learning Foundations by Soheil Feizi : Course Summary
1.4K views · 4 years ago
Lecture 29 - Deep Learning Foundations by Soheil Feizi : Reinforcement Learning (Part III)
915 views · 4 years ago
Lecture 28 - Deep Learning Foundations by Soheil Feizi : Reinforcement Learning (Part II)
896 views · 4 years ago
Lecture 27 - Deep Learning Foundations by Soheil Feizi : Reinforcement Learning (Part I)
1.3K views · 4 years ago
Lecture 25 - Deep Learning Foundations, Guest Lecture by Aya Ismail: Deep Learning Interpretations
1.1K views · 4 years ago
Lecture 24 - Deep Learning Foundations, Guest Lecture by Aya Ismail: RNNs, LSTMs and Transformers
1.5K views · 4 years ago
Lecture 23 - Deep Learning Foundations by Soheil Feizi : Meta Learning
2.1K views · 4 years ago
Lecture 22 - Deep Learning Foundations by Soheil Feizi : Self-Supervised Learning (Part II)
1.5K views · 4 years ago
Lecture 21 - Deep Learning Foundations by Soheil Feizi : Self-Supervised/Contrastive Learning
3.4K views · 4 years ago
Lecture 20 - Deep Learning Foundations by Soheil Feizi : Domain Generalization
3.8K views · 4 years ago

Comments

  • @SphereofTime · 7 days ago

    1:00 awesome

  • @arashlagzian94 · 12 days ago

    Thank you so much for this fantastic course! I really enjoyed the way you explained everything; it felt clear and approachable. I've learned a lot, and I just wanted to let you know how much I appreciate the time and effort you put into making this. Looking forward to more videos from you!

  • @RachitVerma-f2k · 2 months ago

    These lectures of yours are absolutely gold. Thanks a lot!

  • @ruijiaxu4035 · 3 months ago

    I really enjoy your teaching.

  • @ruijiaxu4035 · 3 months ago

    thanks a lot for your sharing.❤

  • @AishKhan-le7xq · 4 months ago

    What is true distribution?

  • @AishKhan-le7xq · 4 months ago

    Thank you, Sir.

  • @freerockneverdrop1236 · 5 months ago

    The formula for the neural network in this video should be a two-level summation instead of one level.

  • @KittyCat-lp3zy · 5 months ago

    Long live Azerbaijan ❤

  • @MonkkSoori · 7 months ago

    At 20:20 why does Phi(Q_i) not cancel out in the numerator and denominator?

  • @janesun9008 · 7 months ago

    Thank you for sharing this lecture, prof. Great quality and easy to understand!

  • @NavaAbdolalipour · 7 months ago

    I attended Ghalamchi classes in Ardabil with you; after all these years I came across your name in nearby countries. I'm happy to see the success of my former classmates.

  • @simaranjbari · 8 months ago

    Your explanation was very nice and easy to understand. Thank you!

  • @fierydino9402 · 8 months ago

    Wonderful lecture!! Thank you for sharing

  • @PradeepKumar-zy6cd · 9 months ago

    Can you please share the slides?

  • @prabhavkaula9697 · 9 months ago

    Thank you for the lecture! ☺️

  • @ax5344 · 9 months ago

    @1:58:57 You said you would explain the different procedures to generate different responses later. I did not find it before you started discussing Step 3. Could you illustrate further?

    • @ax5344 · 9 months ago

      @2:15:30 Found it. Thanks!

  • @ax5344 · 9 months ago

    When you upload the video, could you set the speed to 1.5? Right now, I'm setting it to 2X, it is still very very slow.

  • @sabujchattopadhyay · 9 months ago

    Can you please share the slides?

  • @miquelnogueralonso2576 · 9 months ago

    Can you please share the slides?

  • @bardiasafaei457 · 9 months ago

    Thank you Soheil for the great content and the clear way of explanation! Could you also share the final written notes of each session for download?

  • @ai__76 · 9 months ago

    Massive lesson! Thanks

  • @amiltonwong · 9 months ago

    Thanks a lot for providing such an excellent lecture. Would it be possible to release the notes for study? Thanks~

  • @parhamsalar3826 · 9 months ago

    Many thanks for your excellent lectures, particularly those on diffusion models. I do have a few inquiries regarding models of conditional diffusion. Can we think of text vectors as the query (Q) and image vectors as the key (K) and value (V) in cross-attention instead of image vectors as the query (Q)?

  • @parisaemkani5730 · 9 months ago

    Hi, could you please recommend a good course on the basics of machine learning and deep learning for beginners?

  • @Stealph_Delta_3003 · 10 months ago

    Thanks for sharing.

  • @sdiabr6792 · 10 months ago

    Real quality content

  • @mozhganmomtaz8169 · 10 months ago

    I just want to thank you 🤗

  • @INSTIG8R · 10 months ago

    This here is the best video on Swin transformers

  • @MrNoipe · 10 months ago

    The handwriting is difficult to read, maybe write slower or with a different brush?

  • @naeemkhoshnevis · 10 months ago

    Thanks for uploading these lectures.

  • @miladkhademinori2709 · 10 months ago

  • @naeemkhoshnevis · 10 months ago

    Thanks for uploading the lectures.

  • @Nerraruzi · 10 months ago

    Thanks so much for sharing this updated version of the course!!

  • @shayanmohammadizadeh172 · 10 months ago

    It's minute 30 of the video and I have already watched 8+ ads. Really, attention is all we need!

    • @Umar-Ateeq · 8 months ago

      you can use "adblock for youtube" extension to avoid ads.

  • @junqi7050 · 10 months ago

    Thanks, Soheil, for sharing the updated deep learning theory courses. I followed Soheil's earlier lectures in 2020, where I learned the theoretical foundations of deep learning in terms of representation, generalization, and optimization. I found that Soheil's course schedule this year has shifted substantially toward state-of-the-art transformer-based technologies, such as large language models. I plan to catch up with the updated deep learning foundations course this year and really appreciate the new lecture videos.

  • @mohammadshahbazhussain2029 · 10 months ago

    Thank you for sharing it

  • @BowenXie-b7b · 10 months ago

    Hi professor, I was also wondering if you plan to add some content related to distanglement learning, like nonlinear ICA, which I think is theoretically very interesting and important.

    • @BowenXie-b7b · 10 months ago

      Sorry, there's a typo. It should be 'disentanglement'.

  • @BowenXie-b7b · 10 months ago

    Thanks for updating this really amazing course. I've read this semester's syllabus and find it really interesting, especially the parts on generative models and multi-modal models. I hope to see more of the latest course videos. Thanks a lot for your effort in sharing the contents of this amazing course.

  • @hesamce · 10 months ago

    Thank you for sharing the updated version of the course🙏

  • @AyushSharma-ie7tj · 1 year ago

    Really nice lecture with a very even pace. Thank you for sharing.

  • @StratosFair · 1 year ago

    Great lecture. Thank you for sharing

  • @hamedgholami261 · 1 year ago

    explanation of: "Loss landscapes and optimization in over-parameterized non-linear systems and neural networks"

  • @mojtabakolahdoozi2418 · 1 year ago

    Great lecture on largely overlooked ground! Thanks

  • @quanguyenang1615 · 1 year ago

    Thanks for the great lectures, Prof. Soheil.

  • @sylus121 · 1 year ago

    25:00 (Bookmark)

  • @Thaumast · 1 year ago

    24:18 The loss function is sometimes written as L and sometimes as the calligraphic L; are they the same? Thank you very much!

  • @bryanbocao4906 · 1 year ago

    42:30 one option could be KL divergence loss?

  • @mskang009 · 1 year ago

    One of the best lectures on self-supervised learning I've seen on YouTube. So many thanks!

  • @sinaasadiyan · 2 years ago

    Great explanation, just subscribed!