Reinforcement Learning with sparse rewards

  • Published Nov 3, 2024

Comments • 91

  • @AnkitBindal97 6 years ago +37

    Your teaching style is incredible! Can you please do a video on Capsule Networks?

  • @DeaMikan 3 years ago +2

    Seriously great, I'd love to see an updated video with the newest research!

  • @Frankthegravelrider 5 years ago +1

    Ah dude, just discovered your videos!! Just what I needed. Can't believe I have a 6-year degree in engineering, work in AI, and can still learn from YouTube. Mad when you think about it. It's a new paradigm of education.

    • @ArxivInsights 5 years ago

      Haha, glad to hear that! You're welcome :)

  • @michaelc2406 6 years ago +4

    I've just been reading these papers for the openai retro competition. Your video went into a lot of depth, which is really hard to do with complex ideas, bravo!

  • @timonix2 3 years ago

    Holy shit. I have been working on this problem for months, and seeing that professionals are getting almost exactly the same answers as me is pretty cool. There are a whole bunch of ideas in here I have not tried yet as well. Super useful.

  • @Jabrils 6 years ago +21

    fantastic content lad!

  • @glorytoarstotzka330 6 years ago +27

    No clickbait, good video quality, good sound, relatively nice topics for some people, but 16k subs? Excuse me, wtf.

    • @sebastianjost 4 years ago

      The video quality is great, but the topics are just not interesting for many people. And of course, having few subs makes it hard to find this channel.
      I'm glad I did, though. This is a great overview.

  • @armorsmith43 4 years ago

    This is a very effective strategy for personal productivity as a programmer with ADHD.
    I augment my unreliable reward-signaling system with Test-Driven Development.

  • @henning256yt 3 years ago

    Love your passion for what you are talking about!

  • @minos99 3 years ago +1

    I was really touched by the ending of the video. We need research on models and the social-economic consequences of the AI models...and I don't mean that terminator, Butlerian jihad crap. I mean human side: job losses, bias, morality, misuse...etc

  • @thomasbao4477 4 years ago +1

    AMAZING! The prediction-reward algorithm in the first mentioned paper is very similar to how humans learn, at least based on a computational neurobiology course I took in college.

  • @pasdavoine 6 years ago +1

    Fantastic video! Making me gain time and in an enjoyable way.
    Many thanks

  • @adrienforbu5165 3 years ago +1

    It's always interesting to see how ideas around curiosity have taken off in reinforcement learning (I'm thinking of the "Never Give Up" paper and Atari57).

  • @Matthew8473 9 months ago

    This is a marvel. I read a book with similar content, and it was a marvel to behold. "The Art of Saying No: Mastering Boundaries for a Fulfilling Life" by Samuel Dawn

  • @robosergTV 6 years ago +13

    need more deep RL stuff ^^

  • @dzima-create 1 year ago

    Man, it is such a damn interesting and good video. I come from a completely different area: game development. And I wanted to understand some basics of AI because I really want to dive deep into this and eventually teach, for example, a rocket to fly, Flappy Bird to jump, or a snake to play efficiently.
    Reading papers is really difficult without knowledge of some basics, and the way you explained all these things is so good. I still don't understand the terminology and all these formulas, but at least I got one step closer :)
    Thank you for this brilliant video :)

  • @mohammadhatoum 6 years ago

    Always impressive, and I never get bored watching your videos. Good job and keep it up 👍

  • @Leibniz_28 5 years ago

    Really happy to find your channel, really sad to find so few videos on it.

  • @lukaslorenc4816 5 years ago +2

    I recommend reading "Curiosity-driven Exploration by Self-supervised Prediction";
    it's a really awesome paper.

  • @satyaprakashdash8203 5 years ago

    I would like to see a video on meta reinforcement learning. It's an exciting field now!

  • @DjChronokun 6 years ago +5

    If it wasn't for this channel I'd never have known it isn't pronounced 'ark-ziv'.

    • @ritajitdey7567 6 years ago +2

      Same here, at least we got it corrected without embarrassing ourselves IRL

    • @wahabfiles6260 4 years ago

      @@ritajitdey7567 INR

  • @ItalianPizza64 6 years ago

    Amazing video again! Clear and concise as always, which is all but trivial with these kinds of topics.
    I am very curious to see what you will be focusing on next!

  • @andrestorres2836 5 years ago +3

    Your videos are awesome!! I'm going to tell all my friends about you.

  • @TheAcujlGamer 3 years ago

    This is so cool, especially the "HER" method. Wow!

  • @miriamramstudio3982 4 years ago

    Excellent video! Thx.

  • @saikat93ify 6 years ago +1

    This channel is a really amazing initiative, as I've always found ArXiv extremely interesting but don't have the time to read all the papers. :)
    This question may sound very silly, but: how do programs play games like Mario and Reversi? What I mean is, don't we need some kind of hardware like a keyboard or joystick to play these games? How do software agents play them?
    I have always been curious about this. Please clear this up if anyone has the answer. :)

    • @ArxivInsights 6 years ago

      It's not that hard to hack the game engine so that an RL agent controls the game inputs via an API (so you can do that from e.g. Python) instead of via a controller/joystick. In most gym games there's even an option to train your agent from the raw game state instead of the rendered pixel version!
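[Editor's note] The API-driven control described in this reply can be sketched with a toy stand-in. `ToyGame` below is entirely made up for illustration; real setups use e.g. the Gym API, which has the same reset()/step() shape:

```python
# An agent drives a "game" through plain function calls, no joystick needed.
# ToyGame is a hypothetical stand-in for a hacked game engine exposing its
# raw state via an API, in the style of OpenAI Gym environments.
import random

class ToyGame:
    """A 1-D game: walk from position 0 until you hit -5 or +5; +5 wins."""

    def reset(self):
        self.pos = 0
        return self.pos  # raw game state, not rendered pixels

    def step(self, action):
        # action is -1 (left) or +1 (right), sent as data instead of a keypress
        self.pos += action
        done = abs(self.pos) >= 5
        reward = 1.0 if self.pos >= 5 else 0.0
        return self.pos, reward, done

env = ToyGame()
state = env.reset()
done = False
while not done:
    action = random.choice([-1, 1])  # a trained policy would choose here
    state, reward, done = env.step(action)

print("episode finished at position", state)
```

The whole episode runs from Python with no controller in the loop, which is exactly what makes large-scale RL training practical.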

  • @emademad4 6 years ago +1

    Great content, great purpose. Please do more videos ASAP. I'm studying in the same field; would you suggest some links to good, up-to-date articles?

  • @LatinDanceVideos 6 years ago

    Great channel. Thanks for this and other videos.

  • @cyrilfurtado 6 years ago

    Great video, I can now look forward to reading the papers. It would be great to post links to the papers here.

    • @ArxivInsights 6 years ago

      All links are in the video description! :)

  • @aliamiri4524 4 years ago

    Amazing content, you are a very good teacher.

  • @inspiredbynature8970 2 years ago

    you are doing great, keep it up

  • @nikoskostagiolas 6 years ago +1

    Hey dude, awesome video as always! Could you do one for the Relational Deep Reinforcement Learning paper of Zambaldi et al. ?

  • @adityaojha627 4 years ago

    Nice video. Question: Is DDQN efficient at solving sparse reward environments? Say I only give an agent a reward at the end of an episode.

  • @vadrif-draco 1 year ago

    So "HER" basically starts off as "if I do this action, I can get to this goal", and then gradually learns how to flip the statement to "if I want to get to this goal, I need to do this action". Pretty nice.
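[Editor's note] The "flip" this comment describes is HER's goal-relabeling trick. A minimal sketch, assuming a hypothetical transition format of (state, action, next_state, goal) and treating the state actually reached as the virtual goal:

```python
# Hindsight Experience Replay relabeling, sketched on toy data.
# After a failed episode, pretend the state we actually reached WAS the goal,
# so the otherwise-reward-free trajectory still yields a learning signal.

def her_relabel(trajectory):
    """trajectory: list of (state, action, next_state, original_goal).
    Returns transitions relabeled with the achieved state as the goal,
    plus the sparse reward recomputed against that virtual goal."""
    achieved = trajectory[-1][2]  # the last next_state: what we achieved
    relabeled = []
    for state, action, next_state, _orig_goal in trajectory:
        reward = 1.0 if next_state == achieved else 0.0
        relabeled.append((state, action, next_state, achieved, reward))
    return relabeled

# A failed 2-step episode aiming for goal "G" that only reached "C":
episode = [("A", "right", "B", "G"),
           ("B", "right", "C", "G")]
for transition in her_relabel(episode):
    print(transition)
```

With the original goal "G" every reward would have been 0; after relabeling, the final transition earns a reward for reaching "C", teaching exactly the "if I want this goal, do this action" direction the comment describes.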

  • @mashpysays 6 years ago

    Thanks for the nice explanation.

  • @420_gunna 6 years ago +7

    Great vid! re: the ending of the video, what do you think about creating something on AI safety or ethics?

    • @ArxivInsights 6 years ago +8

      Actually, that's a really good suggestion! Added to my pipeline :)

    • @AnonymousAnonymous-ht4cm 5 years ago +1

      Have you seen Robert Miles' channel? He has some good stuff on AI safety, but posts rather infrequently.

  • @CalvinJKu 6 years ago

    Awesome video as usual!

  • @ianprado1488 6 years ago

    You make high quality videos A+

  • @bonob0123 5 years ago

    great stuff well done man

  • @aytunch 5 years ago

    Great videos and channel. Why don't you make any more videos? :(

  • @QNZE5 6 years ago

    Hey, very nice video :)
    What is the source for that video containing the boat in a behavioural circuit?

  • @ThibaultNeveu 6 years ago

    Very nice video. Thank you :)

  • @hassanbelarbi5185 5 years ago

    If someone wants to contact you directly, is there any way? I have some questions related to my thesis topic. Thanks in advance for your efforts.

  • @herrizaax 5 years ago +1

    Nice video.
    I didn't get the last part: how does it learn faster if it sets virtual goals? If it gets the same reward for a virtual goal as for the real goal, then it will just learn that it can shoot at any point which is made a goal, but the real goal will never be found. If it gets a lower reward, then it learns that shooting at goals gives a reward, but that tells nothing about the proximity to the real goal. I'm obviously missing something here and I'm really curious what it is. Thank you :)

  • @sunegocioexitoso 6 years ago

    Awesome video

  • @Vladeeer 6 years ago

    Can you do an example for RL?

  • @samanthaqiu3416 5 years ago

    Make a video on the MuZero paper

  • @arjunbemarkar7414 5 years ago +2

    Can you tell me where you find these articles?

    • @areallyboredindividual8766 3 years ago

      Website appears to be Arxiv. Searching for DeepMind and OpenAI papers will yield results too

  • @mountain_bouy 5 years ago

    you are amazing

  • @artman40 6 years ago +1

    What about delayed rewards?

  • @jeffreylim5920 5 years ago

    7:56 where the main point starts

  • @ycjoelin000 6 years ago

    What's the website you used in 2:23?

  • @vigneshamudha821 6 years ago

    Brother, please explain capsule networks.

    • @ArxivInsights 6 years ago

      Aurelien Geron has a great video on CapsNets, no need to redo his video, it's already perfect! th-cam.com/video/pPN8d0E3900/w-d-xo.html

    • @vigneshamudha821 6 years ago

      +Arxiv Insights thanks bro

  • @codyheiner3636 6 years ago

    Hi Xander, I made a Patreon account just for you! Keep it up!

    • @ArxivInsights 6 years ago +1

      Thx a lot Cody!! Getting this kind of support from people I've never met is such a great motivation to keep going! Many thanks :)

  • @StevenSmith68828 5 years ago +1

    I really like machine learning because it feels like training a Pokémon. Sure, it sometimes takes a very long time to get set up, but yeah...

  • @viralblog007 6 years ago

    Can you suggest a link to a research paper on reinforcement learning?

  • @dripdrops3310 6 years ago

    The number of views of your videos is not proportional to their quality. Looking forward to new content!

  • @MasterofPlay7 4 years ago

    any coding videos?

  • @shivajbd 5 years ago +1

    15:29 Modi

  • @DistortedV12 6 years ago

    Smart guy

  • @WerexZenok 6 years ago

    I don't see any social problems automation can cause.
    If you leave the market free, it will adjust itself as it always has.

    • @egparker5 6 years ago

      I sort of feel the same way. We shouldn't make any public policy decisions until we see actual damage happening, and not just overexcited predictions. So far it seems DL/ML is creating net additional jobs and increasing average salaries. If that changes, then maybe it is time to think about new public policies. In the meantime, I would recommend retargeting the time spent worrying about AI into time spent learning about AI to increase your human capital.
      www.wsj.com/articles/workers-fear-not-the-robot-apocalypse-1504631505
      www.forbes.com/sites/bernardmarr/2017/10/12/instead-of-destroying-jobs-artificial-intelligence-ai-is-creating-new-jobs-in-4-out-of-5-companies

    • @WerexZenok 6 years ago

      Agreed.
      And even imagining the worst scenario, where AI replaces all jobs, we will still be capable of owning bots and renting them out.
      We will live like gods on earth.

    • @NegatioNZor 6 years ago +1

      The question here, though, is WHO will be owning these robots, and how will these jobs be distributed? For highly educated and resourceful people, this will probably not be a huge issue. But there are something like N million truck drivers in the US, who will have a much harder time adjusting. Going from blue-collar to white-collar is probably not as easy.

  • @hesohit 4 years ago

    I just want my computer to grind levels. Not take my job.

  • @wahabfiles6260 4 years ago

    Why is his head bigger than the body? Alien?

  • @planktonfun1 5 years ago

    big brain filter

  • @MD-pg1fh 6 years ago +2

    Her?

  • @Rowing-li6jt 5 years ago

    louder pls

  • @creativeuser9086 1 year ago

    what happened to this channel..

  • @tsunamio7750 4 years ago

    VOLUME TOO LOW!!!

  • @loopuleasa 6 years ago

    Some feedback on your video: trim your content and be more entertaining.
    Watch how Siraj does it.
    From my point of view, I dozed off a couple of times, even though the accuracy of the content is high.
    Basically, use fewer words, fewer images, less intro, less buildup, and focus more on the crux, while going faster to keep your audience on edge and curious.
    Hope my view is productive for you. Good luck.

    • @loopuleasa 6 years ago

      Do it like an AI optimizer does it. Minimize and use simplicity as much as possible until you reach the goal: Communicate the idea you want to convey, in as little time and actions as possible.