HUMAN Level AI is HERE!

  • Published Dec 27, 2024

Comments • 37

  • @johnnychromatic • months ago • +10

    In the context of automated vehicles, one could train one's vehicle how to park in one's driveway and save that method for future use, and perhaps even upload to the mothership to be shared with other vehicles that may visit the same location. Tesla's FSD currently does a really shitty job of parking in my driveway.

    • @BigBen621 • months ago

      _Tesla's FSD currently does a really shitty job of parking in my driveway._
      The horror...

  • @roger_is_red • months ago • +1

    I found listening to Mozart while working on plumbing helps my mood. Jeannine

    • @clavo3352 • months ago

      That's some great Zen. I'm partial to Carly Simon myself.

  • @jackcoats4146 • months ago • +3

    Need to find a way to 'add' the saved TTT weights back into the base understanding, maybe several at a time. I have no clue how to do that, but it would allow adding skills into the base without retraining from scratch, then adding more TTT on top for tasks that should become part of the base. I know, I'm probably wrong, but you got me dreaming/going on it!
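
    A minimal sketch of that "back-add" idea, assuming the saved test-time update is stored as a LoRA-style low-rank pair (all names here are hypothetical): folding it into the frozen base matrix is just adding the low-rank product back in.

    ```python
    import torch

    def merge_ttt_into_base(W: torch.Tensor, A: torch.Tensor, B: torch.Tensor,
                            alpha: float = 1.0) -> torch.Tensor:
        """Fold a low-rank test-time update into the base weight: W' = W + alpha * (B @ A)."""
        # W: (out, in) frozen base matrix; A: (r, in), B: (out, r) learned during TTT
        return W + alpha * (B @ A)

    # Toy example: a 512x512 layer absorbing a rank-8 test-time update.
    W = torch.randn(512, 512)
    A = torch.randn(8, 512) * 0.01
    B = torch.randn(512, 8) * 0.01
    W = merge_ttt_into_base(W, A, B, alpha=0.5)  # the skill is now part of the base
    ```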

  • @IntoTheFray.58 • months ago

    I like the cache idea. Imagine having an online library of skills available on demand to load into a cache on your Optimus. This would enable true reconfiguring on the fly for new tasks and would make Optimus or other inference engines much more widely useful.
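
    A rough sketch of what such an on-demand skill cache could look like, in Python; the class, paths, and eviction policy are all hypothetical illustrations:

    ```python
    from pathlib import Path
    import torch

    class SkillCache:
        """Hypothetical on-demand cache of task-specific LoRA adapters."""

        def __init__(self, library_dir: str, max_loaded: int = 4):
            self.library = Path(library_dir)    # local mirror of the online skill library
            self.max_loaded = max_loaded
            self.loaded: dict[str, dict] = {}   # skill name -> adapter state_dict

        def get(self, skill: str) -> dict:
            if skill not in self.loaded:
                if len(self.loaded) >= self.max_loaded:
                    self.loaded.pop(next(iter(self.loaded)))  # evict the oldest entry
                self.loaded[skill] = torch.load(self.library / f"{skill}.pt")
            return self.loaded[skill]

    # cache = SkillCache("/opt/optimus/skills")
    # adapter = cache.get("load_dishwasher")    # then attach it to the policy model
    ```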

  • @czarkbrooks • months ago • +2

    WOW! I had to solve exactly that ARC problem to support VLSI design rules a few years ago. The difference between my code and the broken code it replaced was that mine was an order of magnitude shorter :-) BTW, self-intersecting shapes make it even more "interesting" (No neural nets required)

  • @ellipsisfan • months ago

    I also like the idea of a LoRA cache. Some modest-resource-demand monitoring would be needed to recognize the appropriateness of a particular cache for a particular circumstance. That kind of “Aha! This is akin to my previous kitchen sink problem” realization resonates with the “activation schemata” of cognitive psychology.
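
    That "akin to my previous kitchen sink problem" recognition could be approximated with cheap embedding similarity over keys stored alongside each cached adapter; a minimal sketch, with the embedding source left abstract and every name hypothetical:

    ```python
    import numpy as np

    def pick_adapter(situation: np.ndarray, adapter_keys: dict[str, np.ndarray],
                     threshold: float = 0.8) -> str | None:
        """Return the cached adapter whose stored 'schema' embedding best matches
        the current situation, or None if nothing is similar enough."""
        best_name, best_sim = None, -1.0
        for name, key in adapter_keys.items():
            sim = float(key @ situation) / (np.linalg.norm(key) * np.linalg.norm(situation))
            if sim > best_sim:
                best_name, best_sim = name, sim
        return best_name if best_sim >= threshold else None
    ```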

  • @afjerry1 • months ago

    I love your taking these AI strategies and breaking them down so us common Volkswagen can understand.

  • @Thedeepseanomad • months ago

    The surprising kickassness of good LoRAs...

  • @antoniobortoni • months ago

    AI isn't infinite; it works step by step, processing input-output data, logic, and calculations. But imagine training AI to function with a loop system, incorporating awareness, memory, and external tools like calculators or databases. This approach could create more versatile software that doesn't just answer the first thing it encounters but adapts based on tasks or context.
    For example, a trained AI could have "awareness files": predefined behaviors or goals activated by specific keys. One file might simulate the mindset of a man, another of a woman, each trained with distinct attitudes, rules, and objectives. The same AI could perform multiple tasks by switching between these files, all while retaining its core functionality.
    This concept extends to robotics: train a multimodal AI to process sensor inputs (vision, touch, etc.) and control motor outputs. With the right training, even a small AI could operate a robot that cleans dishes or makes sandwiches, controllable from a phone. General AI doesn't have to be distant; it's about layering awareness, adaptability, and specific skills into a unified system.
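
    One way to read the "awareness files" idea is as swappable profile configs driving a simple tool-using loop. A minimal sketch, with the LLM call, profile format, and tool set all hypothetical stand-ins:

    ```python
    import json

    def run_agent(task: str, profile_path: str, llm, tools: dict, max_steps: int = 10):
        """Loop an LLM over a task with a swappable 'awareness file' and external tools."""
        profile = json.load(open(profile_path))   # e.g. {"persona": "...", "rules": [...]}
        memory: list[str] = []
        for _ in range(max_steps):
            prompt = (f"{profile['persona']}\nRules: {profile['rules']}\n"
                      f"Recent memory: {memory[-5:]}\nTask: {task}\nNext action?")
            action = llm(prompt)                  # hypothetical model call
            if action.startswith("DONE"):
                return action
            if action.startswith("TOOL:"):        # e.g. "TOOL:calculator 2+2"
                name, arg = action[5:].split(" ", 1)
                memory.append(f"{name} -> {tools[name](arg)}")
            else:
                memory.append(action)
        return "max steps reached"
    ```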

  • @twirlyspitzer • months ago

    AGI will never be invented but will rather emerge, which is what it seems to be doing, at least partially, right now.

  • @stewartciesla8142 • months ago • +1

    Very informative video. 👍

  • @clavo3352 • months ago

    Really hope to play Scrabble with a variety of AI assembly bot tools.

  • @GreylanderTV • months ago

    What nobody seems to get is this: _intelligence_ is what happens at training time. The model architecture & training algorithm _are_ the intelligence of an AI system. What happens at inference time (in most systems) is just a regurgitation of what has already been learned/solved, the generalizations already made. Understanding this, we realize AGI has in effect already been achieved: broadly speaking, NNs & their training algorithms are generalized problem solvers.
    This technique, by applying an abbreviated version of the training process at _inference time,_ is actually a first step toward bringing the AGI that already existed into the inference process.
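
    For concreteness, an "abbreviated version of the training process at inference time" amounts to taking a few gradient steps on the test task's own demonstration pairs before predicting. A minimal PyTorch sketch of the general idea (not the exact recipe from the video):

    ```python
    import copy
    import torch

    def test_time_train(model, demo_pairs, lr=1e-4, steps=20):
        """Clone the model and take a few gradient steps on the task's own
        demonstration pairs before predicting on the held-out test input."""
        tuned = copy.deepcopy(model)              # leave the base model untouched
        opt = torch.optim.AdamW(tuned.parameters(), lr=lr)
        for _ in range(steps):
            for x, y in demo_pairs:               # e.g. ARC (input, output) grid pairs
                opt.zero_grad()
                loss = torch.nn.functional.cross_entropy(tuned(x), y)
                loss.backward()
                opt.step()
        return tuned                              # use for this one task, then discard
    ```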

  • @slwiser1 • months ago • +3

    Training to the test is not the same as general learning.

    • @nyanbrox5418 • months ago

      No, it's *specific* learning, and one of the steps to generality is solving specifics.
      Remember, this was a problem that "no LLM could ever solve" a year ago. The problem with LLMs is that there are specific things they can't do that people want them for, not that they can't do literally everything.
      Most tasks are specific, not general. A model you can customise to master just one thing is far more useful than one you can't train to do anything at all.

    • @Thedeepseanomad • months ago • +1

      If you need a neurologist, you take the general model, finetune it into a medical model, and slap on a neurology-specialization LoRA.
      Need it to do structural engineering? Same procedure. So you can have a model, or a collection of models working in concert, loading the right finetune and the right LoRA for the task.
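
      Stacking a domain finetune plus a narrow specialization adapter is roughly what Hugging Face's peft adapter API already supports; a sketch, with all model paths hypothetical:

      ```python
      from transformers import AutoModelForCausalLM
      from peft import PeftModel

      base = AutoModelForCausalLM.from_pretrained("general-base-model")  # hypothetical id
      model = PeftModel.from_pretrained(base, "medical-finetune-lora")   # domain layer
      model.load_adapter("neurology-lora", adapter_name="neurology")     # specialization
      model.set_adapter("neurology")                                     # activate for the task
      ```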

    • @nyanbrox5418 • months ago

      @Thedeepseanomad I've seen this kind of thing for image generation, but this is the first I've heard of it for more general procedures.
      IRL, we have specialists who spend years developing experience in one specific kind of task, so it makes sense from a human perspective to have AIs specialised for specific tasks.

  • @RobertDickert • months ago

    Excellent summary; this really adds nuance to the high-level reporting I’ve seen elsewhere. I didn’t really understand that this is kind of a hack, not “real” abstraction, so in some ways it's less of a breakthrough than it sounds like, but still a real advancement. This is going to make inference mighty expensive, and maybe it won’t be practical until we have better silicon, but maybe ASICs could make something like this viable.

  • @MashDaddy • months ago

    LoRA tunes an LLM for a specific task; Reasoning Tokens are more efficient at taking an LLM through reasoning steps to fit a specific task.

  • @henrismith7472 • 19 days ago

    Honestly, with breakthrough-speed inference chips I don't think this new test-time scaling paradigm will cause models to become slow. Apparently o1 pro takes half the time to answer that o1 preview did, yet gets better results, and it's going to be so easy to speed up inference by orders of magnitude over the next couple of years.

  • @paulmuriithi7596 • months ago

    $30 million worth of Cerebras wafer-based inferencing would make this approach a viable pathway towards robust multi-agent systems for real-world tasks. AI agents really need this to offer small business owners real benefits.

  • @DarylOster • months ago

    This would be good for FSD in seldom-encountered conditions faced by only a few vehicles (for instance, driving through fire); then, if the rare element (fire) is encountered, it could quickly load a more appropriate response (like not stopping in the middle of the fire)...

  • @BlissedOut • months ago • +2

    Humans have varied levels of intelligence. We walk, and we don't really use our intelligence for that. We ride bicycles, and we don't really use intelligence for that. We drive, and we don't really use intelligence for that; we call those skills. Computers have simulated human skills since the abacus, making calculations faster and storing results. To claim intelligence of any sort has arrived because we digitized another skill? Really? I feel at least a little insulted when someone claims a computer/software has reached a human level of intelligence. Maybe it has surpassed yours?

    Computers have been able to do things faster and better than humans since their invention, or humans would still be doing the tasks we have delegated to machines. All we have created are machines that can almost simulate very rudimentary human skills. Intelligence looks at a cartoon depiction of a cat once and recognizes cats with 98% accuracy after that. AI looks at 20 billion photographic-quality images of cats and still gets it wrong sometimes, and everybody knows it should learn the skill eventually.

    Very few humans wake up in the morning and look at their computers to see if they came up with something intelligent or original during the night. This is never going to happen; it is going to continue regurgitating the rubbish it found on the internet or has been fed in other ways. The most rudimentary forms of life can navigate the world, often with much better skill than humans. Maybe AI gets as good as flies at navigation and agility. If we hold the misconception that navigating the forest of time and space is an indicator of intelligence, we also have to concede that flies' intelligence exceeds ours (an insight that did not originate with AI).

  • @via_kole • months ago

    To add on at about the 1:00 mark: imagine if everyone's AI (that has TTT implemented) could be personalized. Like, it will self-train on a specific task, and if you know you will use it for that task a lot, you can tell it to remember it. But there's probably got to be a cap on that.
    So each person would have their own personal AIs that are good for what they need them for.

    • @via_kole • months ago

      Also, what if we had LoRA "chips" that we could put into our robots to give them special training for certain tasks? Or LoRA models for LLMs for certain applications? I know image models already do this and I'm sure LLMs do too, but it's cool to think about xD

  • @RonLWilson • months ago

    On the road to AGI, I am not sure if this is a detour or a shortcut, but I have been exploring other options (perhaps) for doing that which make more use of techniques other than just NNs and such.
    I just made and uploaded a video to my YouTube channel that sets off on that journey; whether it dead-ends, fizzles out, or ends up going someplace useful is at present not so clear, but it might be good food for thought regardless.

  • @jackcoats4146 • months ago

    It seems like the new weights could be kept and run through TTT for a new task, extending the base knowledge and giving it a new, updated base to learn from on the next tasks. Or am I seeing something wrong?

  • @QuwanMarkos-e5q • months ago

    I am already working on my own AI Agent program to make this happen.

  • @ddmitch1 • months ago • +2

    It would be neat if we could verbally connect with FSD to input our thoughts on improvements needed by FSD, especially after disengagements. Can FSD potentially listen?

    • @adampenny1406 • months ago

      You’d just get waffle from the driver.

  • @ddmitch1 • months ago • +7

    I'm wondering why the other legacy car companies don't just buy a part of Tesla, by investing billions in TSLA, instead of wasting billions trying to build their own EVs that are not profitable. If they invested $50 billion in TSLA stock (approximately 5% of Tesla's current valuation), they would make a lot more money than by wasting $50 billion trying to build EV batteries, better software, better EV designs, safer EVs, 48-volt tech, FSD, steer-by-wire tech, autonomous EVs, etc. That may be the only way for the non-Tesla legacy automakers to avoid bankruptcy. 🤣😂😁😋

    • @IQBooks-pub • months ago • +2

      Yep. But then investors would bail out. Why own Ford stock? If their “profits” are coming from owning TSLA, then just own TSLA.

  • @andrasbiro3007 • months ago

    Considering that the human brain has at least exaflop performance, we shouldn't feel bad for it taking so long on a single GPU.

  • @cherubin7th • months ago • +1

    This is kind of old; people called it AutoGPT.

    • @Gnaritas42 • months ago • +2

      Not remotely the same thing, so no; and it's sad you think they are.