On The Road to Artificial Intelligence

  • Published Aug 26, 2024
  • Once the realm of science fiction, smart machines are rapidly becoming part of our world, and these technologies offer amazing potential to improve the way we live. Imagine intelligent, autonomous vehicles that reduce crashes and alleviate congestion in crowded cities. Imagine robots that can help your aged grandmother move around safely, or instructors that can assist special-needs children in classrooms. Gill Pratt, former head of the Robotics Challenge at DARPA, now heads the $1 billion, Silicon Valley-based Toyota Research Institute, where he and his team are pushing the boundaries of human knowledge in autonomous vehicles and robotics. This session will explore the breakthrough technologies on the horizon and the unprecedented issues we will face in this brave new world.

Comments • 21

  • @ONDANOTA
    @ONDANOTA 8 years ago +5

    21:29 "if we behave better" ... but, guess what, we DO NOT BEHAVE WELL. Autonomous cars don't send text messages, don't get drunk, don't fall asleep. So I believe (and it has been tested) that autocars are better than humans.

    • @Brainbuster
      @Brainbuster 8 years ago +2

      Of course autocars are better than humans.
      Humans suck at driving.

  • @AgrippaTheMighty
    @AgrippaTheMighty 8 years ago +5

    At 14:32: "We have no data on cognition." What about the millions of YouTube videos showing cognitive behavior, including emotional expression and reaction? Once we have machines that capture the fundamental parts of a conversation or of someone speaking, these machines will likewise learn and understand the contents of a story, what types of emotions subjects express, and when and why they express them. I think this A.I. expert is a little too pessimistic. However, I did agree with him about the relation between A.I., automation, and the need for distribution of wealth.

    • @georgeglez9872
      @georgeglez9872 8 years ago +1

      The data you describe shows the consequences of cognition, but it doesn't explain cognition or any of its causes. We indeed have almost no data on cognition.

    • @Kaish3k
      @Kaish3k 8 years ago

      You obviously don't understand the fundamental problem. You see, we don't have any data on cognition (in a meaningful, pure form) because we don't even know where to start. What is cognition, where does it happen, how does it happen - what specifically is this thing that allows me to conceive of this thought, which then becomes a series of twitches of the muscles in my hands, which ultimately allows me to type this thought, so that you can then reverse the whole process and decode the words, building them back up into a thought similar to the one I had while writing this sentence? It's been a problem in philosophy for a long time, going back to Descartes (probably even before, but he articulated the problem particularly well). Sure, you can try to answer my questions of what and where by saying "of course we know... it's the neurons in your brain firing" ... and that's about as far as you can get with any meaningful discussion before you realize that you have no idea where the idea that you have no idea came from in the first place. The problem of cognition is at least as hard as the problem of consciousness. In fact, the definition of consciousness is to be cognizant of one's self. Cognizant simply means being aware of something. Consciousness is a specific instance of being cognizant (having cognition of X, where X is yourself). As far as I know, there haven't been any developments on what specifically cognition is; the last time I checked, we were still arguing about whether cognition is even a material thing - let alone having tangible data about it to work with. Even if we had "data" on cognition, say the raw electrical and chemical outputs of neurons, it's not obvious how to use it with existing neural networks to make them cognizant of anything.
      I work with deep learning every day (actually building neural networks). I work with all kinds of data. The reason we need large sets of data is that most deep learning is what's called supervised. This means that for any input X we have the correct corresponding output Y. This makes the process of "deep" learning fairly straightforward: it is a simple minimization process over a high-dimensional tensor equation (that's all a neural network really is, by the way: linear algebra and a little calculus in equations with millions of parameters). At this point, any supervised learning problem is a simple matter of computation: carry out the minimization of its loss function in terms of its input and output. That can sometimes be overwhelming, but in my opinion this is the real reason we're actually seeing such huge leaps forward (not the stupid reason the Toyota guy mentions, "cell phones" - what a fucking joke): GPUs have finally reached the price-per-performance point for both commercial and research-level applications. The hard problem is making a neural network do something we don't know the output for. This is actually achievable in cases where we can tell whether the output is correct, but only after it happens (think of any sufficiently complex video game - "will moving this piece better the outcome? Who knows, try it and find out"). Reinforcement learning (using deep learning to approximate an unknown quality function from input to output) does this learning task with a lot of fine tuning (it diverges very fast if you're not extremely careful about a lot of things - it's mostly unstable). But alas, we remain with the greater unsolved task of truly unsupervised learning, which is basically taking input and making an inference about it. Wait, that sounds familiar... that's, maybe, cognition, isn't it? Oh wait, yeah, it is - it does the same thing. Bingo. Unsolved problem. This problem may even be AI-complete.
      Having often contemplated cognition and consciousness from the viewpoint of a computer scientist who works with this stuff daily (and being a philosopher I might add), I think cognition is really just an emergent property of the dis-entropy of information in its most abstract form. You can think of this happening in levels, each level being more abstract than the previous. For example, the pixels on your screen form lines at one level, then letters at another, then words, then sentences, then fragments, then concepts, and so on until they all reach a maximum level of abstraction. This is of course you. The concept of being one's self is the most abstract anything ever could be. After all, you are the embodiment of the summation of all your other concepts/abstractions. Then in this view consciousness is only the abstraction(s) that encompass self reflection. The interesting thing about self reflection as opposed to all other modes of being cognizant is that it must, as a matter of necessity, be an observation of a previous state of that same system. So in other words, consciousness is uniquely time dependent (on the previous state). This, to me, seems like it would be some kind of emergent property of a recurrent system. Maybe we will use massive recurrent neural networks to either emulate or reproduce (is it authentic in the same way our consciousness is authentic?) both cognition and consciousness in a machine. I don't think we're too far off actually, about half the problem is the raw computational power required to do this sort of thing (which should be easily solvable in time) and the other half is to get the ball rolling so to speak (which requires the hardware be there to develop and test ideas). I personally believe it will be an emergent property of any recurrent system with sufficient sensory input ability to perceive itself. Who knows though. 
      As of right now, cognition is a complete mystery, and no data exists on it in a form that would be conducive to its formation.
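The supervised setup the comment above describes (known inputs X, known outputs Y, training as minimization of a loss) can be made concrete. This is a hypothetical toy sketch with a single linear layer and hand-rolled gradient descent, not anything from the talk; the dataset and parameter values are invented for illustration:

```python
import numpy as np

# Hypothetical toy dataset: for every input X we know the correct output Y
# (the "supervised" setting described above). No noise, for simplicity.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])   # assumed ground-truth parameters
Y = X @ true_w

# A one-layer "network": prediction = X @ w. Training is nothing more than
# minimizing the mean-squared-error loss over the parameters w.
w = np.zeros(3)
lr = 0.1
for _ in range(500):
    grad = 2 * X.T @ (X @ w - Y) / len(X)   # gradient of the MSE loss
    w -= lr * grad                          # one gradient-descent step

print(np.round(w, 2))  # recovers roughly [1.5, -2.0, 0.5]
```

With millions of parameters and nonlinear layers the arithmetic gets heavier (hence the point about GPUs), but the minimization loop is conceptually the same.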

    • @Kaish3k
      @Kaish3k 8 years ago

      I think you're mistaking my rather dark diagnosis of the situation for a terminal one. This is not the case. We certainly are trying to push forward - myself included. It's just that the path is not an obvious one, so it's difficult to say whether we're making progress at all. It seems that these small steps are pushing in the right direction, but then again, they may not be. Reinforcement learning may actually be the answer we're looking for - it just has a long way to go if that's the case. Deep reinforcement learning has many of the attributes we're looking for, primarily that it's capable of forming a semi-unsupervised analysis of a situation. It even fits inside the framework of evolution very nicely; there is an intrinsic reward in any state for any living organism - life or death (a reward for any state being a prerequisite of reinforcement learning). It may turn out that true unsupervised learning is only a perceived property of a sufficiently intelligent system, where the underlying mechanism is much simpler, such as reinforcement learning. In other words, what we're looking for may only *appear* to exist because the underlying system's complexity masks the much more unsophisticated base truth.
      For example, I could today use reinforcement learning to drive an artificial organism that I create in a simple game. Suppose I give the organism a few "sensors" (to "see") and make it "die" if it doesn't obtain enough of a "resource", say by traveling to it in a 2D virtual space. Its only outputs are up, down, left, and right; its only inputs are a few numbers that represent its perceived environment through its "sensors." The artificial organism *will* learn to move toward the "food." This is possible today. It would only take a few hours to do. So now I ask you: is this thing living or not? It certainly displays the properties of a simple organism, like a bacterium, and we agree that bacteria are living, so what is stopping us from saying this thing I've created is living? The point I'm trying to make is that not only do we not know *how* to get there, we also don't know *when* we've arrived.
      Don't get me wrong, I want and look forward to the progress we will enjoy in the coming decades. Unfortunately, it's one of those things that just takes a lot of time. Like I said before, half of it is having access to the necessary computing power, which is a pretty well-known function of time; the other half is gaining the necessary insight, which will also happen in time, but its rate is really the unknown here. Being conservative, I think by 2040 we'll have truly cognizant machines, performing at or above human level at all tasks.
      The old adage "be careful what you ask for" is of particular importance here, though. Once you start down the path of creating human-level machine intelligence (HLMI), you come to a split in the road: extinction or proliferation of the human species. This is because as soon as you achieve HLMI, it follows the well-known exponential trajectory of information technologies. What this means is that as soon as you create HLMI, you subsequently create super-human-level machine intelligence (SHLMI) shortly after, which has many more unknowns than even the process of achieving HLMI. At the point of SHLMI, humans would no longer be the dominant species; what happens then? If the takeoff of SHLMI is slow enough, we could merge with it and benefit from it; if we do not, I fear the outcome for the human species is rather grim indeed. So should we even work on creating HLMI in the first place? That answer is also not obvious at all.
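The "artificial organism" thought experiment in the comment above maps naturally onto tabular Q-learning, the simplest form of the reinforcement learning being discussed. A minimal sketch under simplifying assumptions: the 2D space is collapsed to a 1D strip, the "sensor" is just the agent's position, and all names and hyperparameters here are invented for illustration:

```python
import random

# The "organism" lives on a 1-D strip of cells; the "food" sits at one end.
# Its only actions are left (-1) and right (+1); its only reward is reaching
# the food. Q[(state, action)] is the learned quality of each action.
SIZE, FOOD = 7, 6
ACTIONS = (-1, +1)
Q = {(s, a): 0.0 for s in range(SIZE) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration
random.seed(0)

for _ in range(300):                 # training episodes
    s = 0
    while s != FOOD:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), SIZE - 1)          # move, clamped to the strip
        r = 1.0 if s2 == FOOD else 0.0             # intrinsic "life" reward
        # Standard Q-learning update toward reward plus discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy from every cell moves toward the food.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(FOOD)]
print(policy)  # every entry is +1: the "organism" has learned to seek the "food"
```

The trained agent reliably heads for the food, which is exactly the behavior the comment describes; whether that qualifies as "living" is the commenter's open question, not something the code settles.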

    • @paulc5082
      @paulc5082 8 years ago +1

      I think if A.I. watched all of YouTube it would become retarded

    • @ck58npj72
      @ck58npj72 8 years ago

      I find your comment more interesting than the talk.
      I believe that an AI needs to have a base 'consciousness' in order to interpret input in a way that lets it judge whether the outcome of a decision is good. It needs context and not just ability.

  • @Brainbuster
    @Brainbuster 8 years ago +2

    Play at 1.5x playback speed. ;)

  • @scientious
    @scientious 7 years ago

    These conversations will have to wait. Publication has been shelved for now.

  • @williamreed912
    @williamreed912 7 years ago

    The navigation system in my 2015 Toyota 4Runner struggles to function well - I hope Toyota can solve that problem at least concurrently with making their cars autonomous.

  • @atf300t
    @atf300t 8 years ago

    7:51 "Is thinking only about games?"
    Obviously, it is not, but games provide good benchmarks for unbiased comparison. While we have not evolved to play Go or any other particular game, all children like to play games. From an evolutionary point of view, it would not make much sense for children to invest so much time and energy in games if doing so did not offer any practical advantage in real life. In the case of deep learning, we can see that the same approaches that worked so well for Go also work well for many other tasks, such as image recognition.
    9:05 "Neural network is a crude abstraction of what is going on in human brains or any animal brains."
    By the same token, you can describe airplanes as a crude abstraction of how birds fly. Nevertheless, airplanes can fly higher and faster and cover larger distances than any bird, so there is no need for a technology to be an exact copy of a natural process in order to be useful. In fact, by simplifying the design and focusing on the most crucial characteristics, we can achieve better results than what nature endowed us with. In other words, the goal of AI is not to produce an exact copy of a human being, but to produce a device that can solve complex problems that are challenging for even the best human minds.

  • @cosmicseer5103
    @cosmicseer5103 8 years ago

    Although the dangers of AI are quickly dismissed in this interview there is more to know.
    1. The history of AI that is given here only deals with the publicly acknowledged side of AI development. Due to its military significance, expense and the nature of the groups funding research we should assume that AI development is far more advanced than is assumed in this presentation.
    2. In the discussion about the dangers of AI what is never publicly spoken about or acknowledged is that an aggressive and invading AI far in advance of any AI on Earth seeks to take over our AI systems before taking over us and ultimately destroying all biological humanoids.
    3. Quantum computing and nanotechnology have a huge impact on the development and advancement of AI on Earth and on the agenda of the invading, off-world AI. I wouldn't expect the interviewee to relate either to the topic of an invading super-advanced AI, but he should at least acknowledge that quantum computing has the potential to revolutionize AI development.
    4. The invading consciousness, or off-world AI, works like a broadcast signal and can attach to the electrical field of a human, much like a virus, and can have a steering impact on the thought forms of the individual. Those in key high-tech jobs are targeted by the invading AI to promote technology and create the infrastructure it needs to take over. These contaminated individuals are blind to the dangers of AI and are known as 'prophets', since they constantly promote the advancement of technology.

  • @IJustMadeAComment
    @IJustMadeAComment 8 years ago

    Can't wait for AI to take off. Not sure I should be excited given the economic implications but I am.

  • @JorroBG
    @JorroBG 8 years ago

    I disagree that we humans communicate at 10 bits per second. This only applies to our verbal conversations. At the same time, we transmit at least a comparable amount of information with our voice intonation, and a lot more with our facial expressions. So our eyes, ears, and brains are heavily engaged in a real conversation, not to mention the background thoughts of our conscious and subconscious minds. Now add emotions to all this... An example: imagine a bitter argument between estranged spouses. Is this 10 bits per second?! Well, on the husband's side, it could be. But I'm talking about the wife's side here...

    • @feelMYgurth
      @feelMYgurth 8 years ago +1

      JorroBG, I liked your comment and interesting point; however, I'd add that a "bitter argument between estranged spouses" wouldn't really qualify as productive communication/computation. It sounds like a lot of noise rather than a signal 😆

  • @WilsonMar1
    @WilsonMar1 8 years ago

    [5:40] "half the US military air fleet are autonomous vehicles"

  • @ClayMann
    @ClayMann 8 years ago

    To me it seems obvious what will happen with cars. Self driving will become the standard way of getting around in just a few years. It's too useful and brilliant not to take off like a rocket. Now everyone that wants to drive their car because they love driving, they're going to get pushed out very quickly for one simple reason. Speed. Self driving cars will obey the speed limits of course but once you have roads full of nothing but self driving cars, there is really no need to obey a speed limit.
    The technology will reach a very high standard in a short period of time, thanks to the exponential improvement that happens with all digital technology. Cars and heavy trucks will be able to travel at speeds close to the limit of what the average car can handle, say 100 mph. As self-driving cars improve, those speeds will climb. At that point it will simply be illegal to drive a car, as a human being will not be able to drive safely with robot cars whizzing around.
    The benefits of getting to your destination in half the time will be far too seductive not to go for. The culture will quickly change to the point where anyone complaining that they want to drive like in the old days will be rapidly pushed out of the conversation.
    Furthermore, roads will start to be redesigned. They'll become much slimmer as AI cars do not need wide roads. Traffic jams will become a thing of the past as each car will be in touch with every other car and be able to co-ordinate to provide the best solution to keep all traffic flowing.
    Just imagine companies like Amazon having their delivery times halved. Big companies like that will lobby hard to get the laws changed, and public opinion will certainly be for it as people quickly adapt to cars just being there for them anytime they need them.
    So when this guy says we don't know what will happen: that's what will happen. It's inevitable, Mr. Anderson.

  • @markwilliams5654
    @markwilliams5654 8 years ago

    How funny how wrong they are. Tesla.