Book Summary and Review | Rebooting AI by Gary Marcus and Ernest Davis

Comments • 13

  • @Go-Meta
    @Go-Meta  1 year ago

    Hi all, I hope you enjoyed this video review.
    Have you read Rebooting AI? What did you think of it?
    Do you think it is still relevant today?

  • @AIIdeation
    @AIIdeation 1 year ago +5

    Thanks for the insightful review of "Rebooting AI". We need more scepticism about the AI hype.

    • @Go-Meta
      @Go-Meta  1 year ago +1

      Thanks, and yeah, it's hard to keep sensible bearings with everything that is going on in the world of AI at the moment. I'm hoping to strike the right balance on this channel between being fascinated and amazed by what is going on and, at the same time, being sceptical about much of the hype and deeply worried about the possible implications for society.

  • @rinekeverbrugge1780
    @rinekeverbrugge1780 1 year ago +1

    Thank you for your review! I read the book three years ago, and it was good to hear the perspective from 'after ChatGPT'.

  • @rosscads
    @rosscads 1 year ago +1

    Interesting review of "Rebooting AI" by Marcus and Davis!
    While the authors rightly identify the limitations of current deep learning AI systems, their proposed solution of hybrid deep-learning and symbolic systems seems misguided.
    Having worked on state-of-the-art hybrid AI systems in the early 2010s, I'm convinced they're a dead end for AGI. Symbolic systems are inherently limited by the knowledge of their programmers. To achieve true understanding, AI systems must be capable of independent learning.

    • @Go-Meta
      @Go-Meta  1 year ago

      Hi Ross, thanks!
      I agree that there are many misguided ways that symbolic systems could be added to neural network (NN) style AIs, but I do think there is something deeply interesting about the situation where we're using NNs that can't accurately do symbolic calculations as our 'best' AIs.
      In a much earlier video (th-cam.com/video/e7jWBE0QR9s/w-d-xo.html) I explain the view that I take on the relationship between reasoning with and without support from symbolic calculation. I take a slightly non-standard view that most of the time when we use the term 'rationality' we actually mean *calculative* rationality in the sense that our arguments are supported by some symbolic calculations of one kind or another. For example, much of science depends on doing accurate calculations.
      So, one way or another we need our AIs to be able to correctly and accurately do symbolic calculations, and as pure-play NNs it seems quite clear that they can't. Maybe NNs like ChatGPT just need to be connected to computers that do the calculations for them (maybe like the plugin that links to Wolfram Alpha; there's a rough sketch of this tool-calling idea at the end of this reply). Similarly, we're always going to want our bank accounts managed by 100% logically accurate traditional programs (e.g. based on SQL databases or whatever), but again the NNs could simply use and program these as we do.
      I guess the upshot of all this is that I agree there are many ways to get the NN to symbolic connection wrong, but I do think it is going to be needed.
      Thanks again for the comment!
      Oli
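
      P.S. For anyone curious, here is a minimal sketch of the tool-calling idea mentioned above: the network drafts an answer, and anything that needs arithmetic is handed off to an exact symbolic calculator. The `call_llm` function and the "CALC:" convention are hypothetical stand-ins (they're not any particular vendor's API); only the calculator part is real, deterministic code.

      ```python
      import ast
      import operator

      def call_llm(prompt: str) -> str:
          # Hypothetical stand-in for whatever LLM API you use (assumption, not a real API).
          raise NotImplementedError

      OPS = {
          ast.Add: operator.add, ast.Sub: operator.sub,
          ast.Mult: operator.mul, ast.Div: operator.truediv,
          ast.Pow: operator.pow, ast.USub: operator.neg,
      }

      def calculate(expression: str) -> float:
          """Evaluate an arithmetic expression exactly, with no neural guesswork."""
          def ev(node):
              if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                  return node.value
              if isinstance(node, ast.BinOp):
                  return OPS[type(node.op)](ev(node.left), ev(node.right))
              if isinstance(node, ast.UnaryOp):
                  return OPS[type(node.op)](ev(node.operand))
              raise ValueError("unsupported expression")
          return ev(ast.parse(expression, mode="eval").body)

      def answer(question: str) -> str:
          # Ask the model to either answer directly or emit "CALC: <expression>"
          # when it needs arithmetic done; the symbolic tool then does the sums.
          draft = call_llm(question)
          if draft.startswith("CALC:"):
              result = calculate(draft[len("CALC:"):].strip())
              return call_llm(f"{question}\nTool result: {result}\nFinal answer:")
          return draft
      ```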

  • @miraculixxs
    @miraculixxs 1 year ago +2

    Whenever you hear the words "AI can ..." you immediately have to add "under specific conditions" before even considering what it is that is allegedly possible.

    • @Go-Meta
      @Go-Meta  1 year ago +1

      Indeed (although brevity doesn't always allow full accuracy!).
      I would also note that many have complained that a further problem with current AI (and all probabilistic systems) is the lack of confident reproducibility, which is compounded by the fact that the systems keep being updated. So it would be even more accurate to say something like, "During this time period, many people were able to get AI X to do Y under specific conditions" 😀
      Thanks for the comment,
      Oli

  • @shawnvandever3917
    @shawnvandever3917 1 year ago +4

    I have to disagree with Marcus on many fronts. Gary likes to call it a "super auto-correct," and that in itself is an underestimate. These AI models identify patterns and make predictions based on those patterns; at a high level, this is what humans do as well. Basic reasoning cannot be accomplished by predicting the next word alone, which is where Gary likes to leave the narrative. In the short gap between GPT-3.5 and GPT-4 there was a major jump in intelligence, and taking the "Yesterday was fun" example, it can now answer that. The "hallucination" issue is not squashed, but it is much better, and OpenAI feels confident that this problem will be fixed soon. PaLM-E can recognize things in the real world that it hasn't been trained on: for example, it knows what a coffee cup is from the abstract understanding it formed as a language model. I would also point out another issue: we expect AI to be perfect. If that is the standard, we have already surpassed AGI in our expectations and moved on to superintelligence. Humans are prediction machines that are good with patterns, and because of this we ourselves will never be 100 percent accurate. We cannot replicate human intelligence without a certain amount of failure.

    • @Go-Meta
      @Go-Meta  1 year ago +2

      Hi Shawn,
      Personally I find the "LLM is sophisticated predictive text" framing a useful first-base analogy for anyone trying to get their head around what magic is going on here (indeed I made a video using this analogy: th-cam.com/video/hgpIOduCED0/w-d-xo.html), but I agree that it shouldn't be treated as more than a useful analogy for a broader audience.
      And, yeah, I've seen that GPT-4 gets the "Yesterday was fun" thing correct now, which is amusing and makes me glad that I highlighted the dangers of using specific examples. But of course, if GPT-X eventually passes all specific examples then scale will indeed have been enough :-)
      But already we're seeing people experiment with hybrid approaches, so actually I think Marcus will be "right" simply because scale alone is such an inefficient way to extend the capabilities of these kinds of systems. For example, this looks like an interesting extension of transformers that adds memory so that you don't have to just keep on growing the prompt size (Memorizing Transformers: arxiv.org/abs/2203.08913 ; a toy sketch of the idea is at the end of this reply). And I think others have linked LLMs to traditional computational systems.
      But Marcus could well be "wrong" in the sense that these hybrid systems may not end up providing humans with a symbolic view into their "mental" states and "beliefs". But, then again, the huge benefit of doing this would be in the explainability and safety of an AI system that could reveal to us this kind of view of its working.
      So, I think there will be economic and governance reasons why the "just scale it up" approach may be abandoned soon. No-one will feel much reason to be purist about this. They'll do what works efficiently. If that happens then the "will scale alone work" experiment will sort of be ended anyway.
      And I agree when you say "We cannot replicate human intelligence without a certain amount of failures", but I think this misses a key part of the hope / expectation about what AGI will be like. People talk about AGIs as if they'll be a cross between human intelligence and traditional computer accuracy and reliability, and *that* will not be achieved with scale of transformer networks alone. We'll always want our bank balance managed by a traditional symbolic approach, so whether the hybrid-ness lives inside one system or comes from AGIs using traditional computing as a tool, we will certainly have a hybrid future for computing.
      Thanks for the comment!
      Oli
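
      P.S. Here's a toy sketch of the kNN-memory idea from the Memorizing Transformers paper linked above: cached (key, value) pairs from earlier text are retrieved by similarity and mixed with ordinary local attention, so context can grow without growing the prompt. This is my illustration, not the paper's implementation; in particular the paper learns the mixing gate per head, whereas a fixed weight is assumed here.

      ```python
      import numpy as np

      def softmax(x):
          e = np.exp(x - x.max())
          return e / e.sum()

      def attend_with_memory(query, local_k, local_v, mem_k, mem_v, top_k=4, gate=0.5):
          """query: (d,); local_k/v: (n, d) current window; mem_k/v: (m, d) cached from earlier text."""
          # Ordinary attention over the local context window.
          local_out = softmax(local_k @ query) @ local_v
          # Retrieve the top_k most similar cached keys and attend over just those.
          scores = mem_k @ query
          idx = np.argsort(scores)[-top_k:]
          mem_out = softmax(scores[idx]) @ mem_v[idx]
          # The paper learns this gate; a fixed mixing weight is used here for illustration.
          return gate * mem_out + (1 - gate) * local_out
      ```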

    • @shawnvandever3917
      @shawnvandever3917 1 year ago +1

      @@Go-Meta I agree it will be hybrid. I guess my take on him has been that he thinks LLMs need to be thrown out so we can start over. I haven't read his book, only seen a few videos of him. One of the biggest faults of LLMs is not being able to catch when something is incorrect and make a new prediction. When a human's prediction is wrong, we sometimes catch it, update the data, and make a new prediction. Because of this alone, LLMs will not be reliable for self-driving, so yes, data is not enough. I'm gonna watch some more of your videos. Glad I bumped into your channel.

    • @miraculixxs
      @miraculixxs 1 year ago

      No, humans are not "prediction machines".

    • @shawnvandever3917
      @shawnvandever3917 1 year ago

      @@miraculixxs Ok, well, you have an opinion; however, that opinion is incorrect.