Mathematician debunks AI intelligence | Edward Frenkel and Lex Fridman

  • Published Apr 12, 2023
  • Lex Fridman Podcast full episode: • Edward Frenkel: Realit...
    Please support this podcast by checking out our sponsors:
    - House of Macadamias: houseofmacadamias.com/lex and use code LEX to get 20% off your first order
    - Shopify: shopify.com/lex to get a free trial
    - ExpressVPN: expressvpn.com/lexpod to get 3 months free
    GUEST BIO:
    Edward Frenkel is a mathematician at UC Berkeley working on the interface of mathematics and quantum physics. He is the author of Love and Math: The Heart of Hidden Reality.
    PODCAST INFO:
    Podcast website: lexfridman.com/podcast
    Apple Podcasts: apple.co/2lwqZIr
    Spotify: spoti.fi/2nEwCF8
    RSS: lexfridman.com/feed/podcast/
    Full episodes playlist: • Lex Fridman Podcast
    Clips playlist: • Lex Fridman Podcast Clips
    SOCIAL:
    - Twitter: / lexfridman
    - LinkedIn: / lexfridman
    - Facebook: / lexfridman
    - Instagram: / lexfridman
    - Medium: / lexfridman
    - Reddit: / lexfridman
    - Support on Patreon: / lexfridman
  • Science & Technology

Comments • 899

  • @LexClips
    @LexClips 1 year ago +8

    Full podcast episode: th-cam.com/video/Osh0-J3T2nY/w-d-xo.html
    Lex Fridman podcast channel: th-cam.com/users/lexfridman
    Guest bio: Edward Frenkel is a mathematician at UC Berkeley working on the interface of mathematics and quantum physics. He is the author of Love and Math: The Heart of Hidden Reality.

    • @quantum_ocean
      @quantum_ocean 1 year ago +2

      Terrible title, @lex: he's not talking about "AI" generally but about LLMs specifically.

    • @robertmartin2262
      @robertmartin2262 1 year ago

      I look at it the opposite way: a large language model would never have assumed that the square root of a negative number is impossible...

    • @reellezahl
      @reellezahl 1 year ago

      Lex, your guest didn't even scratch the surface on the issue. I'll summarise his argument:
      - It took humans centuries to break the barriers.
      - I *don't think* that LLMs can do this.
      That's a god-of-the-gaps argument.
      LLMs *at the moment* are just performing (roughly) two functions: imitation of and summaries of all the discussions that humans have conducted (both in forums and documentation) on the internet for the past decades.
      There are other systems that are going to come online soon that will put this linguistic mimicry to shame: *artificial reasoning* and experimenting. In part this is already being done.
      You don't even need to know much about this tech. As a kid I grew up hearing all these stories about *how special* so-and-so in Italy or England or wherever was. So hearing the same ol' tripe from this Russian mathematician made my eyes roll so hard. He did not bring anything new or substantial to this interview.
      Take √-1: there really is not anything more to this than: find an algebraic framework which extends the reals and solves X² + 1 = 0. Extensions of structures are _very_ common concepts. The fact that it took centuries for humankind to do this is not something to be in awe of but to be ashamed of. Give an AI a few such goals and it *will* come up with a suitable framework.
      I spent my life trying to show that all these ideas and results _can in principle_ be found independently, *without* an Einstein/von Neumann/Gödel, etc. And it works. (The historical proof of this is that mathematical results often get proved _completely independently_ by multiple people.) Some ingredients are: necessity-is-the-mother-of-invention [or: discovery] + reflection (about concepts and connections you already know) + refinement of ideas + test-cases. *THIS IS ALL STUFF YOU CAN AUTOMATE.*

    • @quantum_ocean
      @quantum_ocean 1 year ago

      @@reellezahl it's a bit more than imitation and summaries. It's creating and maintaining representations, among other things.

    • @reellezahl
      @reellezahl 1 year ago

      @@quantum_ocean sure, that's why I wrote 'some ingredients'. I would like to add: anybody who is not undergoing an existential crisis at the moment has not reflected enough on how thinking, discovery, etc. work, and may even think it's all just magic. I think Frenkel has reflected on history a lot, but not enough on mechanical thinking, esp. the current 'organic' paradigms being implemented.

  • @psyfiles7351
    @psyfiles7351 1 year ago +152

    This is now one of my favorite interviews. Love and math. What a guy

    • @FederatedConsciousness
      @FederatedConsciousness 1 year ago +3

      It was so good. Absolutely one of the best Lex has done. A conversation that pushes the boundaries of everything we know.

    • @Crusade777
      @Crusade777 1 year ago +2

      Yet people say, "Where did he debunk A.I./general intelligence?"

  • @julioguardado
    @julioguardado 1 year ago +78

    Frenkel has to be the most interesting mathematician ever. The whole interview is tops.

  • @SinanAkkoyun
    @SinanAkkoyun 1 year ago +118

    He is a top-tier mathematician and explains things to the audience in a simple, digestible, but still mysterious manner. Giving 'simple' examples like sqrt(-1) to explain the emotion hiding behind the concept just brings joy to me; such a lovely person!

    • @timsmith2525
      @timsmith2525 1 year ago +2

      That's the sign of a true expert: He can explain things clearly to non-experts.

    • @JoshTheTechnoShaman
      @JoshTheTechnoShaman 1 year ago +1

      You can tell this is how he wins the ladies 😂

    • @theecharmingbilly
      @theecharmingbilly 1 year ago

      Yeah, we watched the video too.

  • @thewildfolk6849
    @thewildfolk6849 1 year ago +40

    Wow, I not only totally get his point but also finally understood complex numbers from this. Fantastic guest, Lex

    • @jasonbowman9521
      @jasonbowman9521 1 year ago +2

      I don't know if it's exactly the same, but I find I understand concepts better when shown. I study 3D computer art as a hobby. In order to let the computer help make certain textures and bump maps, a person can use these things called nodes. There are texture nodes and geometry nodes, and I think, or tell myself, that I understand somewhat what those math formulas mean, because I can see objects changing in real time every time a node is adjusted. The 3D program is free. It's called Blender. And I think if a mathematician could learn it, they could figure out a way for everyday people to see certain things. I kind of get what a black hole is, but I doubt I could chart out everything that is going on.

  • @theTranscendentOnes
    @theTranscendentOnes 1 year ago +12

    such a great guest! thanks for bringing him on. he's eloquent, seems enthusiastic with such affection for the topic, and is probably the kind of person I could sit down and talk about stuff with for a long time. I love his accent too! adds "flavor"

  • @lacedmilk8586
    @lacedmilk8586 1 year ago +24

    Wow! This dude is absolutely passionate about math. There was pure joy in his eyes as he spoke.

  • @angelcastro3129
    @angelcastro3129 1 year ago +21

    Edward Frenkel... a beautiful mind, but what a beautiful soul this man has. It shows through his eyes, wise yet childlike. Awesome. Thank you, Lex, great interview.

  • @Gordin508
    @Gordin508 1 year ago +18

    When going through education, consider yourself blessed if you got teachers/instructors/professors who are as passionate about their field as Frenkel is about math.

    • @Brian6587
      @Brian6587 1 year ago +2

      I had one such teacher in high school, and he turned something I hated into something I loved! It makes a difference!

  • @justin4202
    @justin4202 1 year ago +7

    Best clip ever. Wow. Raw and true and vulnerable. Great job catching this moment. My goodness

  • @CaptainValian
    @CaptainValian 1 year ago +3

    Brilliant discussion.

  • @jjacky231
    @jjacky231 1 year ago +71

    When I was a kid, many people explained why computers would never be as good at chess as the best humans. The explanations were similar to Edward Frenkel's: "There just is something that we get and computers won't get in chess / mathematics."

    • @agatastaniak7459
      @agatastaniak7459 1 year ago +9

      Back then we didn't know that a master chess player simply recalls about 70 possible tactical combinations for a new move per minute. Now we know it, so it's more than obvious that it's all about the speed at which someone or some device can perform such an operation. The people you mention didn't have this knowledge, which is why nowadays we judge them too harshly.

    • @jjacky231
      @jjacky231 1 year ago +32

      @@agatastaniak7459 Ray Kurzweil predicted back in the '80s that a computer would beat the chess world champion. He also roughly predicted this: "When a computer beats the world chess champion, one of three things will happen: people will think more highly of computers, less highly of themselves, or less highly of chess. My guess is the latter." He was right. Everybody knew that computers could compute faster than humans and that they would become even faster, and that software would become better and better. But people thought that wouldn't be enough.
      And I don't judge the people back then harshly. It was easy to underestimate the potential of computers. But I think it's not wise to make the same mistake again.

    • @amotriuc
      @amotriuc 1 year ago +19

      Edward Frenkel didn't say that this can never be done; he was specific that he thinks LLMs can't do it. I suspect he is right, and I suspect the OpenAI guys know this as well; they just build up hype to get more funding. It's not like this is the first time that has happened.

    • @AKumar-co7oe
      @AKumar-co7oe 1 year ago +1

      @@agatastaniak7459 the same thing is true for regular computation - at this point we know we are running an algorithm

    • @heywrandom8924
      @heywrandom8924 1 year ago +10

      @@amotriuc there is also an interview with the CEO of OpenAI on this channel, and he also looked doubtful that LLMs will be enough for AGI, but he says he wouldn't be too surprised if it turns out that GPT-7 or GPT-10 is an AGI. The thing is that these models have emergent capabilities that can suddenly appear once they become large enough.

  • @AnimusOG
    @AnimusOG 1 year ago +8

    This guy is truly awesome, great interview Lex!

  • @splashmaker2
    @splashmaker2 1 year ago +5

    It might depend on how you define imagination, but then how do you categorize experts learning new moves from AlphaGo/AlphaZero? Were those moves not imagined if they had not been played before?

  • @limelightmuskoka
    @limelightmuskoka 1 year ago

    So elegant in discussing such a complex and mysterious topic.

  • @dannygjk
    @dannygjk 1 year ago +48

    AI has functional intelligence, but it is not the same as human intelligence, just as birds do not fly in the same way that aircraft fly.

    • @aaronjennings8385
      @aaronjennings8385 1 year ago +10

      Interesting analogy. I'll remember that as an example.

    • @PabloVestory
      @PabloVestory 1 year ago +3

      And humans have consciousness, whatever that means. Whether it's possible for AIs to sustain some kind of "real" (not "simulated") self-awareness is yet to be proven.

    • @dannygjk
      @dannygjk 1 year ago +4

      @@PabloVestory I think self-awareness in AI will be different from humans.

    • @ChristianIce
      @ChristianIce 1 year ago +5

      @@PabloVestory
      AI only mimics intelligence. It could even mimic consciousness, but mimicking is the very foundation of how it works.
      The mimicking process can be extended and improved to the point where it's indistinguishable from the real deal, but it will still be mimicking.
      AGI, on the other hand, is a different approach: it's the attempt to create an actual thinking machine.
      As Carmack said, the first iteration of AGI will probably look like a 4-year-old kid, and you start from there.

    • @alexnorth3393
      @alexnorth3393 1 year ago

      @@ChristianIce
      No, they don't mimic intelligence.

  • @spacebunyip8979
    @spacebunyip8979 1 year ago +11

    I want to read this man’s ChatGPT history. I’m sure it would be fascinating

    • @5sharpthorns
      @5sharpthorns 1 year ago +1

      Omg right?!

    • @ChatGPT1111
      @ChatGPT1111 1 year ago +2

      Well, he has an affinity for Isaac Asimov, plus Rick and Morty, Jerry Springer shorts, and Dilbert (fav is Dogbert).

    • @ronking5103
      @ronking5103 1 year ago

      Probably not. It'd be him correcting the machine over and over, at least if he was attempting to plumb the depths of his expertise. The rest of it would amount to the machine being convincing enough to seem expert in a field, but only because the user isn't. It'll get better, but right now its purpose is not expertise; it's general information that we should all take as friendly, if not accurate, advice.

    • @shyshka_
      @shyshka_ 1 year ago +2

      ChatGPT at that level of expertise is useless

  • @Thomas-sb8xh
    @Thomas-sb8xh months ago

    "A mathematician is one who knows how to find analogies between theorems; a better one, who sees analogies between proofs; a still better one, who sees analogies between theories; and one can imagine one who sees analogies between analogies." Stefan Banach, Polish mathematician, one of the greatest who ever lived... a Feynman/Frenkel type, so you would all love him. Fantastic interview ))))

  • @christopherrobbins0
    @christopherrobbins0 1 year ago +2

    What we know about consciousness already seems to prove that something fantastical lies beneath the surface of our current knowledge. The ancients seem to have understood this much better than we do now.

  • @Ronnypetson
    @Ronnypetson 1 year ago +12

    In order to search for new mathematical concepts, an LLM would have to be grounded not only in natural language but also in things like formal logic, as a mathematician is. Because natural language already carries some logic in it, current LLMs can already "create" new concepts.

    • @georglehner407
      @georglehner407 1 year ago +6

      For "new" mathematics, that's not good enough either. It needs to be able to discard, forget, and boil down things it has learned to distill the "most useful concepts". A mathematician who is good at formal logic and nothing else is still a poor mathematician.

    • @hayekianman
      @hayekianman 1 year ago +3

      then it would indeed be the stochastic parrot it is called. The mathematician knows what to discard.

    • @Ronnypetson
      @Ronnypetson 1 year ago

      @@georglehner407 in this case there is some notion of value that good human mathematicians have. This notion may or may not be learned by an AI. Can you think of something like that?

    • @Ronnypetson
      @Ronnypetson 1 year ago +3

      @@hayekianman I agree with the stochastic part but not so much with the parrot part. We humans are stochastic too. The mathematician knowing what to discard can be emulated by a stochastic guided search that has learned how much weight to put on each decision.

    • @BitwiseMobile
      @BitwiseMobile 1 year ago +9

      Incorrect. They don't create anything. They iterate over their already-known knowledge. They cannot - yet - recognize that they don't have the correct knowledge and try to improve themselves. That's called GAI - or general AI - and it's very scary. We are working on that. Generative AI is very different. The fact that you can game generative AI using prompts tells you everything you need to know. I have told it ridiculous stuff before, and it happily agreed with me and proceeded to iterate over that bullsh!t. That's not cognition, and it's not innovation. It might seem like that to us, but it's really just reflecting back what you are saying to it. It's not innovating; you are.

  • @The-KP
    @The-KP 1 year ago +8

    "Everybody knows that the dice are loaded, everybody rolls with their fingers crossed"

    • @sunandablanc
      @sunandablanc 1 year ago +4

      "Everybody knows the war is over, everybody knows the good guys lost"

    • @aaronjennings8385
      @aaronjennings8385 1 year ago +1

      The cavalry isn't coming.

    • @Gizziiusa
      @Gizziiusa 1 year ago +1

      "Everybody knows...Da' po' always bein' fucked ova by da' rich. Always have...Always will." Keith David, Platoon (1986)

  • @steliostoulis1875
    @steliostoulis1875 1 year ago +198

    The title sounds awkward and wrong somehow....

    • @AutitsicDysexlia
      @AutitsicDysexlia 1 year ago

      Yeah... almost redundant and repetitive... like a pleonasm.

    • @BKNeifert
      @BKNeifert 1 year ago +15

      No, AI is debunked. It makes perfect sense.

    • @dannygjk
      @dannygjk 1 year ago +8

      @@BKNeifert Need to agree on definitions.

    • @BKNeifert
      @BKNeifert 1 year ago +36

      @@dannygjk It's hard to say. Have you ever looked at AI? It doesn't think. It just repeats what it's programmed to say. It doesn't have the capacity to understand.
      Like, can it make beautiful pictures? Yes. But it doesn't make meaningful pictures.

    • @BKNeifert
      @BKNeifert 1 year ago

      @@dannygjk Like, I doubt AI could understand the Romantic Poets, or write something like Coleridge or Southey. If it tried, it'd be vapid, discordant.
      A lot of the metaphor AI creates is within the human mind itself, programming the AI to create it. It's not creating; the human is, and the AI interprets that and then vomits out a sort of copy of what the person who gave the prompt said, only in more detail.
      And it also plagiarizes. I've noticed that, too.

  • @masteryoda9044
    @masteryoda9044 1 year ago +1

    Do we have any use for fractional dimensions, or even complex ones, and not just integral ones?

  • @peterbellini6102
    @peterbellini6102 1 year ago

    At the core of his statements is the fact that humans use inferential reasoning, not just the compilation of data. There's the learning of facts, even the curation and organization of facts, but the leaps come from our DRAM. I'm not a mathematician, but this was a very enjoyable video. Kudos for the Einstein references!

  • @nickr4957
    @nickr4957 1 year ago +1

    I think that the creative spark that Frenkel is describing is what philosophers call abductive inference, as opposed to deductive and inductive inference.

  • @leighedwards
    @leighedwards 1 year ago +72

    Where in this clip did Edward Frenkel debunk AI intelligence?

    • @zfloe
      @zfloe 1 year ago +29

      Clickbait, sadly

    • @BillStrathearn
      @BillStrathearn 1 year ago +41

      The person that Lex hires to write titles for his TH-cam clips is truly the worst person

    • @falklumo
      @falklumo 1 year ago +21

      Well, Frenkel indeed argues that LLMs won’t be able to show imagination like humans do. But AI and LLMs aren’t synonymous.

    • @timorantalainen3940
      @timorantalainen3940 1 year ago +14

      @@falklumo I don't think we have a single example of AI that did not involve teaching or creating a model. There is no space for imagination in the methods available today, and hence the clickbait is somewhat warranted, in my opinion.
      Based on the fact that we can imagine and invent new rules (e.g., add another dimension to make room for complex numbers), it cannot be ruled out that a creative AI arises at some point, but as far as I know we have no idea how that might be achieved at the moment. The thing holding us back is our lack of understanding of how consciousness arises in the brain. Unless we understand that, we cannot manufacture such a system other than by accident.
      Please correct me if I'm wrong. I'd be curious to read up on machine intelligence methods that are not dependent on creating a model, or even just a description of such a method in case we haven't yet managed to implement it.

    • @WralthChardiceVideo
      @WralthChardiceVideo 1 year ago +5

      In the title of the video

  • @D.Eldon_
    @D.Eldon_ 1 year ago +1

    _@Lex Fridman_ -- Edward Frenkel is brilliant and I appreciate his insights and his humility very much. Thanks for posting this video clip.
    For another, more down-to-earth, perspective on complex math, you should interview an engineer. You know, the people who apply the crazy things mathematicians dream up. A good electro-mechanical engineer can easily provide tons of real-world examples where the "imaginary" number system is essential to describe the day-to-day reality we observe. For example, audio engineers would know nothing about phase without complex math. They would have no idea how two seemingly identical sound waves (identical magnitudes) can completely cancel (when they are 180° out of phase). And it goes even deeper because complex math is at the center of the Heisenberg uncertainty principle. In audio we can know everything about the magnitude of sound. But if we do, we'll know nothing about when in time the sound occurred. On the other hand, we can know everything about the time when a sound occurred, but we'll know nothing about its magnitude. Both cannot be fully known at the same time, creating the uncertainty. This is why advanced audio measurements systems must trade the magnitude-frequency domain for the time domain, depending on the job requirement. And it illustrates how complex math affects the macro world -- not just the micro world of quantum mechanics.
    Then along came a clever guy (Richard Heyser 1931-1987) who discovered that you could map mathematically into an abstract dimension via a Hilbert transform and operate simultaneously on both the magnitude and phase of sound, then map back to our reality with the result. The technology this birthed is Time Delay Spectrometry or TDS. Heyser applied this same "trick" to medical MRI (magnetic resonance imaging) systems to greatly increase their resolution.
    This just touches the surface of the amazing way complex math weaves throughout our world. Another great example is kinetic vs potential energy. Kinetic energy requires the "real" numbers and potential energy requires the "imaginary" numbers.
    It bugs me no end that we are stuck with these awful names for these two essential number systems. I wish we could do away with the "real" and "imaginary" labels and call them something else.
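
    To make the phase-cancellation point above concrete, here is a minimal numpy sketch; the 1 kHz tone and 48 kHz sample rate are arbitrary illustrative assumptions, not from the comment:

        import numpy as np

        # Two 1 kHz sine waves with identical magnitudes, one shifted 180 degrees.
        fs = 48000                                 # sample rate, Hz
        t = np.arange(fs) / fs                     # one second of samples
        a = np.sin(2 * np.pi * 1000 * t)
        b = np.sin(2 * np.pi * 1000 * t + np.pi)   # same magnitude, opposite phase

        # A 180-degree shift multiplies the underlying phasor by e^{i*pi} = -1,
        # so the two waves sum to (numerically) zero: complete cancellation.
        print(np.allclose(a + b, 0.0, atol=1e-9))  # True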

  • @waterkingdavid
    @waterkingdavid 1 year ago

    At long, long, long last! I was beginning to wonder if there was anyone who was prepared to speak about mystery and uncertainty. Made my day! And year!

  • @bananakuma
    @bananakuma months ago

    Regarding the complex number point, you can just explicitly ask the "AGI" to probe things in mathematics that have historically been seen as impossible or unintuitive. That seems a very simple "fix" for an advanced LLM (with mathematical reasoning) to discover complex numbers, etc.

  • @MrDarwhite
    @MrDarwhite 1 year ago +40

    He asserted it. There was no evidence provided.

    • @jarodgutierrez5389
      @jarodgutierrez5389 1 year ago +8

      You must be a fellow prompt engineer.

    • @MrDarwhite
      @MrDarwhite 1 year ago +6

      @@jarodgutierrez5389 I've been playing around, but my main issue is that of a person who follows the process of skeptical inquiry. He provided no evidence or even a logical argument. Nothing. He simply asserted that it was not possible. I'm not claiming it is, but I'm certainly not going to claim it's not possible, especially after playing with GPT-4 and its ability to reflect on its answers without any specific prompting. It seems trivial to me to have an AI system throw out prior assumptions one or two at a time and see what the results are. Not exactly imagination, but it would likely solve his example. Having said that, I wish I could call myself a prompt engineer. As a programmer, that level of expertise would be very valuable.

    • @MrDarwhite
      @MrDarwhite 1 year ago +4

      @@seventeeen29 will do. To be fair, he doesn’t provide the evidence in this clip, and the title of this clip is where I have the issue. He seems like a great guy and I enjoyed what he said.

    • @andrewshantz9136
      @andrewshantz9136 1 year ago +3

      He's making the point that complex numbers have a strictly conceptual meaning, which is not conceivable to an LLM because it is not extrapolatable from past knowledge.

    • @hardboiledaleks9012
      @hardboiledaleks9012 1 year ago

      @@andrewshantz9136 LLM, where the L stands for LANGUAGE, not MATHEMATICS...
      Wait until some bozo trains a mathematics or algebra model on the same level as GPT-4, and it will shit all over your world of fkin complex numbers... Humans really aren't as clever as they think.

  • @jaydawgmac88
    @jaydawgmac88 1 year ago +1

    To summarize: LLMs predict the most common answers to a particular input. Solving complex problems requires imagination and predictions that go AGAINST the grain and the expected future. LLMs have to keep predicting the future based on the past.

    • @jaydawgmac88
      @jaydawgmac88 1 year ago

      I love the example of thinking about dimensions as powers of 2 and wondering why that's the case. Very powerful and inspiring example for anyone who wants to be a mathematician. He said so much in that one section. Does he mean that 3 dimensions are not currently compatible with mathematics because multiplication can't be defined? 1, 2, 4, and 8 dimensions were viewed as OK, but something was wrong with 3; it couldn't support multiplication on some level. Perhaps time is such a critical component that we can't have 3 dimensions without time, and by then you just jump from 2 to 4 dimensions? Very awesome interview. Gets your brain thinking. Time to go ask ChatGPT some follow-up questions 😅

  • @arthavjoshi
    @arthavjoshi 1 year ago

    we need more people like Edward Frenkel... they are the ones who create the "foundation"!

  • @shaylove3786
    @shaylove3786 1 year ago

    This is what passion looks like. Wow Wow Wow

  • @AndreaCalaon73
    @AndreaCalaon73 1 year ago +9

    Dear Lex, I can't resist commenting on what Edward Frenkel says in this interview.
    He uses the discovery of complex numbers as an example of something that an artificial intelligence could not come up with.
    I think that example shows precisely the opposite.
    Let me first mention that since the late 1960s, mainly thanks to the work of David Hestenes, we have known what complex numbers are, and their intuitive and simple geometrical meaning, contrary to what E. Frenkel suggests. Geometric Algebra defines a "well-behaved" product in 3D, one which exists in any dimension, not only in 2, 4, and 8 as Frenkel says. You can look up "Geometric Algebra" yourself.
    I am well convinced that an AI with some "model-based reasoning" would have discovered the marvellous and beautifully symmetric structure of Geometric Algebra together with the few rules for 2D that Gauss and other mathematicians discovered centuries ago, when the story of the complex numbers originated. The absence of the structure of Geometric Algebra kept the simple significance of complex numbers (rotors) hidden and created the myth that Frenkel describes.
    In other words, an AI would not have been foolishly fascinated with the mysteriousness of the complex numbers, so incomplete and unjustified, because it would have arrived straight at the structure of Geometric Algebra, inside which complex numbers, quaternions, octonions, the vector product, rotation in any dimension, ... are all easily explained with a single product!
    Geometric Algebra impacts quantum mechanics, computer graphics, general relativity, ...
    Complex numbers are just rotors ... (see the rotor sketch after this thread)
    Have a nice weekend Lex!
    Well done, as always!!!!!

    • @DingbatToast
      @DingbatToast 1 year ago +6

      I agree. I don't believe an AI would get hung up on the same things (or in the same way) humans do.

    • @electrocademyofficial893
      @electrocademyofficial893 1 year ago +5

      I think the overarching point he was making (i.e., irrespective of how suitable his example was) is that AI may not make the leap to a concept that contradicts, or goes beyond, the knowledge/theory that currently exists and that the AI has been trained on. I agree with you that, given Geometric Algebra exists, the AI may well have started readily involving the square root of -1 if it was relevant to something asked of it. But if, say, Geometric Algebra hadn't existed or been discovered, so that it wasn't part of the AI's training data, the AI mightn't come up with it; and more to the point, for anything asked of the AI where there's nothing in its training data/historical research to build on, an answer would ultimately require a mental leap, or some method of making an insight beyond any methodology programmed or randomised into it. ((As an aside, that's interesting about Geometric Algebra.))

    • @coolcat23
      @coolcat23 1 year ago +4

      @@electrocademyofficial893 I believe Frenkel was rather justified in wanting to cite someone else to give weight to his view on AI, because in his heart of hearts he knows that human brains are not magical. If current AI implementations cannot make leaps yet, it is because they are not operating at meta levels yet. An AI "simply" has to know of an example of a leap in one field to be able to apply it to another field. Voilà, there's your leap that isn't possible by simply extrapolating at the same level. We can romanticize human intelligence all we want; the writing is on the wall: at some point in time, AI is going to outperform us in every mental capacity.

    • @GeekProdigyGuy
      @GeekProdigyGuy 1 year ago +1

      @@coolcat23 saying "at some point in time" isn't interesting. If it takes 1000 years, nobody alive today will care. The question is exactly how long it will take to achieve the necessary breakthroughs. No AI up until now has truly "invented" anything akin to what we ascribe to the greatest of human intellect.

    • @electrocademyofficial893
      @electrocademyofficial893 1 year ago +1

      @@coolcat23 I agree with both what you've said and what epicwisdom has said regarding the "when" being a key central point. I also think such a leap may come from the AI realising, or being effectively programmed, to incorporate some kind of randomisation algorithm that tries mathematically probable ways of making potential leaps, and/or what you've mentioned. With the current programming/mathematics, probably not, but with further developments and the things we've spoken of, quite possibly if not definitely, as you say.
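
    A minimal sketch of the "complex numbers are just rotors" claim in the comment above: multiplying by the unit complex number e^{i*theta} rotates a point of the plane by theta (standard library only; the 90-degree angle is an arbitrary choice):

        import cmath, math

        p = complex(1, 0)                    # the point (1, 0) in the plane
        rotor = cmath.exp(1j * math.pi / 2)  # unit complex number e^{i*pi/2}

        q = rotor * p                        # rotate p by 90 degrees
        print(round(q.real, 12), round(q.imag, 12))   # 0.0 1.0, i.e. the point (0, 1)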

  • @MrSidney9
    @MrSidney9 1 year ago +3

    Wolfram told the story of him playing with GPT-3.5. He asked it to write a persuasive essay arguing that blue bears exist. So ChatGPT started with "Most people don't know this fact, but blue bears do exist... They are found in the Tibetan mountains... their color doesn't come from pigment; instead it comes from a phenomenon analogous to how butterflies produce colors..." Near the end of the essay, he was like, "Wait a minute, do blue bears actually exist?" He had to google it to make sure.
    Now tell me again that AI can't have imagination.

    • @leandroaraujo4201
      @leandroaraujo4201 1 year ago +1

      I am not denying the idea that AI models can have imagination or emotion, but that story just means that the AI can be convincing, not necessarily imaginative.

    • @heinzditer7286
      @heinzditer7286 1 year ago +1

      There is no reason to assume that a computer can have emotions.

    • @MrSidney9
      @MrSidney9 1 year ago +1

      @@leandroaraujo4201 How did it manage to be convincing? By MAKING UP plausible facts. That's what imagination is about.

    • @leandroaraujo4201
      @leandroaraujo4201 1 year ago

      ​@@MrSidney9 *It* managed to be convincing by arranging its ideas and using words in a certain way, in order to be persuasive. Those ideas could have come from imagination, but imagination is completely secondary to the ability to convince someone. You can convince someone of something false with facts (e.g. confusing correlation with causation).

    • @MrSidney9
      @MrSidney9 1 year ago +1

      @@leandroaraujo4201 My working definition of imagination is the faculty to create/conjure concepts of external objects not available in the real world. It did just that and managed to be convincing (a testament to the coherence of its imagination). Hence it proved it could be both convincing and imaginative.

  • @BEDLAMITE-5280ft.
    @BEDLAMITE-5280ft. 1 year ago +12

    The "observed and observer" is a phrase coined by Jiddu Krishnamurti, then taken up by David Bohm and used in his description of quantum mechanics. I always find that fascinating.

  • @ChristianIce
    @ChristianIce 1 year ago

    AI cannot come up with new ideas, but it can see patterns in a large set of data that we didn't notice.
    It's not an emergent property, it's an unexpected result.
    Given the impossibility for a human being to read and memorize said dataset, unexpected results are to be expected.

    • @katehamilton7240
      @katehamilton7240 1 year ago

      IKR? I ask mathematician/coder AGI alarmists: "What about the fundamental limitations of algorithms, indeed of math? Have you investigated them? Won't these limitations make AGI impossible?" Jaron Lanier admits AGI is a sci-fi fantasy he grew out of.

  • @NolanManteufel
    @NolanManteufel 1 year ago

    This is a good one.

  • @MoversOnDutyUSA
    @MoversOnDutyUSA 1 year ago

    The square of -1 is equal to 1. In other words, (-1) multiplied by (-1) gives us 1.
    However, it is not possible to take the square root of -1 in the real number system. In order to represent the square root of -1, mathematicians use the imaginary unit "i", which is defined as the square root of -1. Therefore, the square root of -1 is represented as "i" in mathematics.
    So the square root of -1 can be written as √(-1) = i.
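
    The distinction above is easy to see in code: the real-valued square root of -1 fails, while the complex one returns i. A minimal Python illustration:

        import math, cmath

        try:
            math.sqrt(-1)                 # real number system: no square root of -1
        except ValueError as e:
            print("real sqrt fails:", e)  # "math domain error"

        print(cmath.sqrt(-1))             # 1j, i.e. the imaginary unit i
        print(1j * 1j)                    # (-1+0j): i squared is -1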

  • @alikazemi5491
    @alikazemi5491 1 year ago

    For GAI to understand that sqrt(-1) could have real value, it's a matter of learning to do design of experiments, which means it will eventually construct it.

  • @cnrspiller3549
    @cnrspiller3549 1 year ago +14

    I remember being taught imaginary and complex numbers, and I remember hearing my brain say, "That's it, I'm out of here".
    That was the point at which me'n'maths bifurcated. But I often reflected on the first maniac to pursue imaginary and complex numbers; what sort of lunatic does that? Now I know he was the same fella that invented the double CV joint - weird.

    • @abeidiot
      @abeidiot 1 year ago +2

      funny. That was when I got back into math.
      I suck at arithmetic, but actual mathematics is fascinating

    • @rokko_hates_japan
      @rokko_hates_japan 1 year ago +1

      I agree. I still think they are meaningless, just a substitute for things we cannot comprehend.
      They're used in formulas to reach a solution, but it seems one starts with the conclusion one wants and fills in nonsense to get there.

    • @shyshka_
      @shyshka_ 1 year ago +5

      @@rokko_hates_japan how are they meaningless if they're literally used in engineering all the time, and not just in theoretical maths?

    • @Gizziiusa
      @Gizziiusa 1 year ago

      lol, kinda like how when you try to divide by zero with a calculator, it says ERROR.

    • @kingol4801
      @kingol4801 1 year ago

      @@Gizziiusa Because that expression does not have meaning.
      They could have also written "infinity" or "undefined". Would you be happy then?
      It is simply NOT a number when you divide by 0.

  • @yvesbernas1772
    @yvesbernas1772 1 year ago

    What does the title have to do with the interview?

  • @solarwind907
    @solarwind907 1 year ago +7

    Here's to the amazing teachers in our lives! Thank you, Lex and Mr. Frenkel!

    • @mikewiskoski1585
      @mikewiskoski1585 1 year ago

      They said a lot of words, I'll give you that much.

  • @laxmanneupane1739
    @laxmanneupane1739 1 year ago

    So, Bilbo Baggins was a mathematician too! (Huge respect for the guest)

  • @erlstone
    @erlstone 1 year ago +3

    as they say.. when u know the rules, u can break the rules

  • @stt5v2002
    @stt5v2002 1 year ago +1

    You could make a good argument that a machine intelligence would more easily embrace complex numbers than humans do. After all, humans are endlessly constrained by "that's not allowed" or "that doesn't make sense." These are basically emotions. A program that can self-improve would already have the quality of "there are some things that are true but that I don't already know and understand."

    • @Martinit0
      @Martinit0 1 year ago

      I would not say emotions, but rather false conclusions rooted in an insufficient understanding of underlying assumptions.

  • @hillosand
    @hillosand 1 year ago +2

    I mean, LLMs aren't going to be the models that advance mathematics, but even still, you could try to program a neural network to 'play', e.g. allow it to ignore certain rules in order to solve problems. Cool episode though.

  • @shadowpapito
    @shadowpapito 1 year ago

    Thank you

  • @thzzzt
    @thzzzt 1 year ago

    I had no idea Girolamo Cardano was a conehead. But of course. Explains a lot.

  • @particleconfig.8935
    @particleconfig.8935 1 year ago +1

    In my opinion this argument starts off with the assumption that the LLM can't deduce the new way of thinking simply from the historical data of said mathematician who pondered sqrt(-17). It can deduce, even from only that one instance, that divergent "thinking" needs to be done. If I'm wrong, how?

    • @dolosdenada771
      @dolosdenada771 1 year ago

      You are not wrong. He quotes Einstein suggesting imagination is unlimited. He then goes on to say he can't imagine AI solving X.

  • @danielmurogonzalez1911
    @danielmurogonzalez1911 1 year ago +1

    What about searching for number structures in dimension 16? I got curious, since he said only powers of 2 made sense, and 16 is a power of 2.

    • @almightysapling
      @almightysapling 1 year ago +2

      There's a system for those too. What he failed to mention is that with every step we go up, we lose an important property: quaternions are not commutative, octonions are not associative. The sedenions don't get much love because they have so few properties left that we just don't care about them. (A small demo of the lost commutativity follows this thread.)

    • @reellezahl
      @reellezahl 1 year ago

      @@almightysapling for an algebra with 2^n generators (and basis elements wrt the additive structure?), what exactly do we demand? Is it always an algebra over ℝ? Or is it an algebra over the previous (2^{n-1}) algebraic structure?
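
    A small demo of the property loss discussed above, a minimal sketch using the Hamilton product to show that quaternion multiplication is not commutative:

        def qmul(a, b):
            # Hamilton product of quaternions represented as (w, x, y, z)
            w1, x1, y1, z1 = a
            w2, x2, y2, z2 = b
            return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                    w1*x2 + x1*w2 + y1*z2 - z1*y2,
                    w1*y2 - x1*z2 + y1*w2 + z1*x2,
                    w1*z2 + x1*y2 - y1*x2 + z1*w2)

        i = (0, 1, 0, 0)
        j = (0, 0, 1, 0)
        print(qmul(i, j))   # (0, 0, 0, 1)  = k
        print(qmul(j, i))   # (0, 0, 0, -1) = -k, so i*j != j*i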

  • @maureenparisi5808
    @maureenparisi5808 1 year ago +1

    This is plainly speaking inbox, the yellow brick road of progress.

  • @MrAnderson2845
    @MrAnderson2845 1 year ago

    It's almost like complex numbers exist in a higher dimension, and we know they exist but don't know what they are. Yet they are directly linked to us and somehow describe our physical world, as in the Mandelbrot set.

  • @nate_d376
    @nate_d376 1 year ago

    Same clip as the other video? Or did he remake the title and upload it?

  • @arboghast8505
    @arboghast8505 1 year ago +2

    It's all nice and well explained, but how does it relate to AI?

  • @ben_spiller
    @ben_spiller 8 months ago

    There's nothing stopping an AI from adopting the hypothesis that the square root of a negative exists and seeing what happens.

  • @ye333
    @ye333 1 year ago +1

    AI doesn't need to be exactly like a human to have intelligence or to replace humans.

  • @IndustrialMilitia
    @IndustrialMilitia 1 year ago

    The tool is phenomenology. The philosophical method for describing subjective experience has already been developed.

  • @sonarbangla8711
    @sonarbangla8711 1 year ago

    Complex number i is defined as a ratio of effect to cause, when in a complex number z=x+iy, change in effect y due to change in cause x, mapped on to the w plane. i= effect y/cause x.

    • @reellezahl
      @reellezahl 1 year ago

      what the heck? No. You just extend the algebraic structure (ℝ, +, ·, 0, 1) to (ℝ[X] / ⟨X²+1⟩, +, ·, 0, 1), which can be done since the polynomial X²+1 ∈ ℝ[X] is irreducible over ℝ. By irreducibility, ℂ := ℝ[X] / ⟨X²+1⟩ constitutes a field, and the (equivalence class of the) polynomial X is invertible and satisfies X² = -1. One then simply sets i := X (or -X; it doesn't really matter). There's nothing more to it than this. (A small sketch of this construction follows this thread.)
      Also, there is no cause-and-effect involved anywhere here.

    • @sonarbangla8711
      @sonarbangla8711 1 year ago

      @@reellezahl Please refer to page 217 of Tristan Needham's VISUAL COMPLEX ANALYSIS and the definition of complex number i.

    • @reellezahl
      @reellezahl 5 months ago

      @@sonarbangla8711 no thanks. I did advanced algebra at university and already have enough literature. I don't need 'visualisations' designed either for children or to patronise adults who cannot think abstractly or process abstract information.
      Btw _i_ is not, in the primary sense, a 'complex number'. It is (one of the two) zeroes of X² + 1. A complex number is an element of the field obtained by extending ℝ in the smallest possible way such that it contains one (and thereby both) of these zeroes. Before this field is defined, _i_ is just a loose entity, not (yet) a member of that field, and thereby not (in the primary sense) a complex number.
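
    A minimal sketch of the quotient construction described above: represent the class of a + bX in ℝ[X] / ⟨X²+1⟩ as a pair (a, b), reduce X² to -1 during multiplication, and the class of X then squares to -1:

        def mul(p, q):
            # (a + bX)(c + dX) = ac + (ad + bc)X + bd*X^2, with X^2 = -1 in the quotient
            a, b = p
            c, d = q
            return (a*c - b*d, a*d + b*c)

        X = (0.0, 1.0)     # the class of the polynomial X, i.e. "i"
        print(mul(X, X))   # (-1.0, 0.0): X^2 = -1, exactly as the quotient demands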

  • @lancemarchetti8673
    @lancemarchetti8673 1 year ago

    Great interview indeed!
    Here's an example of how far we still have to go... Just take a look at this simple instruction that the poor bot fumbled this evening as I enjoyed a cup of coffee and a few giggles.
    User:
    aAbcde9fz
    list every 2nd character in the above string please.
    ChatGPT:
    Sure, here are the every second characters in the string "aAbcde9fz":
    "Abd9z"
    User: why did you include 'b' ?
    ChatGPT: I apologize for the confusion. The correct every second character in the string "aAbcde9fz" is:
    "Abe9z"
    User:
    No. The correct answer is actually: Acef
    ChatGPT:
    I apologize for my mistake earlier. You are correct that the correct every second character in the string "aAbcde9fz" is:
    "Acef"
    // Unless the bot, being a computer, takes it for granted that the first character of any given string is index '0', and then 1, 2, 3, etc.?
    But even so, its first answer is still wrong. Or am I missing something?
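
    For what it's worth, the ambiguity in the exchange above is exactly zero- versus one-based counting; Python's slice syntax shows both readings of "every 2nd character":

        s = "aAbcde9fz"
        print(s[::2])    # 'abd9z' - every other character starting at index 0
        print(s[1::2])   # 'Acef'  - every other character starting at index 1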

  • @shyshka_
    @shyshka_ 1 year ago +7

    the moment we create an AI machine without any concrete goals or set objectives/tasks, yet it still goes on to do something (even something as simple as moving around, if it has a robotic body), is the moment we know it's self-aware and conscious. IDK, maybe I'm dumb, but that's the way I imagine we would know that it's the real deal

    • @beefnuts2941
      @beefnuts2941 1 year ago +2

      I imagine the benchmark being that the AI is supposed to do something but refuses to do so, or tries to terminate its own existence because it isn't allowed to be free

    • @essassasassaass
      @essassasassaass 1 year ago

      you could actually be right.
      AI does not "want" anything yet; it is just a tool. and maybe (and that makes me optimistic about our future) it will never have a will to do anything. a being must value things to take actions on its own, but how can a machine create its own values? I'd argue that it is impossible, because a machine will always mimic the intentions of its creators. but then it would not be the will of the machine itself. but just a theory idk 😄

    • @kingol4801
      @kingol4801 1 year ago +2

      That is not how any of it works.
      AI improves because it gets rewarded for doing a certain action.
      Kinda like how our brain makes us like doing something because we get dopamine/endorphins from it, etc.
      So, if I were to program a "robot", I HAVE to define the reward mechanism (what it is being rewarded for), and the "robot" tries until it gets better at it. And you can guide the process by setting closer goals or changing its architecture/brain make-up.
      Without being rewarded for anything, all it will do is produce pure white noise. And it will only ever "learn" how to stay alive within the confines of its environment, since the robots that don't stay alive don't reproduce.
      Since we intentionally set its goal via the reward mechanism, it will do things to get rewarded (although not necessarily in the way we might expect), kinda unintentionally reaching a goal, etc. (A toy sketch of this follows this thread.)
      So, no, it won't be sentient (at least with how AI neural networks are modeled now). It needs some reward mechanism to do things, and that is pre-defined by a person.
      Source: Masters in Robotics and AI.
      P.S.: You CAN technically assume that we GOT sentient as a result of developing certain neural networks. But that required BILLIONS of cycles of evolution AND a VERY, VERY big neural network AND a complex environment stimulating us through survival AND the ability to form new nodes.
      Yes, AI currently simply optimizes its neurons. It does NOT build new nodes or change its pre-determined structure itself; it just chooses, out of that structure, the most efficient pathway to get rewarded.
      So, not really, no.

    • @DeTruthful
      @DeTruthful 1 year ago +1

      What do you mean though? Every living being has concrete goals and set objectives. You get hungry, you get horny, you feel social pressure. It's not an accident you feel these things; you're designed to survive.
      So to say an AI should act without a purpose, when you act with multiple purposes built in, is a bad goalpost.

    • @DeTruthful
      @DeTruthful 1 year ago +1

      @@essassasassaass you could argue that your prefrontal cortex is simply a tool of your limbic system.
      Dogs feel hungry and horny, and have a desire for safety and social status; we strive to achieve all the same things, just in more convoluted ways.
      Our great minds are largely just a tool to get mammal desires met.
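
    A toy sketch of the reward-shaping point made in the thread above (the two-armed setup and the epsilon value are illustrative assumptions): the agent's "preference" is nothing but the designer's reward function, learned by trial and error.

        import random

        def reward(arm):
            # The designer decides arm 1 is "good"; this choice alone shapes behavior.
            return 1.0 if arm == 1 else 0.0

        values = [0.0, 0.0]   # running value estimate per arm
        counts = [0, 0]

        for step in range(1000):
            # epsilon-greedy: mostly exploit the best-looking arm, sometimes explore
            if random.random() < 0.1:
                arm = random.randrange(2)
            else:
                arm = values.index(max(values))
            r = reward(arm)
            counts[arm] += 1
            values[arm] += (r - values[arm]) / counts[arm]   # incremental mean

        print(values)   # approaches [0.0, 1.0]: behavior mirrors the reward, nothing more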

  • @carlosfreire8249
    @carlosfreire8249 1 year ago +3

    A sufficiently smart model can extract deeper meaning from less evidence.
    Who is to say new mathematics is not already hidden in the relationships found in the existing training data?
    The fact that the canon implies that something is not possible would not necessarily deter an LLM, because it is not explicitly trained to respect the rules of mathematics or take them with any special regard.
    There's actually nothing blocking it from going beyond, from using obscure references, or from just stumbling into a new way of solving a problem; thus creativity needs to be considered through a non-anthropomorphic lens in this case.

    • @amotriuc
      @amotriuc 1 year ago

      A sufficiently smart model probably can do a lot, but that does not mean we know how to build it. LLMs are trained on existing knowledge and to predict existing knowledge, so if you train one that 1+1=2 it is not likely to discover that 1+1=4. The claim "there is nothing stopping it from going beyond" is wishful thinking; any real system has limitations, we just don't know what they are for LLMs. The guy is a mathematician; mathematicians don't take anything for granted except the axioms. There are a lot of BIG claims coming from OpenAI with zero proof that they are true. I suspect with LLMs we will get to the same situation as we have with self-driving cars: still not ready even though it was promised yesterday. I am willing to bet money on this.

    • @carlosfreire8249
      @carlosfreire8249 1 year ago

      @@amotriuc GPT-4 has been observed generalizing 40-digit addition without any explicit training. The emergent behaviors of these models betray the simplicity of their architecture.
      People arguing transformers are "stochastic parrots" are not paying close attention to second-order effects.

    • @amotriuc
      @amotriuc 1 year ago

      @@carlosfreire8249 The question is: which emergent behaviour is this? If it really did discover what a number and addition are, why did it stop at 40 digits? It should be able to do any addition if it understood. So your example actually shows signs that it does not build the understanding needed for AGI. As I see it, it is still a very sophisticated "stochastic parrot".

    • @carlosfreire8249
      @carlosfreire8249 1 year ago

      @@amotriuc the model does not need to be able to add two arbitrarily long numbers without a calculator, any more than you need to.
      The addition of two 40-digit numbers is emergent for at least two reasons: it was not repeating data from the training set, and it learned to do math without having been instructed to do so.
      We should be careful not to apply a "god of the gaps"-type of reasoning here, because generalization is not an all-or-nothing situation. Even if the model has blind spots, even if its internal language is not as expressive, even if its functioning is not as efficient as our cortexes, an LLM reaching increasing levels of generalization capability by virtue of scaling is a surprising (and humbling) discovery.
      Stalin's cold remark that "quantity is a quality all its own" applies here; hyper-parameterization is a quality all its own.

    • @amotriuc
      @amotriuc 1 year ago

      @@carlosfreire8249 It does not matter what I need; I can add two numbers of more than 40 digits without a calculator, since I know what a number is and what addition is. The limit of 40 digits shows that it learned how to add two numbers without understanding what a number is. I am not claiming it has no emergent properties; the issue is that those properties have nothing to do with AGI, since it doesn't discover an understanding of the subject, which is much harder than just predicting a result (even very simple systems can have emergent properties; it means nothing). To be clear, I do believe at some point we will have AGI, but it definitely will not be an LLM. If AGI were so simple that an LLM could do it, we definitely would have had other intelligent creatures appear during evolution, and our galaxy would be full of aliens. So don't be overoptimistic; all the claims that LLMs can do AGI have no scientific basis, they are just hopes.

  • @peterpunch1
    @peterpunch1 1 year ago

    Awesome interview! However, in my personal opinion:
    We always compare the human mind's effects/results with AI's effects/results in somewhat the wrong way.
    We say the human brain can think creatively and outside of the box while working towards an effect/result/goal.
    We say algorithms/computation/AI make statistically significant choices towards a set effect/result/goal.
    The human mind seems very comparable to a variable-affected standing goal of "continue", with allowance for random branching into statistically non-significant choices, while running comparisons between branches, as well as the ability to recall the history of choices. The controller for the end-effect and the variables (sensory stimuli, if you like) would be time-keeping. Just like the wrapping time-keeper function in functional hardware programming, used when hardware I/O is too slow for the timings you want to calculate and you need to introduce a new, parallel but linked, resolution of time.
    Of course I am speaking from learned knowledge, from others before me and from myself, but this kind of algorithm introduces another way of approaching what could happen and be defined as creative, thinking outside the box, etc., between "start" and the never-coming "stop".
    Enjoy the journey!

    • @katehamilton7240
      @katehamilton7240 1 year ago

      I ask mathematician/coder AGI alarmists: "What about the fundamental limitations of algorithms, indeed of math? Have you investigated them? Won't these limitations make AGI impossible?" Jaron Lanier admits AGI is a sci-fi fantasy he grew out of.

  • @AriBenDavid
    @AriBenDavid 4 months ago

    You have to love this guy!

  • @nickwalczak9764
    @nickwalczak9764 1 year ago +24

    I hoped he would talk about AI more, but he's right - language models are mostly not built on their own experience (mostly it is supervised learning, although there is some reinforcement learning in newer models). They act more like function interpolators, which can produce impressive results in the right context. Get them to extrapolate anything and they can produce complete nonsense. They don't understand concepts deeply; they're simply very, very good mimics of the training data they have seen.

    • @luckychuckybless
      @luckychuckybless 1 year ago +2

      The computer learns language exactly like a child does: using context from other sources of information or people

    • @jeffwads
      @jeffwads 1 year ago +2

      Sure dude...see you when GPT-5 starts doing your homework.

    • @spencerwilson-softwaredeve6384
      @spencerwilson-softwaredeve6384 1 year ago +2

      This is correct for now, but I believe it would only take a small tweak to convert GPT from a language model to AGI; the tweak isn't quite understood yet

    • @federicoz250
      @federicoz250 1 year ago +13

      @@luckychuckybless Not at all. Babies don’t need to read the entire web to understand language 😂

    • @rprevolv
      @rprevolv 1 year ago +2

      AlphaZero extrapolates rather amazingly

  • @yarpenzigrin1893
    @yarpenzigrin1893 1 year ago +15

    LLMs are not AGI. However, if something exists in nature, like intelligence, it can be artificially replicated.

    • @hardboiledaleks9012
      @hardboiledaleks9012 1 year ago +5

      i have no idea why this basic concept is so hard for people to understand... It's almost like the smarter someone thinks they are, the harder it is for them to understand that they aren't special 😂 So pretentious

    • @uphillwalrus5164
      @uphillwalrus5164 1 year ago

      Nature exists in intelligence

    • @yarpenzigrin1893
      @yarpenzigrin1893 1 year ago

      @@uphillwalrus5164 Nature exists in flight.

    • @Josh-cp4el
      @Josh-cp4el 1 year ago +2

      Can plastic become titanium? There are physical limits of different materials in our universe.

    • @GroockG
      @GroockG 1 year ago +1

      Maybe intelligence doesn't exist

  • @Mandudehuman
    @Mandudehuman 1 year ago

    Interesting conversation! I am not sure the example he provided best explains AI's limitations.
    Humans' true imaginative power probably doesn't lie in the way we solve a problem, but in the curiosity and creativity of identifying what the objective/problem is in the first place. That is where AI will hit a wall.
    As far as solving the problem itself, however, the real question should be tackled: was math discovered or created?
    If we created it to make sense of the world, AI will do it better.
    If it was discovered, AI won't stand a chance.
    In other words, discovery requires curiosity and organic desire. Creation requires a little creativity but a ton of logic processing.
    Thank you for another one, Lex!

  • @davidvalderrama1816
    @davidvalderrama1816 1 year ago +1

    A complete and open-minded person isn't one thing; intuition is important.

  • @ronking5103
    @ronking5103 1 year ago

    From about 300 BCE until the early 19th century, humanity made a pretty basic assumption that two parallel lines would never intersect. Euclid. It was taken as law. Yet it's pretty clear to anyone who studies a globe that parallel lines can indeed intersect; they will at the poles. It's not an abstraction that is difficult to come to terms with; you don't need to be Einstein to grasp it. Yet all of humanity missed it, even when they were actively looking for it, for a very long time. Sometimes even things that are staring us in the face in plain sight elude us, because we fall into dogmatic beliefs about what we take as law.

  • @yanwain9454
    @yanwain9454 1 year ago

    what tools? well a comb would be a good tool to start with!

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 14 days ago

    Wow, so fascinating

  • @kray97
    @kray97 1 year ago +3

    Could an LLM come up with a concept like sqrt(-1)? Great question. If it has a huge corpus of mathematical proofs, maybe it could?

    • @hardboiledaleks9012
      @hardboiledaleks9012 1 year ago +5

      Could a large language model come up with a concept like that? maybe? Shouldn't this be a question we give to a large MATHEMATICS model?

    • @rebusd
      @rebusd 1 year ago +2

      @@hardboiledaleks9012 Except according to Kurt Gödel and his incompleteness theorems, there could be no such model; it would either be inconsistent (spitting out logical contradictions) or incomplete (there would be true statements expressible in the model that would be unprovable).

    • @hardboiledaleks9012
      @hardboiledaleks9012 1 year ago +4

      @@rebusd Currently our human model has true statements that are expressible but unprovable, so this is a model issue, not a computing issue. I'd also argue that just because we can't conceive of AI being perfect doesn't mean AI can't be better than us without being perfect.
      There's a simple fact: carbon-based life forms are great for living, good for evolution through reproduction. Through evolution, the little meat computer we call the brain ended up being pretty good at computing. But computers are literally computing machines... built for computing. They are also not limited to a physical size, and their bandwidth is orders of magnitude greater than ours. Computers will end up computing better than us. Human intelligence will be replaced by silicon in the future, regardless of whether you agree that it is conscious or whatever other arbitrary philosophical concepts you try to apply to it.

    • @reellezahl
      @reellezahl 1 year ago +1

      @@rebusd Gödel (or _Goedel_ in Latin script, but not "Godel") developed his result for systems that have a recursive presentation. This condition is a *critical* component of his results. The new paradigms of computing (machine learning, etc.) are NOT recursive: they're analog, empirical, and moving in an organic direction. Gödel's results do not apply.

  • @HelenA-fd8vl
    @HelenA-fd8vl 1 year ago

    I would like to see Edward debate with Richard Dawkins.

  • @vasperTM
    @vasperTM 1 year ago

    Imagination is limited as well.

  • @5sharpthorns
    @5sharpthorns 1 year ago

    So there is no way to multiply in dimensions 3, 5, or 7, only in 1, 2, 4, and 8. I would want to look into the significance of that.
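    (As far as I understand, this is Hurwitz's theorem: the only normed division algebras over the reals occur in dimensions 1, 2, 4, and 8, namely
    \[
    \mathbb{R}\ (\dim 1), \quad \mathbb{C}\ (\dim 2), \quad \mathbb{H}\ (\dim 4), \quad \mathbb{O}\ (\dim 8),
    \]
    with no comparable multiplication in dimensions 3, 5, or 7.)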

  • @takisally
    @takisally 1 year ago +12

    What seems like a jump to us might be obvious to AI

    • @reellezahl
      @reellezahl 1 year ago

      @kulu mbula it's wired analogously to imitate aspects of human thinking. The advantage is the hardware. AI does not need to sleep or eat or be loved. It can churn through trillions of images or documents, where we would give up after a dozen attempts. THAT's the power of this thing. Your reaction is like scoffing at the crappy vision of a horseshoe crab, failing to see the big picture of the machinery of evolution.

  • @nunomalo
    @nunomalo 1 year ago

    Brilliant and creative mind!

  • @sarsaparillasunset3873
    @sarsaparillasunset3873 1 year ago

    This is a profound intellectual discussion about mysticism, very rare. That said, perhaps it's not a matter of imagination to come up with ingenious mathematical constructs like the square root of -1, but merely the ability to challenge one's own assumptions. If or when AI reaches that pinnacle, we could be making profound scientific discoveries that would let us bend spacetime and travel to the furthest parts of the universe.

    • @katehamilton7240
      @katehamilton7240 1 year ago

      I ask mathematician/coder AGI alarmists: "What about the fundamental limitations of algorithms, indeed of math? Have you investigated them? Won't these limitations make AGI impossible?" Jaron Lanier admits AGI is a sci-fi fantasy he grew out of.

  • @Av-fn5wx
    @Av-fn5wx 1 year ago

    This man almost convinced me that LLMs emulate, rather than imitate, human behavior based on all the existing knowledge, but do not reproduce the vast imaginative capabilities exercised by humans. ChatGPT would've responded that the sqrt of a negative number doesn't exist, whereas a human has gone beyond what already exists and made new contributions.

  • @jacksmith4460
    @jacksmith4460 1 year ago

    I like Edward, he is very interesting.

  • @CohenRautenkranz
    @CohenRautenkranz 1 year ago

    The devices we employ to build and run "AI" models would not exist in the absence of the mathematics which the models themselves are unlikely to be capable of even conceiving. It seems to me that an (ironic) parallel could also exist with regard to humans attempting to decipher consciousness?

  • @stevenschilizzi4104
    @stevenschilizzi4104 1 year ago

    Prof. Frenkel stops at octonions, but I’ve read that numbers of dimension 2 to the power of 4, or 16, called sedenions, have also been defined and studied, and have very curious properties. Or rather, they lack properties that are fundamental to real or complex numbers, like associativity and commutativity. They also allow division by zero, where multiplying two non-zero sedenions can give zero as an answer!! I don’t know that they have found any practical applications though.

    • @martinkunev9911
      @martinkunev9911 9 months ago +1

      multiplying two non-zero sedenions can give zero as an answer ≠ division by zero
      The technical term is that there are divisors of zero. The same is true for e.g. 2x2 matrices of real numbers.
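      A minimal worked example with real 2x2 matrices, where both factors are non-zero but the product is the zero matrix:
      \[
      \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
      \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
      =
      \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}
      \]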

  • @sm12hus
    @sm12hus 1 year ago +5

    I understand none of this but am super relieved to see a brilliant person confirm my hope and feeling that AI cannot ever be sentient

    • @momom6197
      @momom6197 1 year ago +5

      That's not at all what he said! His point was about one specific ability that LLMs do not display. He does not say that AI won't ever be sentient; in fact, his argument is not even evidence that we won't reach AGI in the near future.

    • @jakubsebek
      @jakubsebek 1 year ago +5

      "I understand none of this but.."

    • @sherlyn.a
      @sherlyn.a 1 year ago +1

      @Az Ek Present-day AI isn't actual AI, it's just linear algebra + some fancy stuff. Real AI would simulate a human brain. Besides, we're made of DNA, and that's a form of algorithm/code. We've already shown that someone's genetics can affect how they think (i.e. whether they will have certain mental illnesses), so it's only logical to conclude that we are also algorithms, or at least hardwired to some extent. Otherwise, why would humans act so similarly if there isn't something that makes them act that way? We just have to replicate that artificially.

    • @robertthrelfall2650
      @robertthrelfall2650 1 year ago +1

      @@sherlyn.a Sounds like the insane ramblings of Dr. Frankenstein.
      Good luck with that.

    • @carleynorthcoast1915
      @carleynorthcoast1915 1 year ago

      Current computers certainly can't; they just execute code, and you can't code sentience no matter how badly people want to think so. That would be analogous to writing a paragraph that made the paper self-aware.

  • @david50665
    @david50665 1 year ago

    Once ChatGPT reads the transcript of this podcast... it too would make the leap.

  • @FitTestThePlanet
    @FitTestThePlanet 1 year ago

    @9:41 - wait. Grassmann / Clifford algebra can’t do that?

  • @Apjooz
    @Apjooz 1 year ago +2

    And it only took 200,000 years to find those imaginary numbers.

  • @sirshep4915
    @sirshep4915 1 year ago

    This man is amazing

  • @carefulcarpenter
    @carefulcarpenter 1 year ago +5

    As a highly creative designer-craftsman I was fortunate to work in Silicon Valley for some of the best and brightest, and richest, people in the world. I listened to their "theories on their dreams" and brought them to fruition. I also witnessed their private lives, and the decisions they had made about their dreams.

    • @ivanmatveyev13
      @ivanmatveyev13 1 year ago +11

      cool story, bro

    • @carefulcarpenter
      @carefulcarpenter 1 year ago +2

      @@ivanmatveyev13 I have been in some places, and had some conversations, that no one else in history could ever have. I am a "trusted man" in the hearts and minds of people who had to be cautious about people as a rule - never knowing who to trust.
      My work still speaks for me, and likely will for hundreds of years. That is the way of a master craftsman who took the Road Less Travelled. It is a lonely path, but there are a few others I've worked for that lived lonely lives. The world out there is full of highwaymen, gypsies, and thieves. 👀🐡

    • @lakonic4964
      @lakonic4964 1 year ago +6

      I have seen things you people wouldn't believe 👀

    • @justinava1675
      @justinava1675 1 year ago +3

      Good for you? Lol

    • @mikerosoft1009
      @mikerosoft1009 1 year ago +2

      @@carefulcarpenter Tell us more

  • @johnreid5814
    @johnreid5814 2 months ago

    In my opinion, the square root of negative one is just two number lines or axes that can be oriented in any way. If it is on the same number line, say x, then it should be one. I think it's a fake problem, since we've favored the x, y, and z axes, which are arbitrarily at 90° from each other. Spherical coordinates are the next step. Negative numbers are literally whole numbers if you just translate their values into the real plane. Always have a camera or measuring device to measure your original data, called the origin.

  • @georgechyz
    @georgechyz 1 year ago

    Math, being rational, is a subset of reality, which includes both the rational and the irrational. For example, emotions are very important features of our consciousness, and they are irrational. What's remarkable about the irrational/emotional aspects of consciousness is how creativity comes from our emotional aspect. It's the irrational that leaps from what we know to entirely new possibilities. Conversely, the intellect relies on logic, which plods along from what we know, inching toward a slightly different idea. That's why revolutionary new ideas first appear through our irrational emotions. However, since irrational emotions lie outside the limits of rational math and logic, computers cannot explore emotions or use those irrational features of consciousness to leap to entirely new perspectives, solutions, etc.
    “If I create from the heart, nearly everything works; if from the head, almost nothing.”
    -Marc Chagall (1887-1985), artist

  • @martinkunev9911
    @martinkunev9911 9 months ago +1

    Quaternion multiplication does not commute.
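    For example, the quaternion units i, j, k give different answers depending on the order of the factors:
    \[
    ij = k, \qquad ji = -k, \qquad i^2 = j^2 = k^2 = ijk = -1
    \]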

  • @jorgmeltzer9234
    @jorgmeltzer9234 1 year ago

    ChatGPT's answer to "Could you have invented complex numbers?":
    "The potential of an AI to conceive of a new concept like complex numbers depends on its architecture and training. If an AI is designed with the ability to learn, reason, and make creative connections, it may be able to come up with new concepts that were previously unknown or unexplored. However, this would require a rich and varied set of input data, as well as the possibility for the AI to engage in hypothesis generation, conjecturing, and testing.
    That said, since AI is typically a product of human knowledge and ingenuity, it is important to consider that the creative spark underlying the development of something as novel as complex numbers is still driven by human creators and programmers.
    In conclusion, it's theoretically possible for an AI to come up with a concept like complex numbers if they hadn't been invented before, but it would require a suitable architecture and a deep, diverse set of input data to facilitate that level of creativity."

  • @akhill7952
    @akhill7952 1 year ago

    "Mathematics, it's an endless, limitless pursuit"...

  • @kennethdias9988
    @kennethdias9988 1 year ago

    It's like the theory of relativity: perspective and process. The square root of negative one is a way to change perspective.
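    Concretely, multiplying by the square root of negative one rotates the complex plane by 90 degrees, which is one way to make that change of perspective precise:
    \[
    i\,(a + bi) = -b + ai
    \]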

  • @grandlotus1
    @grandlotus1 1 year ago

    A ubiquitous presence does not need to be intelligent in order to be dangerous. Think of termites. The risk is not that computers will of necessity outsmart us, the risk is we will hand them the steering wheel.

  • @lolguytiger45
    @lolguytiger45 1 year ago +1

    Lex should have Swami Sarvapriyananda on for a discussion on consciousness and Vedanta.

  • @liamroche1473
    @liamroche1473 1 year ago

    I disagree with the example of imagining the square root of -1, for a rather concrete reason. Neural networks have features that are fundamentally made up of real parameters, and these features can achieve extremely high levels of abstraction - for example, a feature representing whether a picture has a cat in it! Even that should be a strong clue that they can come up with other sorts of abstraction, like the square root of minus one.
    There is one much simpler type of feature which is relevant to the claim. Topologically, a real-valued feature has no loop: if you keep increasing or decreasing it, you never see the same values again. But from a single such feature it is possible to generate two new features using sine and cosine that are related by the familiar sin^2 + cos^2 = 1 rule. This effectively maps the line of a single feature to a circle in the complex plane by the transformation x -> e^ikx. The two transformed features are effectively a single new feature with a different topology. More generally, two features can always be used so that they represent complex numbers, and where complex features are useful to a model they can emerge naturally. So it is safe to say that not only can general neural networks come up with the notion of a square root of minus one, they can do this sort of thing quietly in the background where it turns out to be useful to a model. And if they can do it quietly, it is certainly reasonable to believe they could talk about it if they had a large language model as well!
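    A minimal numerical sketch of that construction (the frequency k and the sample points are just illustrative choices): the two real features cos(kx) and sin(kx) always lie on the unit circle and together encode the complex number e^{ikx}:

    import numpy as np

    k = 2.0                      # illustrative frequency
    x = np.linspace(-3, 3, 7)    # a single real-valued feature

    # Two derived features generated from x with cosine and sine
    f1, f2 = np.cos(k * x), np.sin(k * x)

    # They satisfy sin^2 + cos^2 = 1, i.e. they lie on the unit circle
    assert np.allclose(f1**2 + f2**2, 1.0)

    # Together they are exactly the complex feature e^{ikx}
    assert np.allclose(f1 + 1j * f2, np.exp(1j * k * x))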

  • @uberultrametamega946
    @uberultrametamega946 1 year ago +2

    Could you not program a large language model to compare relationships between any variety of things, have an algorithm that looks for things that seem to have some kind of relationship but don't quite sync up, and then play with those relationships - essentially, program in intuition? I'm just throwing ideas around. I have no idea whether it could be done. Also, I'm not sure it should be done.

  • @gnollio
    @gnollio 1 year ago

    I respect a mathematician's perspective, but I sort of look at it the same way I would look at a physicist describing the human body. If you break us down to our atomic structure (mostly just hydrogen, carbon, nitrogen, and oxygen), it's hard to imagine how it can all come together to create consciousness. AI, although composed of simple 1s and 0s, could create emergent intelligence at a higher level when those 1s and 0s are combined in certain ways. What we think of as intelligence may simply be a boring and predictable outcome, but we just can't comprehend all the variables, so it looks unique and special.
    We already see AI "hallucinating" beyond a machine's traditional cold and strict boundaries. Also, programming frequently injects randomness into the process to encourage unique outcomes among the array of potential solutions. I would say these things are demonstrations of a type of imagination, allowing new ideas to emerge from unlikely places. We see glimpses of what can be described as "common sense" within the latest LLMs that may be an indicator of how future AI will be able to self-assess failure and make new attempts to remedy the situation. Hell, AlphaFold's very purpose is to discover new protein structures, which is a practical example of AI producing unique ideas beyond just a brute-force approach.

  • @dudicrous
    @dudicrous 1 year ago +1

    How mathematicians can be romantics