How To Prepare AI For Uses In Science

  • Published May 1, 2024
  • Is AI ready for use in the sciences? And if not, how can we get there? Stephen Wolfram, Chairman of Wolfram, spoke at Imagination In Action's 'Forging the Future of Business with AI' Summit about why AI is better with LLMs and how we can use AI usefully in science.
    Subscribe to FORBES: th-cam.com/users/Forbes?s...
    Fuel your success with Forbes. Gain unlimited access to premium journalism, including breaking news, groundbreaking in-depth reported stories, daily digests and more. Plus, members get a front-row seat at members-only events with leading thinkers and doers, access to premium video that can help you get ahead, an ad-light experience, early access to select products including NFT drops and more:
    account.forbes.com/membership...
    Stay Connected
    Forbes newsletters: newsletters.editorial.forbes.com
    Forbes on Facebook: forbes
    Forbes Video on Twitter: / forbes
    Forbes Video on Instagram: / forbes
    More From Forbes: forbes.com
    Forbes covers the intersection of entrepreneurship, wealth, technology, business and lifestyle with a focus on people and success.

Comments • 59

  • @headofmyself5663
    @headofmyself5663 15 days ago +28

    Wow, Joscha Bach and Stephen Wolfram on one stage. Is there more of this discussion?

  • @ReflectionOcean
    @ReflectionOcean 15 days ago +8

    By YouSum Live
    00:00:00 Science and AI limitations in predicting complex systems.
    00:00:30 AI struggles with extrapolation beyond trained data.
    00:01:41 Language simplicity aids AI success in text analysis.
    00:02:07 AI's limitations in creativity and originality.
    00:06:53 Computational exploration of vast possibilities by humans.
    00:09:23 Computational language as a tool for formalizing the world.
    00:14:52 The importance of computational thinking and automation in work.
    00:15:20 Leveraging AI as an interface for computational tasks.
    00:16:48 Training AI models for specific computational tasks.
    00:17:45 Weak form of computation in LLMs.
    00:17:50 Challenges in guiding proofs using LLMs.
    00:18:00 Limitations of LLMs in mathematical proofs.
    00:18:43 LLMs excel at homework but struggle at the edge of human knowledge.
    00:19:02 LLMs prone to errors in math without guidance.
    00:19:37 OpenAI's focus on long-form reasoning surpassing human capabilities.
    00:20:20 Building systems to extend human capabilities.
    00:20:36 Exploring the fundamental workings of machine learning.
    00:21:26 Balancing computational capabilities with human needs.
    00:22:11 Challenges in developing effective AI tutoring systems.
    00:22:28 Goal for LLMs to understand and assist human learning.
    00:23:00 Conceptualizing beyond human intelligence and AI capabilities.
    By YouSum Live

  • @mikey1836
    @mikey1836 15 days ago +6

    My AI predicted the text “so to speak”. Only joking, I love Stephen’s videos. He’s a true genius.

    • @Mr.Monta77
      @Mr.Monta77 13 days ago

      I find 99% of all the TH-cam 'geniuses' to reflect admiration rather than what the term actually implies. But in your case, yes, Stephen Wolfram is a genius in the true sense of the word.

  • @manit77
    @manit77 15 days ago +3

    Wolfram is our modern day genius.

  • @eyykendrick
    @eyykendrick 15 days ago +7

    Anyone know where to find the full talk? Thank you

    • @J3R3MI6
      @J3R3MI6 14 days ago +2

      Yeah I’m looking for the same

  • @XenoZona
    @XenoZona 13 days ago +2

    I liked the part where he said "computational"

  • @7350652
    @7350652 11 days ago

    thanks

  • @VivekYadav-ds8oz
    @VivekYadav-ds8oz 15 days ago +8

    Also, it's pretty discrediting to LLMs to say they are only good because language has (easy) grammar. A lot of tests on LLMs show that they have a (though limited, incomplete) world model. They understand basic mathematics and some basic things about our world.

    • @jcozyyt
      @jcozyyt 15 days ago

      I think the world model that LLMs have is a fundamental part of language, and it shows there are deeper underlying patterns in language that hint at a world view. I think that's what we are seeing with this emergent world view in LLMs.

    • @user-wc2lm2sm6m
      @user-wc2lm2sm6m 15 days ago

      It doesn't "understand" anything; it is able, through its massive training data, to recall information it has seen before and piece it together in a legible format.

    • @jaimeberkovich
      @jaimeberkovich 10 days ago

      @@user-wc2lm2sm6m what is human understanding if not a bio-neural network taking in training data to create a cybernetic feedback system?

  • @jurycould4275
    @jurycould4275 8 days ago

    Thank you Stephen for being a scientist and a man of truth!

  • @GerardSans
    @GerardSans 14 days ago +4

    It’s encoding, not compression. The difference is subtle but important for technical rigour, and it explains the decoding, which holds the generative capacity. "Decompression" wouldn’t be considered correct either; it’s called decoding.

    • @lok2676
      @lok2676 12 days ago

      Very good observation

  • @dr.mikeybee
    @dr.mikeybee 15 days ago

    Semantic space has a shape. It's a model, so of course it has a similar shape to what is being modeled. I like the idea that only that which is simple or computationally reducible can be modeled sufficiently in current scale foundation models. Rigorous agentic behavior is necessary to deal with computationally difficult activation pathways.

  • @pauldannelachica2388
    @pauldannelachica2388 16 days ago

    ❤❤❤❤❤❤

  • @tripp8833
    @tripp8833 10 days ago

    17:00

  • @johnkintree763
    @johnkintree763 14 days ago

    So, there is a plugin for ChatGPT so it can access Wolfram resources. How about an interface to Wolfram resources that can be used by any language model?

  • @shanecormier1
    @shanecormier1 12 days ago +1

    Computationally speaking of course.

  • @johnkintree763
    @johnkintree763 14 days ago

    Before we can expect an AI to accurately predict meaningful events, it probably needs to be able to accurately describe the present, and prior events. A graph structure is probably a good way to represent the present and the past.
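    The graph idea in this comment can be sketched minimally. Everything here is hypothetical illustration, not anything from the talk: an `EventGraph` class holding timestamped event nodes and labeled relation edges, with a `history()` method that orders events in time to "describe the present and prior events".

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class EventGraph:
        # id -> (description, time); hypothetical minimal node store
        events: dict = field(default_factory=dict)
        # (source_id, target_id, relation) triples
        edges: list = field(default_factory=list)

        def add_event(self, eid, description, time):
            self.events[eid] = (description, time)

        def relate(self, source, target, relation="precedes"):
            self.edges.append((source, target, relation))

        def history(self):
            # All known events ordered by time: past leading up to the present
            return sorted(self.events.items(), key=lambda kv: kv[1][1])

    g = EventGraph()
    g.add_event("e1", "transformer paper published", 2017)
    g.add_event("e2", "large model trained", 2023)
    g.relate("e1", "e2", "enabled")
    ```

    A real system would use a graph database or RDF triples, but the shape is the same: nodes for events, edges for relations, time as an attribute.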

  • @wawalkinshaw
    @wawalkinshaw 15 days ago +1

    😊

  • @Ramkumar-uj9fo
    @Ramkumar-uj9fo 10 days ago

    I cannot challenge Stephen, but Max is saying it can give a symbolic equation like sine theta.

  • @Ramkumar-uj9fo
    @Ramkumar-uj9fo 10 days ago

    I understood NKS is based on emergence. Sabine is supporting it now.
    Is Stephen Wolfram's NKS based on emergence? Say yes or no.
    Yes.
    ChatGPT
    I understood you after the movie Automata

  • @obi_na
    @obi_na 14 days ago

    Map your perception onto the Transformers Perception

  • @Ramkumar-uj9fo
    @Ramkumar-uj9fo 10 days ago

    Dream: Wolfram and Tegmark talking to each other.

  • @siddharthpotti203
    @siddharthpotti203 15 days ago

    There is no specificity regarding the metrics of measuring computational intelligence and representing it.

  • @facozu2023
    @facozu2023 14 days ago

    2:44 Textadistics?

  • @Morris_MK
    @Morris_MK 15 days ago +1

    It would be helpful if he could produce a simple example in which an LLM plus his calculation engine is better than an LLM alone.

  • @En1Gm4A
    @En1Gm4A 15 days ago +1

    Sometimes I wonder why people talk about stuff so clearly and still miss the point 😂😂

  • @ydmoskow
    @ydmoskow 13 days ago

    What about protein folding? AI was wildly successful there.

  • @mrtienphysics666
    @mrtienphysics666 15 days ago +1

    Hype vs Reality

  • @amelieschreiber6502
    @amelieschreiber6502 15 days ago +3

    “If it has linear activation functions it will predict a linear continuation of a sine wave”, really?! Not sure about that one 😅

    • @VivekYadav-ds8oz
      @VivekYadav-ds8oz 15 days ago +2

      Yeah, I was very confused about that. It's very easy for a shallow neural network, say 5-6 layers deep, to quickly learn a very decent approximation of sin(x). I don't know what he meant to say there. (And yes, with ReLU activation only.)

    • @marcovoetberg6618
      @marcovoetberg6618 15 days ago +1

      @@VivekYadav-ds8oz Yes, but will it continue the sine wave outside of the data it was trained on? It will not. It can't, because none of the math of the NN is periodic.

    • @user-wc2lm2sm6m
      @user-wc2lm2sm6m 15 days ago +1

      @@VivekYadav-ds8oz ReLU is not a linear activation function, though.
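      The extrapolation point debated in this thread is easy to check numerically: a ReLU model is piecewise linear, so past its last kink it continues as a straight line, never a periodic wave. A minimal sketch of that behavior (a hypothetical setup using fixed random hinge features with a least-squares readout, rather than a trained network) fits sin(x) well inside the training interval and fails far outside it:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # 200 ReLU "hinge" units ReLU(s * (x - c)) with kinks c inside the
      # training interval, plus an intercept column.
      n = 200
      c = rng.uniform(-2 * np.pi, 2 * np.pi, n)   # kink locations
      s = rng.choice([-1.0, 1.0], n)              # unit orientations

      def features(x):
          x = np.asarray(x, dtype=float)
          h = np.maximum(0.0, s * (x[:, None] - c))
          return np.hstack([h, np.ones((len(x), 1))])

      # Least-squares readout fit to sin(x) on the training interval only.
      x_train = np.linspace(-2 * np.pi, 2 * np.pi, 2000)
      coef, *_ = np.linalg.lstsq(features(x_train), np.sin(x_train), rcond=None)

      def predict(x):
          return features(x) @ coef

      in_range_err = np.max(np.abs(predict(x_train) - np.sin(x_train)))

      # Beyond the last kink every unit is frozen on or off, so the model is
      # affine there: it continues a straight line while sin keeps oscillating.
      x_far = np.linspace(4 * np.pi, 6 * np.pi, 200)
      out_range_err = np.max(np.abs(predict(x_far) - np.sin(x_far)))
      ```

      The in-range error stays small while the out-of-range error is large, which is the distinction both sides of the thread are circling: approximation on the training support is easy, periodic extrapolation is impossible for any piecewise-linear model.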

  • @thomasschmidt9264
    @thomasschmidt9264 15 days ago +3

    LMs in science? Do LMs behave like scientists? After chatting with ChatGPT for some time, I think it behaves like a lazy student during an oral exam. The student is brilliant at using the language, maybe because he has already read some books in his life, but he did not prepare for this particular exam. So, when asked by the teacher, he produces a good-sounding answer, the best he can produce: some kind of small talk inspired by a mix of everything he has read in his life. The question is, are we (humans, scientists) all and always behaving like a lazy student? Does the ability to create or find new knowledge emerge from the size of an LM?

  • @AaBb-pp9bd
    @AaBb-pp9bd 13 days ago

    "have you know maybe 50,000 words in typical languages” THERES 170,000 words in English

  • @medhurstt
    @medhurstt 13 days ago

    My opinion is that Stephen Wolfram is struggling to understand modern transformer-based AI, and I'm not sure why, because he has actually described it in the past. While it's technically true that an LLM such as ChatGPT simply produces the next word based on statistics, it's disingenuous to say that's all it's doing, because that statement carries no implication of the profound understanding inherent in the model of every prior word, in the context of its training, leading to that choice.
    Stephen's downplaying of neural networks leaves me cold. Sorry your attempt at creating AI using graphs and tokens didn't work out, Stephen.

  • @keithschaub7863
    @keithschaub7863 16 days ago +1

    OK, so that seems wrong. A sine wave is super easy to define, and anyone can plot the next points based on the previous points, whereas for a sentence it's much harder to predict the next word (though maybe easier than we thought). Anyway, I feel that was a poor example.

    • @amitojha1085
      @amitojha1085 16 days ago +3

      Regarding the sine wave example: he meant that you have to make the machine learn trigonometry first to be able to complete the plot (which is a Herculean task). Without learning, it would just copy.

  • @jinbinongfu
    @jinbinongfu 15 days ago +6

    I love him but his conceit is exhausting

    • @Rawi888
      @Rawi888 15 days ago

      YOU JUST DON'T WATCH ENOUGH RICK AND MORTY ! YOU DON'T GET IT BRO, IT'S NOT BRAGGING IF YOU CAN BACK IT UP 🔥🔥😤😤😤💰🤑🔬🧑🏾‍🔬🥼⚗️🧫 NARCISSISTS ARE HUMANS TOO 😡🥵🙂‍↔️🤪🤨📸📸📸📸📸📸📸😵😵😵😵😵😵😵😵😵😵😂😂😂

    • @honkytonk4465
      @honkytonk4465 13 days ago

      ​@@Rawi888is that a comment or is it art?

    • @Rawi888
      @Rawi888 13 days ago

      @@honkytonk4465 inclusiveOR

  • @NickDrinksWater
    @NickDrinksWater 15 days ago

    ai will become more useful over time to help people drink water.

  • @user-fx7li2pg5k
    @user-fx7li2pg5k 15 days ago +4

    He's lying through his teeth in some areas.

  • @dzsman
    @dzsman 15 days ago +1

    This guy is overrated; he says a lot but concludes very little.

  • @OverLordGoldDragon
    @OverLordGoldDragon 6 days ago

    If ChatGPT writing code makes it boilerplate, then so is nearly all code. It out-designed humans at ML reward heuristics (the Eureka paper), for example.
    He's much more pessimistic on AI than I imagined. Disappointing.

  • @chrismai1889
    @chrismai1889 16 days ago +6

    Never heard of this guy, but he comes across as someone whose qualities do not include humility and curiosity. His idea of the genesis of human language seems rather unsophisticated. People 200,000 years ago were certainly much more concerned with animals and plants they could eat than they were with rocks. That he mentions rocks first tells me that he has not really spent a lot of time thinking about how human language could have evolved.

    • @jinbinongfu
      @jinbinongfu 15 days ago +2

      Humility no, curiosity yes

    • @udaykadam5455
      @udaykadam5455 15 days ago +7

      Bud, I haven't watched the video yet, but that's Wolfram, creator of the Wolfram language.
      He certainly knows more about language, maths, and the qualities of emergent complexity than any of us.

    • @jinbinongfu
      @jinbinongfu 15 days ago +1

      @@udaykadam5455 Hi Stephen

    • @NightmareCourtPictures
      @NightmareCourtPictures 15 days ago

      Bro people were drawing art on cave walls 50k years ago. Pretty sure rocks were not only important hunting material, but also to tell stories. I wouldn’t be surprised if rocks were worshiped like gods…
      Imagine what apes did when finding gold I wonder

  • @BibhatsuKuiri
    @BibhatsuKuiri 15 days ago

    He is too biased when he says AI is dumb; this is not a good spirit.

  • @user-yv4gg7jb2f
    @user-yv4gg7jb2f 15 days ago +1