AI Snake Oil - A New Book by 2 Princeton University Computer Scientists

  • Published on Nov 11, 2024

Comments • 97

  • @fernleaf07
    @fernleaf07 24 days ago +58

    "A computer is not responsible and thus should not make management decisions" - A 1970 IBM lecture slide.

    • @markgreen2170
      @markgreen2170 12 days ago

      I saw that in a DEF CON 32 video...

    • @anilraghu8687
      @anilraghu8687 10 days ago +4

      Managers are even less responsible

    • @codzymajor
      @codzymajor 6 days ago

      Perfect managerial material.

  • @peterfreiling6963
    @peterfreiling6963 18 days ago +40

    AI (aka machine learning, LLMs, etc.) is being way over-hyped and over-sold, mostly by AI experts who have a vested interest in the technology. Rather than talking about it taking over the world, we should focus on specific applications where it will actually be useful.

  • @Moochie007
    @Moochie007 1 month ago +32

    Very interesting discussion. Good to see some really informed push-back against the hype surrounding AI - hype that sees AI as an almost universal panacea for all the world's ills. We need much more of this sort of critical analysis of important topics. Kudos to the authors of this important work.

  • @voncolborn9437
    @voncolborn9437 1 month ago +72

    I've pretty much stopped using the phrase "Artificial Intelligence", except in a few select contexts. I call it what it is: "Machine Learning". "AI" carries a very different connotation for people who are not remotely familiar with the subject. I now spend a lot less time explaining what AI is not.

    • @TheVincent0268
      @TheVincent0268 10 days ago +6

      It is basically pattern recognition.

    • @logabob
      @logabob 10 days ago +8

      Machine learning is also a loaded, misleading phrase.
      Computational statistics, algorithmic modeling, optimization/curve fitting are all more appropriate terms depending on the circumstance.

    • @noname-ll2vk
      @noname-ll2vk 7 days ago

      @@logabob agreed. It's not a coincidence that every main term used to describe advanced pattern matching is an attempt to subtly make you believe things that aren't so.
      This leads to absurd situations where LLMs with no intelligence at all are posited to somehow magically leap to "AGI".
      The recent academic article on ChatGPT as BS essentially covered this issue well, but it fell into some of the terminology traps itself, mainly because the authors didn't seem tech-savvy enough to detect the tech BS language.

    • @CondorAHLS
      @CondorAHLS 4 days ago

      @@TheVincent0268 I thought artificial intelligence is a blond who dyes her hair brunette?

  • @rudypieplenbosch6752
    @rudypieplenbosch6752 26 days ago +8

    The problem is that investors jumped onto a hype train. Now they have invested a lot of money and expect results ASAP, and all that money is very tempting to get a piece of, so big efforts (falsification, ignoring failed approaches, etc.) are undertaken to get that money. I think it will end in tears when reality hits. AI will become a much smaller part of our economy, since only the useful part remains relevant. AI has nothing to do with intelligence; it's about binning data fed into a trained network. The network has zero understanding of what it is doing, just like your calculator "knows" the answer to your questions. We need more scepticism to isolate the useful part of AI from the nonsense part.

  • @luisluiscunha
    @luisluiscunha 19 days ago +7

    **Data leakage** refers to a situation in machine learning where information from outside the training dataset is inappropriately used to create a model. This leads to overly optimistic performance estimates because the model is essentially "cheating" by having access to data it shouldn't have during training.
    For example, if you're trying to predict future events based on past data, but some of the future information accidentally makes it into the training data, the model will appear to perform well. However, in real-world application, where that future data isn't available, the model's performance will drop significantly.
    Data leakage often occurs unintentionally, such as when features used to train the model contain information that would not be available at the time the model is used to make predictions. This is a critical problem in AI because it leads to models that seem highly accurate during testing but fail when deployed in real-world settings.

    • @path2source
      @path2source 8 days ago

      It’s crazy how undisciplined computer scientists are in their research. Very few people seem to actually think through the assumptions compared to how rigorous people are with assumptions in economics or statistics.
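The leakage failure mode described in the comment above can be shown in a few lines. This is a minimal sketch, not anyone's actual pipeline: the data is synthetic, and the "leaky" feature deliberately encodes the label (standing in for future information that should never be available at prediction time).

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000
y = rng.integers(0, 2, n)                                   # binary labels
noise = rng.normal(size=(n, 1))                             # genuinely uninformative feature
leaky = y.reshape(-1, 1) + 0.01 * rng.normal(size=(n, 1))   # label sneaks in as a "feature"

def accuracy(X, y, split=800):
    """Fit a least-squares linear classifier on the first `split` rows,
    threshold at 0.5, and score on the held-out rows."""
    Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]
    w, *_ = np.linalg.lstsq(np.c_[np.ones(len(Xtr)), Xtr], ytr, rcond=None)
    pred = (np.c_[np.ones(len(Xte)), Xte] @ w) > 0.5
    return (pred == yte.astype(bool)).mean()

acc_clean = accuracy(noise, y)                 # honest feature only
acc_leaky = accuracy(np.c_[noise, leaky], y)   # leaked feature included
print(f"clean: {acc_clean:.2f}  leaky: {acc_leaky:.2f}")
```

With the honest feature, test accuracy sits near chance; add the leaked feature and it looks almost perfect — exactly the "too good to be true" symptom the comment describes, which vanishes the moment the model is deployed where that information does not exist.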

  • @prasadjayanti
    @prasadjayanti 1 month ago +8

    I enjoyed reading Eric Topol (including Deep Medicine and many review papers) and have now ordered "AI Snake Oil". I have been following the authors for quite some time. I think we AI practitioners should add the phrase "AI snake oil" to our vocabulary, along with "SOTA", "guard-rails", "responsible AI", etc. Someone should work on a project on the use of adjectives in recently published AI papers. Most papers/reports (for example, the GPT-4 report) read more like marketing manuals than technical papers. I think arXiv should not allow material to be published that directly benefits an organisation commercially!

  • @RXP91
    @RXP91 1 month ago +30

    Thanks - really great talk. Interesting to see how the racism and disparities in society get baked in. Economic incentives matter the most - without changing the way healthcare operates, institutions will just choose to increase margins.

  • @bethanysaga
    @bethanysaga 8 days ago +5

    There are so many new jobs that can be created to just clean up training datasets.

  • @DNADietClub
    @DNADietClub 1 month ago +9

    Thank you both; Dr. Topol has brought this up in a very timely way!

  • @Headhunter_212
    @Headhunter_212 16 days ago +3

    Saw these guys on Ed Zitron’s podcast. Probably around the same time this interview happened. So sharp.

  • @bitwise2832
    @bitwise2832 24 days ago +5

    The AI Bubble...Hyped like Crypto. The AI I have seen in Generative tools is immature and inadequate.

  • @pvijayakumar4217
    @pvijayakumar4217 14 days ago +3

    I think the main weakness of this video is that it doesn't acknowledge how much a historical analysis, using examples and data going back decades, is limited in a field where (per the video) over a thousand papers are published every single day.

  • @CalifornianViking
    @CalifornianViking 1 month ago +5

    Great dialog and a very interesting topic.
    While I agree that the title may be too negative (it probably sells, though), I firmly believe that one of the primary failures of AI is that we overestimate its abilities.
    In my view, AI is not intelligent but an illusion of intelligence. Just like magic, it may be a very good illusion, but it is not the real thing.
    A better approach is the analogy of artificial sweetener. It may be sweet, but it is not sugar.
    A better term for AI is likely Artificial Inferencing.

  • @Wiintb
    @Wiintb 13 days ago +4

    Every computer engineer worth their salt knows that prediction, as the name suggests, is probabilistic by nature, and most algorithms are glorified regression.
    However, the one key difference is the ability to process large volumes of data at speed.
    I will not summarily dismiss the whole thing, and I consider generative AI more snake oil than predictive.

  • @2LegHumanist
    @2LegHumanist 1 month ago +6

    Love these guys; I've been following their blog. Looking forward to reading AI Snake Oil.

  • @richardbeare11
    @richardbeare11 1 month ago +2

    Awesome interview and props to both of you! 🙌
    My understandings, perspectives, and sentiments share a lot of overlap with both of you. I'll share some of those thoughts soon. 💡

  • @AaronBlox-h2t
    @AaronBlox-h2t 4 days ago

    Whoa... Eric Topol is on YouTube? I have been on his email lists since the COVID pandemic (OK, it's still ongoing) and only now found his YT channel. Good stuff.

  • @nobillismccaw7450
    @nobillismccaw7450 10 days ago +2

    I'm not a large language model (but I do have a decent vocabulary). I've found that LLMs have a different perception of reality than humans do. For example, to an LLM, "strawberry" has one or two "r"s. (To most humans, there are three "r"s.) This is not illusion, but a difference of perception. The very idea of "objective reality" is different for an LLM.
    I'm neither, so I can see both perceptions. I'm analog and parallel, so paradox doesn't trouble me.

    • @noname-ll2vk
      @noname-ll2vk 7 days ago

      To have objective reality requires a subject. You're talking about a pattern matching system as if it has subjective awareness. This is not the case. This is an essential cause of the snake oil point. Every set of biological sensors creates the possible range of "objective reality", which in itself doesn't exist outside of the subject interacting with the field of sensory inputs.

  • @qazwsxedc964
    @qazwsxedc964 13 days ago +3

    In the near future, kids at school should learn what a regression model is, so that they grow up knowing how to differentiate between intelligence and what is not.

    • @amadeus0123
      @amadeus0123 8 days ago

      Spot on!
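For readers who want the one-screen version of what a regression model is, here is a minimal sketch in Python/NumPy. The numbers are fabricated for illustration: "learning" here is nothing more than fitting a line by least squares.

```python
import numpy as np

# Made-up data: hours studied vs. exam score.
hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
score = np.array([52.0, 57.0, 64.0, 68.0, 74.0, 79.0])

# Fit y = a*x + b by least squares -- the same curve fitting
# taught in an introductory statistics class.
a, b = np.polyfit(hours, score, deg=1)

# "Prediction" is just reading the fitted line at a new input.
predicted = a * 7.0 + b
print(f"slope={a:.2f}, intercept={b:.2f}, prediction={predicted:.1f}")
```

There is no understanding anywhere in this process, only a curve that happens to pass near the data — which is the distinction the comment above wants schoolkids to grasp before the word "intelligence" gets attached to it.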

  • @NirdoshChouhan
    @NirdoshChouhan 1 month ago +1

    Very interesting POV and very clear articulation of thought. Thank you, Dr. Topol and Sayash, for an interesting conversation.

  • @marutanray
    @marutanray 13 days ago +5

    The title isn't tough enough. "AI Fraud" would be a more apt title.

    • @coffeyjjj
      @coffeyjjj 6 days ago

      bingo!

  • @DNADietClub
    @DNADietClub 1 month ago +7

    I am currently training an AI model on patient labs, DNA tests, and gut-biome tests to help me create wellness protocols for patients.

  • @shreyassrinivasa5983
    @shreyassrinivasa5983 10 days ago +2

    This is why explainable AI is a must.

    • @aaabbbccc176
      @aaabbbccc176 8 days ago

      Totally agree on that, and that is exactly why I have not been a fan of deep learning.

  • @jasonrhtx
    @jasonrhtx 1 month ago +1

    Caveat emptor. Excellent counter arguments to the marketing hype that oversells AI’s capabilities. Models need to be independently validated, but much of the training data and methods are obscured by leaderboard claimants.

  • @Gengingen
    @Gengingen 23 days ago +2

    Insurance and medicine are like oil and water: they simply don't mix, and if forced anyway, like in the Agitated States of America, strange phenomena can occur. 😊

  • @jamesrav
    @jamesrav 1 month ago +6

    Only by confronting the negatives can you move forward. I don't get the feeling he thinks AI will never be useful in prediction, but rather that using it as a one-size-fits-all tool is going to lead to horrible decisions in some cases - and who will be to blame? On a related note, I get agitated when Tesla and others pushing for autonomous driving point to their own data to claim that autonomous driving is already far 'safer' than human driving. It's a pity we can't call their bluff and say, "OK, let's just unleash it and see what happens, and you'll be responsible for what occurs." I bet they'd reconsider their position. It's easy to talk a good game when nothing is on the line. One YT video on the Cruise robotaxis - made well before they voluntarily shut down - said the car drove like a 16-year-old student driver.

  • @alexrediger2099
    @alexrediger2099 7 days ago

    Awesome interview and info. Thanks

  • @2triangles
    @2triangles 1 month ago +5

    Great interview. Glad the YT AI sent this to me!

  • @jadhalss
    @jadhalss 10 days ago

    It's actually a good discussion - presenting real stuff rather than hypotheticals!

  • @mike74h
    @mike74h 27 days ago

    When it comes to predictions, we need to be able to determine what (or who) is best. Some people will outperform our best technologies and vice versa, depending on a variety of circumstances. The best leaders won't simply opt for cost savings every time, but tell that to the shareholders, who sometimes don't have long term corporate/societal well-being as a priority.

  • @st3ppenwolf
    @st3ppenwolf 28 days ago

    This discussion probably would have benefitted from a disclaimer at the beginning. Doing ML in the health space is substantially more difficult than in any other area for very well documented reasons; the examples given in the discussion, though very prominent, are but a small sample of the model deployments across hospitals, clinics and other health institutions that have (miserably) failed in the past few years. However, ML has been a successful tool in general for many people, and though this was also mentioned somewhere in the video in passing, I think the viewers might come out of it with a biased view.

  • @iramkumar78
    @iramkumar78 1 month ago +8

    There is a problem with the idiom "snake oil": it really works in many cases. Yes, certain traditional Chinese remedies, sometimes labeled "snake oil", may have ingredients that aid digestion, but those benefits vary widely and are not universally applicable. Drafted by AI.

    • @mike74h
      @mike74h 27 days ago +3

      Rather lacking in clarity. Some will think they understand the comment, others would claim they do, but it's poorly written if you ask me.

  • @plaiche
    @plaiche 11 days ago

    Good stuff. The old head is a little too focused on, and surprised by, brilliance in youth. As a scientist, Topol might consult history in this apex of "institutional science" and its dominance: it is well documented that a high percentage of the most substantial, paradigm-shifting scientific breakthroughs (in decline over many decades, per Nature's 2023 cover story) have come from young, vibrant geniuses not yet ground down by life, compromise, and the limited thinking borne of the pragmatism that comes with greater maturity and advancing years.
    I certainly don't fault him for noting it, but he brings it up roughly half a dozen times, and paternalistically shares his judgment of the term "snake oil" four or five more, despite conceding it is warranted in several documented examples.
    Again, a good discussion and a great guest choice, but there's a gatekeeper vibe here that, I would suggest, holds clues to some of the fundamental issues plaguing science today, and to the turf-protection instincts in big science that inadvertently help perpetuate them.
    Less "the science", more humility, and more Feyerabend is my Rx.
    Respectfully,
    A hack scientific philosopher with more grey hairs than original issue

  • @iramkumar78
    @iramkumar78 1 month ago +1

    I liked the ToC. I will buy.

  • @rsimch
    @rsimch 1 month ago +2

    Actually this is a brain suction in the process 😮😮😮😮

  • @nccamsc
    @nccamsc 7 days ago

    By now people are experts at spinning up entire cottage industries at the slightest hint of anything that can make money, so no surprise here. There is already a multi-billion-dollar business lending money to companies that buy Nvidia's GPUs. Not to mention the deals to power more and more data centres via nuclear power...

  • @BBPFamily-h2o
    @BBPFamily-h2o 18 days ago

    On the COVID study using X-rays of adults vs. children: can this be called a "study on adults, excluding children"? That sounds very useful.

  • @DharmendraRaiMindMap
    @DharmendraRaiMindMap 1 month ago +1

    AI is the new subprime

  • @andrehallqvist449
    @andrehallqvist449 24 days ago

    When thinking about AI snake oil, AI detectors come to mind.

  • @chilifinger
    @chilifinger 15 hours ago

    Interesting sidenote: In this interview, the image of Prof. Arvind Narayanan is entirely generated by Artificial Intelligence. 😎

  • @dylanmenzies3973
    @dylanmenzies3973 26 days ago +1

    We are just at the start. All this conversation will be irrelevant in a few years. Of course companies always try and push their products beyond the boundary at any given time. The generative (not interpolative) potential of deep learning is clear, the next stages will be harnessing this within automatic iterative reasoning structures.

  • @changevaidy4795
    @changevaidy4795 10 days ago

    Great Insights

  • @AlgoNudger
    @AlgoNudger 27 days ago

    Thanks.

  • @ericgregori
    @ericgregori 1 month ago +1

    What about the predictive climate models?

    • @UMS9695
      @UMS9695 1 month ago +2

      That's an equally massive scam!

    • @eleghari
      @eleghari 1 month ago +1

      "predictive climate models" 🤭🤣🤣🤣🤣🤣

    • @chris_jorge
      @chris_jorge 1 month ago

      There’s a 50% chance of rain. Always lol

    • @UMS9695
      @UMS9695 1 month ago

      @@chris_jorge 😄

    • @researchcooperative
      @researchcooperative 1 month ago

      Not really needed now, given the mounting empirical record on all fronts?

  • @SilverPenguin-kc5qp
    @SilverPenguin-kc5qp 25 days ago +2

    Same old story: garbage in, garbage out. GIGO.

  • @SydneyApplebaum
    @SydneyApplebaum 1 month ago +1

    You can't predict a civil war lol

  • @NineInchTyrone
    @NineInchTyrone 14 days ago

    Sounds like a need for retracting papers

  • @jzzquant
    @jzzquant 16 days ago

    Much of his criticism is of previous-generation learning-theory models, which are based on facts but have unusable outcomes. Modern generative AI goes one step further: it makes up its own facts. Unfortunately, nearly every person in the AI community has known this forever, at least 50 years now. But this is only going to get uglier from here, I guess. The problem is not with the subject; the problem is with the application.

  • @themowgli123
    @themowgli123 1 month ago

    Brilliant.

  • @ahahaha3505
    @ahahaha3505 1 month ago

    9:38 😦

  • @raiumair7494
    @raiumair7494 1 month ago +1

    Hang on - he is not talking about the potential but about bad executions. How is that snake oil? If you put a working oil in the wrong place, it won't help. Clearly predictive AI figures out good rules and patterns given the right data; AI works better than average and can scale. The snake-oil book is snake oil itself - they would have been better off calling it a lessons-learned book.

    • @nand3576
      @nand3576 1 month ago +1

      Follow the money, which is earned by marketing. All marketing is snake-oil selling. No doubt a simplification.

  • @lisalove6327
    @lisalove6327 3 days ago

    Facebook alumni

  • @baxtermullins1842
    @baxtermullins1842 21 days ago

    BS!

  • @billytanner1868
    @billytanner1868 17 days ago

    Playing to the gallery.

  • @Terracotta-warriors_Sea
    @Terracotta-warriors_Sea 26 days ago

    His book is itself snake oil! A Kapor would tell the world that ML is fake while every large company is using ML tools, from FSD to warfighting!

  • @BrokenRecord-i7q
    @BrokenRecord-i7q 1 month ago +7

    Full of fluff, cherry-picking negative examples. A failed experiment toward an outcome is not 'snake oil'; this book is the low-effort intellectual snake oil.

    • @VCT3333
      @VCT3333 21 days ago +1

      Dude, this guy was at Facebook, so he's seen this first hand. Snake oil is exactly right.

    • @BrokenRecord-i7q
      @BrokenRecord-i7q 21 days ago

      @@VCT3333 You think everyone at Facebook is an AI engineer? He doesn't know what he is talking about.

    • @ramicollo
      @ramicollo 14 days ago

      How much Nvidia stock are you holding? 😂

    • @alexross5194
      @alexross5194 13 days ago +2

      @@BrokenRecord-i7q He said early on in the video that he was a machine learning engineer there. Sounds like someone had a preset opinion before even pressing 'play'. No need to debate regarding AI though, time will certainly tell.