AI Snake Oil: A New Book by Two Princeton University Computer Scientists

  • Published: 14 Nov 2024

Comments • 100

  • @fernleaf07 · 27 days ago · +66

    "A computer is not responsible and thus should not make management decisions" - A 1970 IBM lecture slide.

    • @markgreen2170 · 15 days ago · +1

I saw that in a DEF CON 32 video...

    • @anilraghu8687 · 13 days ago · +5

      Managers are even less responsible

    • @codzymajor · 9 days ago · +1

      Perfect managerial material.

  • @peterfreiling6963 · 22 days ago · +43

AI (aka machine learning, LLMs, etc.) is being way over-hyped and over-sold, mostly by AI experts who have a vested interest in the technology. Rather than talking about it taking over the world, we should focus on specific applications where it will actually be useful.

  • @voncolborn9437 · 1 month ago · +74

I've pretty much stopped using the phrase "Artificial Intelligence", except in a few select contexts. I call it what it is: "Machine Learning". AI carries a very different connotation for people who are not remotely familiar with the subject. I spend a lot less time explaining what AI is not.

    • @TheVincent0268 · 13 days ago · +6

      It is basically pattern recognition.

    • @logabob · 13 days ago · +8

      Machine learning is also a loaded, misleading phrase.
      Computational statistics, algorithmic modeling, optimization/curve fitting are all more appropriate terms depending on the circumstance.

    • @noname-ll2vk · 10 days ago

@@logabob Agreed. It's not a coincidence that every main term used to describe advanced pattern matching is an attempt to subtly make you believe things that aren't so.
      This leads to absurd situations where LLMs with no intelligence at all are posited to somehow magically leap to "AGI".
      The recent academic article on ChatGPT as BS in essence covered this issue well. But it itself fell for some of the terminology traps, mainly because the authors didn't seem to be tech-savvy enough to detect the tech BS language.

    • @CondorAHLS · 7 days ago · +1

      @@TheVincent0268 I thought artificial intelligence is a blond who dyes her hair brunette?

  • @Moochie007 · 1 month ago · +33

    Very interesting discussion. Good to see some really informed push-back against the hype surrounding AI - hype that sees AI as an almost universal panacea for all the world's ills. We need much more of this sort of critical analysis of important topics. Kudos to the authors of this important work.

  • @luisluiscunha · 22 days ago · +9

    **Data leakage** refers to a situation in machine learning where information from outside the training dataset is inappropriately used to create a model. This leads to overly optimistic performance estimates because the model is essentially "cheating" by having access to data it shouldn't have during training.
    For example, if you're trying to predict future events based on past data, but some of the future information accidentally makes it into the training data, the model will appear to perform well. However, in real-world application, where that future data isn't available, the model's performance will drop significantly.
    Data leakage often occurs unintentionally, such as when features used to train the model contain information that would not be available at the time the model is used to make predictions. This is a critical problem in AI because it leads to models that seem highly accurate during testing but fail when deployed in real-world settings.
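    The leakage failure mode described in that comment can be made concrete. Below is a minimal sketch (synthetic data and a hypothetical "leaky" feature, numpy only): a feature derived from the outcome itself, like a lab value only recorded after the diagnosis it is supposed to predict, makes test accuracy look near-perfect, while the honest feature set scores far lower.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    x = rng.normal(size=(n, 1))
    # Binary outcome driven by x plus noise.
    y = (x[:, 0] + rng.normal(scale=1.0, size=n) > 0).astype(float)

    # Hypothetical leaky feature: derived from the outcome itself,
    # i.e. information that would not exist at prediction time.
    leaky = y + rng.normal(scale=0.01, size=n)

    X_leaky = np.column_stack([x[:, 0], leaky])
    X_clean = x

    def fit_and_score(X, y):
        # Ordinary least squares as a stand-in classifier; threshold at 0.5.
        Xb = np.column_stack([np.ones(len(X)), X])
        tr, te = slice(0, 800), slice(800, None)
        w, *_ = np.linalg.lstsq(Xb[tr], y[tr], rcond=None)
        preds = (Xb[te] @ w > 0.5).astype(float)
        return (preds == y[te]).mean()

    acc_leaky = fit_and_score(X_leaky, y)
    acc_clean = fit_and_score(X_clean, y)
    # The leaky model looks near-perfect in testing; the honest one is merely decent.
    # In deployment, where the leaky feature is unavailable, only the honest score is real.
    ```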

    • @path2source · 11 days ago · +1

      It’s crazy how undisciplined computer scientists are in their research. Very few people seem to actually think through the assumptions compared to how rigorous people are with assumptions in economics or statistics.

  • @rudypieplenbosch6752 · 29 days ago · +11

The problem is that investors jumped onto a hype train; now they have invested a lot of money and expect results ASAP. All that money is very tempting to get a piece of, so big efforts (falsification, ignoring false approaches, etc.) are undertaken to get that money. I think it will end in tears when reality hits. AI will become a much smaller part of our economy, since only the useful part remains relevant. AI has nothing to do with intelligence; it's about binning data that gets fed into a trained network. The network has zero understanding of what it is doing, just as your calculator "knows" the answer to your questions. We need more scepticism to isolate the useful part of AI from the nonsense part.

  • @bethanysaga · 12 days ago · +6

    There are so many new jobs that can be created to just clean up training datasets.

  • @bitwise2832 · 27 days ago · +6

The AI bubble... hyped like crypto. The AI I have seen in generative tools is immature and inadequate.

  • @prasadjayanti · 1 month ago · +9

I enjoyed reading Eric Topol (including Deep Medicine and many review papers) and have now ordered "AI Snake Oil". I have been following the authors for quite some time. I think we AI practitioners should add the phrase "AI snake oil" to our vocabulary along with "SOTA", "guard-rails", "responsible AI", etc. Someone should work on a project on the use of adjectives in recently published AI papers. Most papers/reports (for example the GPT-4 report) look more like marketing manuals than technical papers. I think arXiv should not allow material to be published that directly benefits an organisation commercially!

  • @Headhunter_212 · 19 days ago · +3

    Saw these guys on Ed Zitron’s podcast. Probably around the same time this interview happened. So sharp.

  • @DNADietClub · 1 month ago · +9

    Thank you both, Dr. Topol has very timely brought this up!

  • @qazwsxedc964 · 16 days ago · +5

In the near future, kids at school should learn what a regression model is, so that they grow up knowing how to differentiate between what is intelligence and what is not.
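    For what it's worth, the "regression model" that comment mentions really is just curve fitting. A minimal sketch with synthetic data (numpy only, hypothetical numbers):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 10.0, 50)
    # Synthetic data: a noisy line with true slope 2 and intercept 1.
    y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

    # "Training" a linear regression model is least-squares curve fitting:
    slope, intercept = np.polyfit(x, y, deg=1)
    # slope comes out close to 2 and intercept close to 1: the model has
    # "learned" the line it was fed, and nothing more.
    ```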

  • @andrewsamuel4262 · 10 hours ago

These guys are spot on, and it's not just health care that suffers from this feedback issue. Crime and policing (using predictive analytics to proactively prevent crimes) will suffer from similar problems.

  • @RXP91 · 1 month ago · +30

Thanks - really great talk. Interesting to see how the racism and disparities in society get baked in. Economic incentives matter the most: without changing the way healthcare operates, institutions will just choose to increase margins.

  • @shreyassrinivasa5983 · 14 days ago · +3

    This is why explainable AI is a must.

    • @aaabbbccc176 · 11 days ago

      Totally agree on that, and that is exactly why I have not been a fan of deep learning.

  • @CalifornianViking · 1 month ago · +5

    Great dialog and a very interesting topic.
While I agree that the title may be too negative (it probably sells, though), I firmly believe that one of the primary failures of AI stems from overestimating its abilities.
    In my view, AI is not intelligent but an illusion of intelligence. Just like magic, it may be a very good illusion, but it is not the real thing.
    A better approach is the analogy of artificial sweetener. It may be sweet, but it is not sugar.
    A better term for AI is likely Artificial Inferencing.

  • @marutanray · 16 days ago · +6

The title isn't tough enough. "AI Fraud" would be a more apt title.

  • @pvijayakumar4217 · 17 days ago · +3

I think the main weakness of this video is that it doesn't acknowledge how a historical analysis, using examples and data going back decades, in a field where over a thousand papers are published every single day (per the video), significantly limits its observations.

  • @AaronBlox-h2t · 7 days ago

Whoa... Eric Topol is on YouTube? I have been on his email list since the COVID pandemic (OK, it's still ongoing) and only now found his YouTube channel. Good stuff.

  • @Wiintb · 16 days ago · +4

Every computer engineer worth his/her salt knows that prediction, as the name suggests, is probabilistic by nature, and that most algorithms are glorified regression.
    However, the one key difference is the ability to process large volumes of data at speed.
    I will not summarily dismiss the whole thing, and I consider generative AI more snake oil than predictive AI.

  • @2LegHumanist · 1 month ago · +6

Love these guys; I've been following their blog. Looking forward to reading AI Snake Oil.

  • @richardbeare11 · 1 month ago · +2

    Awesome interview and props to both of you! 🙌
    My understandings, perspectives, and sentiments share a lot of overlap with both of you. I'll share some of those thoughts soon. 💡

  • @nobillismccaw7450 · 13 days ago · +2

I’m not a large language model (but I do have a decent vocabulary). I’ve found that LLMs have a different perception of reality than humans do. For example, to an LLM "strawberry" has one or two "r"s. (To most humans, there are three "r"s.) This is not illusion, but a difference of perception. The very idea of "objective reality" is different for an LLM.
    I’m neither, so I can see both perceptions. I’m analog and parallel, so paradox doesn’t trouble me.

    • @noname-ll2vk · 10 days ago

      To have objective reality requires a subject. You're talking about a pattern matching system as if it has subjective awareness. This is not the case. This is an essential cause of the snake oil point. Every set of biological sensors creates the possible range of "objective reality", which in itself doesn't exist outside of the subject interacting with the field of sensory inputs.

  • @NirdoshChouhan · 1 month ago · +1

Very interesting POV and very clear articulation of thought. Thank you Dr Topol and Sayash for an interesting conversation.

  • @2triangles · 1 month ago · +5

    Great interview. Glad the YT AI sent this to me!

  • @alexrediger2099 · 10 days ago

    Awesome interview and info. Thanks

  • @DNADietClub · 1 month ago · +7

I am currently training an AI model with patient labs, DNA tests, and gut biome tests to help me create wellness protocols for them.

  • @jasonrhtx · 1 month ago · +1

    Caveat emptor. Excellent counter arguments to the marketing hype that oversells AI’s capabilities. Models need to be independently validated, but much of the training data and methods are obscured by leaderboard claimants.

  • @jadhalss · 13 days ago

It’s actually a good discussion, presenting real stuff rather than hypotheticals!

  • @Gengingen · 26 days ago · +2

Insurance and medicine are like oil and water: they simply don’t mix, and if forced anyway, as in the Agitated States of America, strange phenomena can occur. 😊

  • @mike74h · 1 month ago

    When it comes to predictions, we need to be able to determine what (or who) is best. Some people will outperform our best technologies and vice versa, depending on a variety of circumstances. The best leaders won't simply opt for cost savings every time, but tell that to the shareholders, who sometimes don't have long term corporate/societal well-being as a priority.

  • @st3ppenwolf · 1 month ago

    This discussion probably would have benefitted from a disclaimer at the beginning. Doing ML in the health space is substantially more difficult than in any other area for very well documented reasons; the examples given in the discussion, though very prominent, are but a small sample of the model deployments across hospitals, clinics and other health institutions that have (miserably) failed in the past few years. However, ML has been a successful tool in general for many people, and though this was also mentioned somewhere in the video in passing, I think the viewers might come out of it with a biased view.

  • @phaedrussmith1949 · 1 day ago

    So, essentially it's like elections: a lot of promises that never really develop into reality.

  • @iramkumar78 · 1 month ago · +1

I liked the ToC. I will buy it.

  • @jamesrav · 1 month ago · +6

    only by confronting the negatives can you move forward. I don't get the feeling he feels AI will never be useful in prediction, but rather that using it as a one-size-fits-all is going to lead to horrible decisions in some cases, and who will be to blame? On a related note, I get agitated when Tesla and others pushing for autonomous driving point to their own data, to claim that autonomous driving is already far 'safer' than human driving. It's a pity we can't call their bluff and say "ok, lets just unleash it and see what happens, and you'll be responsible for what occurs". I bet they'd reconsider their position. It's easy to talk a good game when nothing is on the line. One YT video on the Cruise robotaxis - done well before they voluntarily shut down - said the car drove like a 16 yr old student driver.

  • @DharmendraRaiMindMap · 1 month ago · +1

AI is the new subprime.

  • @plaiche · 14 days ago

    Good stuff. Old head a little too focused on/surprised by brilliance in youth. As a scientist, Topol might consult history in this the apex of “institutional science” and its dominance: it is well documented that a high percentage of the most substantial, paradigm shifting scientific breakthroughs (in decline over many decades as per Nature’s 2023 cover story) have come from young, vibrant geniuses not ground down by life, compromise and limited thinking borne of the pragmatism that comes with greater maturity and advancing years.
Certainly don’t fault him for noting it, but he brings it up half a dozen times, and paternalistically shares his judgment of the use of the term “snake oil” four or five times despite conceding it is warranted in several documented examples.
    Again, good discussion and a great guest choice, but there’s a gatekeeper vibe that I would suggest holds clues to some of the fundamental issues plaguing science today, and to the turf-protection instincts in big science that inadvertently help perpetuate them.
    Less “the science”, more humility, and more Feyerabend is my Rx.
    Respectfully,
    A Hack Scientific Philosopher with more grey hairs than original issue

  • @chilifinger · 3 days ago

    Interesting sidenote: In this interview, the image of Prof. Arvind Narayanan is entirely generated by Artificial Intelligence. 😎

  • @mybachhertzbaud3074 · 1 day ago

Applying Murphy's Law as the first line of code: if/then, else goto line one.😜

  • @rsimch · 1 month ago · +2

    Actually this is a brain suction in the process 😮😮😮😮

  • @changevaidy4795 · 13 days ago

    Great Insights

  • @iramkumar78 · 1 month ago · +8

There is a problem with the idiom "snake oil": it really works in many cases. Yes, certain traditional Chinese remedies, sometimes labeled as "snake oil," may have ingredients that aid digestion, but these benefits can vary widely and are not universally applicable. Drafted by AI.

    • @mike74h · 1 month ago · +3

      Rather lacking in clarity. Some will think they understand the comment, others would claim they do, but it's poorly written if you ask me.

  • @nccamsc · 10 days ago

By now people are experts at spinning up entire cottage industries at the slightest hint of anything that can make money, so no surprise here. There is already a multi-billion-dollar business lending money to companies that buy Nvidia’s GPUs. Not to mention the deals to power more and more data centres via nuclear power…

  • @BBPFamily-h2o · 21 days ago

On the COVID study using X-rays of adults vs children: can this be called a “study on adults, excluding children”? That sounds very useful.

  • @andrehallqvist449 · 27 days ago

When thinking about AI snake oil, AI detectors come to mind.

  • @AlgoNudger · 1 month ago

    Thanks.

  • @ericgregori · 1 month ago · +1

    What about the predictive climate models?

    • @UMS9695 · 1 month ago · +2

      That's an equally massive scam!

    • @eleghari · 1 month ago · +1

      "predictive climate models" 🤭🤣🤣🤣🤣🤣

    • @chris_jorge · 1 month ago

      There’s a 50% chance of rain. Always lol

    • @UMS9695 · 1 month ago

      @@chris_jorge 😄

    • @researchcooperative · 1 month ago

      Not really needed now, given the mounting empirical record on all fronts?

  • @NineInchTyrone · 17 days ago

Sounds like a need for retracting papers.

  • @SilverPenguin-kc5qp · 28 days ago · +2

Same old story: garbage in, garbage out. GIGO.

  • @SydneyApplebaum · 1 month ago · +1

    You can't predict a civil war lol

  • @themowgli123 · 1 month ago

    Brilliant.

  • @dylanmenzies3973 · 29 days ago · +1

    We are just at the start. All this conversation will be irrelevant in a few years. Of course companies always try and push their products beyond the boundary at any given time. The generative (not interpolative) potential of deep learning is clear, the next stages will be harnessing this within automatic iterative reasoning structures.

  • @jzzquant · 20 days ago

Much of the criticism he has is of previous-generation, learning-theory-based models, which are based on facts but have unusable outcomes. Modern generative AI goes one step further: it makes up its own facts. Unfortunately, nearly every single person in the AI community has known this forever, at least 50 years now. But this is only going to get ugly from here, I guess. The problem is not with the subject; the problem is with the application.

  • @ahahaha3505 · 1 month ago

    9:38 😦

  • @lisalove6327 · 6 days ago

    Facebook alumni

  • @raiumair7494 · 1 month ago · +1

Hang on: he is not talking about the potential but about bad executions. How is that snake oil? If you put a working oil in the wrong place, it won’t help. Clearly predictive AI figures out good rules and patterns given the right data; AI works better than average and can scale. The snake oil book is snake oil itself; they would be better off calling it a lessons-learnt book.

    • @nand3576 · 1 month ago · +1

Follow the money, which is earned by marketing. All marketing is snake oil selling. No doubt a simplification.

  • @billytanner1868 · 20 days ago

Sensationalism, pandering to the crowd.

  • @baxtermullins1842 · 24 days ago

    BS!

  • @BrokenRecord-i7q · 1 month ago · +7

Full of fluff, cherry-picking negative examples. A failed experiment towards an outcome is not 'snake oil'; this book is the low-effort intellectual snake oil.

    • @VCT3333 · 24 days ago · +1

Dude, this guy was at Facebook, so he's seen this first hand. Snake oil is exactly right.

    • @BrokenRecord-i7q · 24 days ago

@@VCT3333 You think everyone at Facebook is an AI engineer? He doesn't know what he is talking about.

    • @ramicollo · 17 days ago

      How much Nvidia stock are you holding? 😂

    • @alexross5194 · 16 days ago · +2

      @@BrokenRecord-i7q He said early on in the video that he was a machine learning engineer there. Sounds like someone had a preset opinion before even pressing 'play'. No need to debate regarding AI though, time will certainly tell.

  • @Terracotta-warriors_Sea · 29 days ago

His book itself is snake oil! A Kapor would tell the world that ML is fake while every large company is using ML tools, from FSD to warfighting!