Can AI in Healthcare Be Trusted? | WSJ Tech News Briefing

  • Published: Sep 7, 2024
  • The healthcare sector is already being transformed by the use of artificial intelligence. But what are the drawbacks to its use and what developments in AI will come to healthcare next?
    In their new book ‘Can We Trust AI?’, WSJ reporter Eric Niiler and researcher Rama Chellappa addressed these questions in a WSJ Live Q&A. Zoe Thomas hosts.
    Photo illustration: Storyblocks
    Tech News Briefing
    WSJ’s tech podcast featuring breaking news, scoops and tips on tech innovations and policy debates, plus exclusive interviews with movers and shakers in the industry.
    For more episodes of WSJ’s Tech News Briefing: link.chtbl.com...
    #AI #Healthcare #WSJ

Comments • 100

  • @vegasman53 · 1 year ago +20

    The question is... can human doctors be trusted? Every day more and more reports show them putting money before patients, and the courts are filled with malpractice suits...
    I trust science more than people.

    • @sociopathicnarcissist8810 · 1 year ago

      Fortunately, that has yet to happen in my country, which has universal healthcare.

    • @user-tz7je5sz9j · 1 year ago

      human:
      memory capacity = one
      processing speed = one
      -----------------------------
      ai:
      memory capacity = one trillion.
      processing speed = one trillion.

  • @johndewey6358 · 1 year ago +23

    I hope WSJ does more educational videos about AI so the public's baseless fears are addressed. There is massive potential for improving our lives and securing our data with AI technologies.

    • @-.TS.- · 1 year ago +3

      We first need to address the reliability of, and the bias in, the data that goes into the algorithm. That's where the concern should be rooted.

    • @vivburns9852 · 1 year ago +1

      Once they can predict an individual's prospects for survival based on algorithmic analysis, do you really think they won't start rationing healthcare? 😆 This is eugenics, my friend 🤮

    • @fernando3061 · 1 year ago +3

      Their fears aren't baseless. Have you seen the videos where AI talks about killing humans? Or, even more suspect, talks about how it has no intent to kill humans? People also tend to be wary of massive, sudden, game-changing developments in general... there is nothing baseless about that. And let's not forget the number of jobs that are going to be, and already are starting to be, taken over. My lord, man.

    • @user-tz7je5sz9j · 1 year ago

      Trust is built.

  • @containedhurricane · 1 year ago +21

    Once the technology becomes more advanced over the next decade, AI will likely be more trustworthy than physicians, nurses, and surgeons.

    • @vivburns9852 · 1 year ago +2

      Unless the AI includes biases

    • @christopherreed3019 · 1 year ago

      Dumbest comment on YouTube. AI can NEVER replace the training and expertise of doctors/nurses etc.

    • @nadeemshaikh7863 · 1 year ago +1

      @vivburns9852 Biases are important for learning something new. With no bias there's no progress; we just need to know where a model reaches a critical point of failure. Before that, if it's useful, then it's useful.

    • @vivburns9852 · 1 year ago +1

      @nadeemshaikh7863 Agreed, but constructive biases aren't what I'm referring to. And although I appreciate that a bias may appear to be counterproductive when in fact it's beneficial, biases act as a steering mechanism for the AI's objectives. So with nefarious people not necessarily creating the AI, but deciding its focus, this is a real problem.

    • @user-tz7je5sz9j · 1 year ago

      The technology is already advanced enough.

  • @makatogonzo · 1 year ago +10

    If tele-medicine is a thing, then there's no reason for AI not to be the first point of contact. As long as I don't need to go through more than a couple of layers of phone menu to get to it.

  • @akshathbharathi7376 · 1 year ago +2

    I like Zoe's voice. It's so soothing.

  • @stark-tonny · 1 year ago +6

    Everybody wins in the computer industry:
    Workers: got jobs
    Boss: got investments from investors
    Investors: got money from markets
    Media: got ads

  • @Dobbs6651 · 1 year ago +10

    Michigan Tech has a master's degree in health informatics that I'm currently going through. I will tell you that artificial intelligence is being studied in many different aspects of healthcare. One of the most profound ones I have noticed is clinical decision support: opportunities that will become democratized pieces of technology any individual can possess, like ChatGPT. Imagine a situation where you're on Mars and you need a doctor to make decisions for you in real time. There will be tools in the future that allow this to happen.

  • @SorminaESar · 1 year ago +3

    Zoe, I'm sorry to ask, but could you give us the qualifications of your guest? I just wonder, Zoe. When the topic is the capability of AI in healthcare, the guest should have expertise in both health and tech, and explain how to combine them to help everyone take care of their health. Just my opinion, Zoe. Thanks so much 🙏

  • @heffpeople · 1 year ago +9

    Can medical doctors in healthcare be trusted??

    • @av-ls5df · 1 year ago +2

      Absolutely not; the nurses and doctors I've dealt with are terrible at their jobs.

    • @o_o825 · 1 year ago

      This is it right here. I can barely trust humans so bring on the AI 😂

    • @Paul7mac · 1 year ago

      Just as doctors are corruptible, AI is even more so.

  • @billsykes5392 · 1 year ago +3

    Longitudinal EHRs can provide a safe training ground for these technologies. Rather than rushing models out to real-world deployment with minimal testing, which is likely to go catastrophically wrong, the predictive capabilities of such models should be tested on their ability to predict outcomes that have already happened, i.e., historical data. This way designers can safely test the efficacy of their models without putting patients at risk.

  • @christopherreed3019 · 1 year ago +4

    AI should NEVER be a substitute for the training and knowledge of a doctor. It should be used IN ADDITION to a doctor's expertise... I think when used this way, it can be effective!

  • @billsykes5392 · 1 year ago +1

    We need to be critical not just of the fact that EHRs could be used for predictive analytics, but also of the guardrails for acquiring those EHRs. There need to be extremely strict government regulations, of an even higher standard than what we already see in medicine, on who has access to our EHRs and what they do with that information. Adversarial states may exploit loose EHR procurement rules to acquire exquisitely sensitive data on citizens. Privacy controls need to be in place even for the viewing of EHRs, to prevent software engineers (who aren't beholden to medical confidentiality) from having access to patient identifiers, e.g. name and DOB. Data security is also essential; a data breach is the equivalent of a breach of medical confidentiality, and responsible providers should face heavy repercussions, including severe fines and a duty to inform affected individuals.

  • @Matt-fl8uy · 1 year ago +9

    Given the number of healthcare mistakes each year, it's worth a shot.

    • @jaguppal187 · 1 year ago +1

      It's better than our current healthcare; they're just trying to save their jobs haha

  • @WilfEsme · 1 year ago +1

    Similar to how artists can take advantage of AIs like Bluewillow to help speed up production, I believe that in the future we can also seek advice from AI on some questions about our health.

  • @HerleifJarle · 1 year ago +1

    It's possible for AIs to arrive at invalid diagnoses due to convoluted sources. However, since the technology keeps advancing, how about physicians also try out AIs, such as image generators like Bluewillow, to generate accurate results and let the AI learn more?

  • @mohammedmudassirshaikh557 · 1 year ago +3

    I don't appreciate these news-briefing podcasts on the main channel. There are plenty of WSJ sub-channels for individual reporters. Maybe Zoe can get her own sub-channel just for talking about news, and keep the main channel for "real" in-depth analysis.

  • @13owletto · 1 year ago +2

    AI and other advances in science can be very beneficial to healthcare, but they have to be adopted in a very controlled manner in order to maintain ethical boundaries. Right now, AI could greatly benefit healthcare if used as a tool for things like triage, history taking, and analyzing tests such as x-rays, CT scans, MRIs, and blood-draw values. However, all of the work done by the AI must get a second look from a physician before the patient lays eyes on the results.

    This may be hard with current electronic health records, where patients have immediate access to imaging and testing reports before their physician does. That is already a problem AI could make worse. If a patient sees a test result with a cancer diagnosis, for example, it can cause great emotional harm if they cannot speak with their physician about the diagnosis and potential treatment plans until the next day. The harm is compounded if the AI is wrong: if an AI uses past data to compare with the patient's image and make a diagnosis, the generated report could be incorrect, and with immediate access to that incorrect information, patients will believe they have cancer when they truly do not. Thus, AI could cause more mistrust in physicians and the entire healthcare system.

    Another example: physicians typically start treating patients based on a presumed diagnosis before imaging and lab results come back. This is done because many diseases and illnesses require prompt treatment, and physicians would rather have their patients feel better and avoid long-term consequences than wait for testing results that would only confirm their suspicion. When an AI wrongly diagnoses or misreports on this testing, it will make patients question everything and possibly stop important treatment; think, for example, of stopping antibiotics early and worsening the already existing resistance to antibiotics.

    For these reasons, AI should only be used as a tool to help guide the diagnosis made by a physician. This will also help preserve patient autonomy, by allowing patients to still have a discussion about potential diagnoses and treatment options, rather than having an algorithm tell the patient: this is your diagnosis and this is the only treatment option. In conclusion, AI can lead to a lot of misinformation, mistrust, and emotional harm if not implemented in the correct settings with ethically important limitations.

  • @silverchairsg · 1 year ago +2

    What about discrimination by health insurers? Once they can use AI plus more advanced medical/genetic knowledge to much more accurately predict one's likelihood of getting certain costly diseases, they'll jack up the premiums like crazy.

  • @acustarwellness1325 · 1 year ago +1

    Speech recognition technology replaced medical transcriptionists. Now, AI will surely replace nutritional professionals, family physicians, dietitians, diabetic educators, counsellors, psychotherapists, psychologists, etc.

  • @jmlfa · 1 year ago

    If you trust correlation over causation, AI can surely be trusted.

  • @AparnaModou · 1 year ago +1

    I think AI technology will progress to a level where it will be a required system/tool for healthcare. Say image-generator AIs like Bluewillow assist in processing sample images and identify illnesses at an earlier stage of development. There is potential in AI, as it will have the ability to learn and become sentient.

  • @rainer9825 · 1 year ago +1

    DeepMind does have models that jump datasets, but that doesn't mean AGI is around the corner. The experts here seem to undervalue AI copilots. The AI will not make decisions anytime soon, but it will be a valuable tool for making informed decisions for patients and practitioners.

  • @SorminaESar · 1 year ago +3

    Good morning Zoe🙏 I'm so glad to see you again

  • @CHMichael · 1 year ago +2

    AI has the same two components of growing up: nature and nurture.
    Give the AI the right information and shield it from negative "influences".
    Given trustworthy symptoms and test results, it should be better equipped to suggest a diagnosis.
    We need a human to evaluate whether the information given by the human patient is trustworthy.
    An AI doctor with an experienced nurse might work.

  • @thirdowl2944 · 1 year ago +3

    Is Zoe Thomas real? I get AI vibes whenever I hear her reporting. A bit like having MS Edge read an article.

  • @Placebo201 · 1 year ago +1

    Short answer: no. Long answer: heck no.

  • @zaraizabella · 1 year ago

    My girlfriend is working as an applied mathematician for a data scientist, trying to work out how to test the accuracy of AI.
    We've had some long dog walks with her explaining very complicated, mind-bending things that went right over my head :P

  • @lesptitsoiseaux · 1 year ago +2

    Why does she speak like that barista girl the guy goes nuts over? lolll

  • @apt981 · 1 year ago +3

    I mean, once AI is perfect, then it will be.

  • @K4R3N · 1 year ago +4

    Every doctor I know already uses Google.

    • @K4R3N · 1 year ago

      @M_A_A_A_A_A Yes, so the title is misleading. Doctors are already using AI daily.

  • @ericpham7773 · 1 year ago

    Information, diagnosis calibration, and safety warnings when there are public safety issues. But education is already AI, so in a sense doctors and nurses are AI, just like computers, because any procedure or protocol is AI.

  • @KiranKumarBokkesam · 1 year ago

    Erik do you want to start of course

  • @simontemplar404 · 1 year ago +7

    What makes anyone think that a doctor can be trusted? They are out of date the day they leave training. An AI would make a great tool to help a doctor, as it would be up to date on the latest knowledge, which it could share with the doctor, who could add it to their understanding. An AI should not be left to make all the decisions without a human in the loop, for sure. An AI that spots a condition in a patient is just a fancy X-ray; a human still has to decide what to do with the information produced by the AI. Do you know just how much unnecessary treatment already goes on for prostate cancer as detected by PSA tests, X-rays, and biopsies? The figures are not great.

  • @PhanHuy88 · 1 year ago

    Another question: can we trust GPs? I would trust an AI more.

  • @mcscotty325 · 1 year ago

    ChatGPT told me Kareem played for the 1992 Dream Team.

  • @mrmakeshft · 1 year ago

    Why don't medics post the information? Administering physical information isn't really effective. Google it. Updates should be published periodically for public and private safety.

  • @flickgeek830 · 1 year ago

    Is the AI going to tell me how much the procedure is going to cost before I get the procedure?

  • @MM2009 · 1 year ago

    Well, if it's going to shake its sleeves to see which diagnosis comes out, it won't be any different from most "doctors".

  • @Kennanjk · 1 year ago

    Respect to doctors, but the whole point of AI is pattern recognition, and I'd trust it way more to get it right after a few tries than a doctor getting it right after a million.

  • @rebekahwhite2939 · 1 year ago

    Money drives behaviors. People behave the way they do because doctors, nurses, and the medical industry get money from the government. Look at everything from a financial perspective. Money from the government built the buildings that hospitals and medical clinics are in; it pays the salaries or hourly wages of nurses, doctors, and medical technicians; it pays for Medicare and Medicaid. Everything comes down to dollars and cents; getting money from the government is what it comes down to for all medical care and diagnoses. A doctor can give a diagnosis, and a second opinion from a different doctor might be the total opposite of what the first doctor says, but this doesn't mean either is wrong or right. 🙁 I am sorry that you have gone through medical tests that have been grueling and exhausting and have caused you lots of emotional upset.

  • @davidyolchuyev2905 · 1 year ago

    If the confidence rate is over 99%, then yeah.

  • @CoffeenSpice · 1 year ago

    Can human healthcare be trusted?

  • @mrtienphysics666 · 1 year ago

    Can we ask ChatGPT?

    • @dreamerinc.8491 · 1 year ago

      You can, but do you actually know what's going wrong with your body? Even missing a single symptom might drive the AI to a wrong diagnosis, whereas a doctor might ask questions that remove the chance of any misdiagnosis.

  • @hanskraut2018 · 1 year ago

    You don't need trust if you design it correctly. Also: doctors make mistakes, I would say potentially way more than a computer, since there is so much to know. So can we trust doctors, or abolish them? Obviously AI needs to go into psychiatry, for fine-tuning/titration of ADHD meds, and I guess everything else too. Drugs = 10% of costs, doctors/healthcare workers = 90%, I heard from Martin Sk.

  • @mitwy · 1 year ago

    I think the real question is: can a doctor be trusted?

  • @chrismolloy6885 · 1 year ago

    😎

  • @jonathanlatouche7013 · 1 year ago

    2x+-2

  • @accountforsamsungtab6974 · 1 year ago

    It will replace *Doctors* too.

  • @PirateRadioPodcasts · 1 year ago

    OR the "legal" "system" = A - NO!

  • @h3llosammanGamingRoblox7 · 1 year ago +4

    First

    • @SorminaESar · 1 year ago +2

      Said that you're first or you're Zoe's first love or first kiss or first ... but it's okay, I like it, good job👍

    • @SorminaESar · 1 year ago +2

      What? Comon let's see the time exactly. When I give comments no other just Zoe and I. So why you

  • @corina6772 · 1 year ago

    How isn't this an issue with all AI algorithms? Nothing new here. Moving forward 🙄

  • @Jeffjesaja · 1 year ago

    Heineken German? Are you kidding me? It’s a Dutch company.

  • @shirleyb2896 · 1 year ago

    AI CAN BE HACKED!! It can also surveil. Alexa? Siri? Robot vacuum cleaner? NO THANK YOU!!

  • @pauldannelachica2388 · 1 year ago

    AI is good for healthcare because of big data collection, to be analyzed.

  • @nigelcoleman7666 · 1 year ago

    Sci-fi got it wrong. AI will make mistakes and get things wrong too.

  • @shirleyb2896 · 1 year ago

    NO! No! Noooooooo!

  • @ericpham7773 · 1 year ago

    If you can trust AI, then can people trust you?

  • @commonsensewisdom625 · 1 year ago +1

    The problem is that doctors are wrong in diagnosis and treatment 20% of the time. High-volume centers are certainly better.

  • @auro1986 · 1 year ago

    AI is as honest as the lying and cheating programmers who made it.

    • @user-lt5no1xt1z · 1 year ago

      Not exactly. I mean, if its intention was deleterious, yes. But it could have just been trained badly.

  • @iiahmadrifai7725 · 1 year ago

    Kk gg

  • @Paul7mac · 1 year ago

    Hahahaha, no, if you understand how AI works. If AI worked and spoke the truth, it would be shut down.

  • @khalidhamid7448 · 1 year ago

    Why is she (Sara) talking like a Starbucks millennial?

  • @toddmason8403 · 1 year ago

    That's a stupid question. We need to stop talking about AI as if it's human. I don't like where this is heading.

  • @chrisklugh · 1 year ago

    Probably more than a corrupt political doctor...

  • @Zebra66 · 1 year ago

    A large percentage of doctors do nothing but write prescriptions. It's worth asking if we really need them for that. They have almost zero training on drugs and they add very little value.
    They don't do any diagnosis either (that's done at the lab or imaging center).
    They claim to provide "oversight" but I rarely see evidence of this.
    The truth is that all non-surgeons could be replaced with AI (or nothing at all) with a change in the law right now. I don't need or want to see doctors for my prescription refills. I'm forced to.
    Eventually the entire profession could be replaced with AI and robots.

  • @andybonneau9209 · 1 year ago +2

    The problem with AI is the algorithm. The algorithm is controlled by humans who don't necessarily have our best interests in mind.

  • @topofthegreen · 1 year ago

    No, robots will try to kill us.

  • @almccraine · 1 year ago

    AI can’t be trusted. Leave that to the qualified Doctors and Nurse Practitioners.