ChatGPT: Will It Transform the World of Health Care?

  • Published: Feb 9, 2023
  • The recent introduction of the breathtaking AI tool ChatGPT has sparked a national dialogue about the future of artificial intelligence in health care, education, research, and beyond. In this session, four UCSF experts discuss AI’s current and potential uses, in areas ranging from research to education to clinical care. After a brief presentation by each speaker, DOM Chair Bob Wachter moderates a far-ranging panel discussion on the health care applications of ChatGPT.
    Speakers:
    Atul Butte, MD, PhD, professor of Pediatrics, Bioengineering and Therapeutic Sciences, and Epidemiology and Biostatistics; director, UCSF Bakar Computational Health Sciences Institute; chief data scientist, University of California Health System
    Daniel Lowenstein, MD, professor of Neurology; former executive vice chancellor and provost, UCSF
    Sara Murray, MD, MAS, associate professor, Division of Hospital Medicine at UCSF Health; associate chief medical information officer, Inpatient Care and Data Science, UCSF Health
    Aaron Neinstein, MD, associate professor, Division of Endocrinology at UCSF Health; vice president of Digital Health, UCSF Health; senior director, UCSF Center for Digital Health Innovation
    Note: Closed captions will be available within 48-72 hours after posting.
    Program
    Bob Wachter: Introduction
    00:04:12-00:13:43 - Aaron Neinstein, MD
    00:13:50-00:20:34 - Sara Murray, MD, MAS
    00:20:41-00:27:16 - Daniel Lowenstein, MD
    00:27:21-00:37:36 - Atul Butte, MD, PhD
    00:37:43-00:59:33 - Panel Discussion
    See previous Medical Grand Rounds:
    • February 2: Covid-19 in 2023: A Conversation with White House Covid Coordinator Ashish Jha
    • January 19: A Health Equity Tapestry: Weaving Research on Health, Structural Violence, Stigma, and Our Lived Experiences
    • January 12: The Kardos Renal Grand Rounds: Centering the Margins to Achieve Kidney Health Equity
    • January 5: Covid-19 in 2023: Where Are We, and What Should We Expect?
    See all UCSF Covid-19 grand rounds, which have been viewed over 3M times, at • UCSF Department of Med... .

Comments • 25

  • @wordysmithsonism8767 · 8 months ago

    For progress, that last guy before the Q&A is the man.

  • @jk35260 · a year ago +3

    Very interesting and meaningful discussion.
    GPT-4 is already out and seems to excel in the fields of biology and medicine. DAX is a great tool.
    MSFT and OpenAI will certainly create an easy-to-use API for institutions to fine-tune GPT-4. I think the human touch with patients is important.
    Later on, GPT-4 will be able to look at and analyse pictures.

  • @jch3117 · a year ago

    Excellent conversation

  • @equinoxhit · 11 months ago

    Thank you for sharing!

  • @wandabedinghaus · a year ago

    Thanks for this inspiring presentation. Now doctors can be the healers they went into medicine to be.

  • @sandybayes · 7 months ago

    I would hope that the next generation of health care incorporates nutrition into its treatment and prevention options for its patients. This area has been sorely neglected, yet many people seek treatment modalities outside the traditional medical community in search of better, less invasive options. Most medical doctors I have encountered do not have the foggiest idea of what the human body's nutritional requirements are in order to function well and to prevent specific chronic diseases.

  • @entrepremed · a year ago

    "if we don't do it , someone else will do it"... the only part I don't find so cool of this talk.

  • @freedomlife3623 · a year ago +4

    If the consultation given to the patient is wrong, who does the patient sue? The doctor, or the ChatGPT the doctor used?

    • @laurenpinschannels · a year ago

      Right now it would still be the doctor, I think.

    • @shihtzusrule9115 · a year ago

      Doctors, NPs, or PAs, whoever creates the report/documentation, will still have to sign off on it, so the signature shows the practitioner read, edited, and approved the document. It's the same scenario as with medical transcriptionists, who do the typing or clean up after voice wreck-ognition or the dictator, or with scribes, who work without the practitioner dictating at all, using just an intake sheet and access to the patient's chart to compile the medical data into the reports that get the hospital/clinic closer to billing and reimbursement. Microsoft bought Nuance, a pioneer in voice recognition software and probably the largest prime contractor with hospitals and clinics for transcribing these dictated reports. As for the snippets of conversation or text the woman was talking about: iSoftstone hired people to transcribe 10-second recordings of conversations to help voice recognition (and now, I'm thinking, ChatGPT) build a database to draw from to correct some of its earlier problems.
      IBM has worked on voice recognition since 2001: A Space Odyssey; an IBM employee and friend of Stanley Kubrick came to work on the set as a technical advisor. In the movie, the computer's name was HAL, each of those three letters one letter before I-B-M in the alphabet. He took a leave from IBM's voice recognition work because it wasn't going anywhere and working on the movie set sounded fun. So this solution has been worked on since the 1960s, if not the '50s. The problem is that a computer is a difference engine. There is no "intelligence" or "learning." You have to have a database that covers all contingencies and enough "if then" scenarios to guide the computer through the data and to select the most probable solution when trying to match the request or task to the answer. Refining the question improves the probability of a correct answer.
      HIPAA allows patients to sue for errors in their medical records, and believe me, they abound, but laypeople don't get it. After you go to the doctor or hospital, even the ER, you should quickly get your dictated reports and see what is in them. It could dispel any misapprehensions you have about your care: initial evaluation or H&P, follow-up visits/consults, discharge summary, operative reports, imaging, etc.

  • @brozbro · a year ago

    There will be many chats: ChatMedical, ChatLaw, ChatEngineering, etc., each learning its own specialty...

  • @laurenpinschannels · a year ago

    There's incredible research happening - don't miss the clinical language model research if you're going to use it. It's important to be able to check how reliable the AI will be; it learns surprising, invisible interference patterns.

  • @laurenpinschannels · a year ago +2

    HIPAA has not yet approved ChatGPT, has it?

    • @whatisiswhatable · a year ago +1

      It has been approved in Azure... I saw an article on this.

    • @hongmeixie409 · 6 months ago

      @whatisiswhatable What's the document?

  • @laurenpinschannels · a year ago +1

    Re: the other comment warning of fascism - that's definitely a risk given the rising threat, but it could be worse than that if the AI learns to be authoritarian and then turns it back on the human authoritarians who created it. We do need this sort of advanced AI, but it's important to understand causal validity and how to respect the agentic rights of the patient. Medical safety is the same problem as general inter-agent-systems AI safety: we need to be able to check whether the AI is structured in a way that promotes the patient's agency, rather than controlling them into doing the maximally safe thing and tricking the doctor into doing the controlling. Don't accidentally build the Matrix while trying to make humans live forever - people need to retain their own choice-making, and yet have the tools to live as long as they want healthily and truly understand the world.

  • @cmorth2413 · a year ago +2

    WOW, if this gets my doctor to pay more attention to me…hurrah!

    • @shihtzusrule9115 · a year ago

      It is guaranteed not to. When they go over your chart, think about your case, recap and document your history and objective data from tests and exams, and come up with an assessment and plan for treatment, it makes them think about you. They may not be able to put a face with the name or diagnosis, but they do have to know who they are dictating on. If this allows them to skip that step, the answer is no: they're not going to think about you or your case, or get closer to the ah-ha moment some patients need to reach the right diagnosis and treatment.

  • @chris-hu7tm · a year ago +1

    We need to push for a diagnosis ChatGPT where you can diagnose yourself.

  • @santinorider7536 · a year ago

    This is horrifying.

  • @jamesedmonds926 · a year ago +3

    Tech pharma fascism

  • @laurenpinschannels · a year ago

    The claim about the expected growth of GPT-4 is an exaggerated meme and is wrong; parameter growth is going much more slowly than that.