Can ChatGPT Plan Your Retirement?? | Andrew Lo | TEDxMIT

  • Published: May 4, 2024
  • What does it take for large language models (LLMs) to dispense trusted advice to their human users? Three key features: (1) domain-specific expertise; (2) the ability to tailor that expertise to a user’s unique situation; and (3) trustworthiness and adherence to the user’s moral and ethical standards. These challenges apply to virtually all industries
    and endeavors in which LLMs can be applied, such as medicine, law, accounting, education,
    psychotherapy, marketing, and corporate strategy. In this talk, Prof. Lo focuses on the specific context of financial advice, which serves as an ideal test bed both for determining the possible shortcomings of current LLMs and for exploring ways to overcome them.
    AI, Business, Finance, Money
    "Andrew W. Lo is the Charles E. and Susan T. Harris Professor at the MIT Sloan School of Management, the director of MIT’s Laboratory for Financial Engineering, a principal investigator at MIT’s Computer Science and Artificial Intelligence Lab, and an external professor at the Santa Fe Institute. His current research focuses on systemic risk in the financial system; evolutionary approaches to investor behavior, bounded rationality, and financial regulation; and applying financial engineering to develop new funding models for biomedical innovation and fusion energy. Lo has published extensively in academic journals (see alo.mit.edu) and his most recent book is The Adaptive Markets Hypothesis: An Evolutionary Approach to Understanding Financial System Dynamics. His awards include Batterymarch, Guggenheim, and Sloan Fellowships; the Paul A. Samuelson Award; the Eugene Fama Prize; the IAFE-SunGard Financial Engineer of the Year; the Global Association of Risk Professionals Risk Manager of the Year; one of TIME’s “100 most influential people in the world”; and awards for teaching excellence from both Wharton and MIT. He received a B.A. in economics from Yale University and an A.M. and Ph.D. in economics from Harvard University."
    This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at www.ted.com/tedx

Comments • 5

  • @wildfoodietours6702
    @wildfoodietours6702 3 days ago

    ChatGPT and AI are game changers. Hello, new world!

  • @knightfamily8124
    @knightfamily8124 2 days ago

    I wonder (I don't know) if LLMs can be taught to understand what to do for their clients when there are changes to things like tax codes and laws that they haven't been "trained" on.

  • @BenUK1
    @BenUK1 2 days ago

    Another point, regarding the alignment issue. Why would we want LLMs to copy what humans do, including their mistakes? Just because humans accept a 40% offer on average doesn't make this the mathematically optimal number (I have no idea what the optimal number is). Humans are flawed... I don't want an AI advisor to intentionally mimic those flaws; I'd want it to do a better job than a human, if possible.
    The video seems to suggest that by mimicking humans (or aligning with them) they would meet their fiduciary duty, which indirectly implies that if they gave better advice than a human advisor typically would, then they would not be meeting the fiduciary duty criteria. I'm not convinced by this argument... and even if it is true, then it implies that the rules/criteria are the problem. We shouldn't build systems that meet the flawed criteria, but rather should update the criteria for the modern AI age.

  • @dGooddBaddUgly
    @dGooddBaddUgly 3 days ago

    @15min6sec. Does this mean we are going to use LLMs and make them as selfish as humans? They will understand greed, fear, optimism, and other feelings and make decisions based on that. Computers could make irrational decisions if they experience these human traits, and they might fight for their survival any way possible to eliminate existential threats one way or the other.