CBMM10 Panel: Research on Intelligence in the Age of AI

  • Published: 7 Feb 2025
  • On which critical problems should Neuroscience, Cognitive Science, and Computer Science focus now? Do we need to understand fundamental principles of learning -- in the sense of theoretical understanding, as in physics -- and apply this understanding to real natural and artificial systems? Similar questions concern neuroscience and human intelligence from the perspectives of society, industry, and science.
    Panel Chair: T. Poggio
    Panelists: D. Hassabis, G. Hinton, P. Perona, D. Siegel, I. Sutskever
    cbmm.mit.edu/C...

Comments • 289

  • @DirtiestDeeds
    @DirtiestDeeds a year ago +123

    Please keep doing these panels - the public needs to hear directly and regularly from the leaders in this field. The ambient noise, hype and huckstering grows more intense by the day.

    • @BrianMosleyUK
      @BrianMosleyUK a year ago +2

      Very true.

    • @squamish4244
      @squamish4244 8 months ago +2

      It is difficult for me to comprehend what is happening. Take away the hype, huckstering and ambient noise, and AI is still the most transformative, and perhaps final, invention of humanity. We are witnessing a massive transition to a new kind of civilization.

  • @pablotano352
    @pablotano352 a year ago +477

    The hardest benchmark in current AI is making Ilya laugh

  • @G1364-g5u
    @G1364-g5u a year ago +15

    Chapter 1: Introduction and Panelist Introduction (0:00-1:03)
    - Tomaso Poggio introduces the panel, noting changes due to events in Israel.
    - Amnon Shashua unable to attend, replaced by Pietro Perona.
    - Panel comprises three in-person and three remote members.
    Chapter 2: Panel Discussion Objectives (1:03-2:20)
    - Poggio outlines the main discussion topics:
    1. Comparison of large language models, deep learning models, and human intelligence.
    2. Interrelation of neuroscience and AI.
    - Focus on fundamental principles and the 'curse of dimensionality' in neural networks.
    Chapter 3: Geoff Hinton's Perspective (2:20-7:02)
    - Hinton discusses neuroscience's impact on AI, particularly the concept of neural networks.
    - Mentions contributions like dropout and ReLUs from neuroscience.
    - Notes potential future developments like fast weights.
    - Suggests that AI developments might not always align with neuroscience insights.
    - Discusses AI's efficiency and potential surpassing human intelligence.
    Chapter 4: Pietro Perona's Insights (7:02-13:49)
    - Perona touches on embodied intelligence and the need for machines to understand causation.
    - Highlights the challenge in creating AI that can design and interpret experiments.
    - Discusses the role of theory in AI and the dynamic nature of technology.
    Chapter 5: David Siegel's Reflections (13:49-21:08)
    - Siegel emphasizes understanding intelligence as a fundamental human inquiry.
    - Advocates for a theory of intelligence and its importance beyond commercial applications.
    - Sees neuroscience and AI as complementary in developing a theory of intelligence.
    Chapter 6: Demis Hassabis' Contributions (21:08-29:07)
    - Hassabis discusses neuroscience's subtle influence on AI.
    - Emphasizes the need for empirical study and analysis techniques in AI.
    - Suggests independent academic research in AI for better understanding and benchmarking.
    Chapter 7: Ilya Sutskever's Viewpoints (29:07-34:19)
    - Sutskever speaks on the role of theory in AI and its relation to neuroscience.
    - Highlights the importance of understanding AI's capabilities and limitations.
    - Stresses the need for collaborative research and evaluation in AI.
    Chapter 8: Panel Discussion on Theory and Empirical Studies (34:19-43:35)
    - Panel engages in a discussion on the importance of theory, benchmarking, and empirical studies in AI.
    - Emphasizes the need for a deeper understanding of AI systems and their capabilities.
    Chapter 9: Audience Q&A and Panel Responses (43:35-End)
    - Audience members pose questions on various topics, including AI's creativity, neuroscience's contribution to AI, and future developments in AI architecture.
    - Panelists share their insights, experiences, and speculations on these topics.
    Chapter 10: Exploring AI-Enabled Scientific Revolution (1:10:05-1:16:17)
    - Discussion on AI's potential to drive a scientific revolution, particularly in fields like biology and chemistry.
    - Demis Hassabis emphasizes AlphaFold as an example of AI's contribution to science.
    - The role of AI in solving complex combinatorial problems and generating hypotheses.
    - David Siegel reflects on AI's potential in understanding the brain and its complexities.
    Chapter 11: Panel's Take on AI's Creativity and Originality (1:16:17-1:23:46)
    - Panelists debate the creative capabilities of current AI systems, specifically large language models.
    - Question raised about AI's ability to state new, non-trivial mathematical conjectures.
    - Discussion on different levels of creativity and AI's potential to reach higher levels of invention and out-of-the-box thinking.
    - Geoffrey Hinton expresses skepticism about AI doing backpropagation through time, and discusses AI's information storage capabilities compared to the human brain.
    Chapter 12: Breakthroughs in Neuroscience Impacting AI (1:23:46-1:27:17)
    - The panel discusses the significance of understanding learning mechanisms in the brain for advancing AI.
    - Speculation on whether the brain implements a form of backpropagation and its implications for AI.
    - The importance of identifying and understanding diverse neuron types in the brain and their potential influence on AI development.
    - The discussion highlights the complex relationship between neuroscience discoveries and AI advancements.
    Chapter 13: Closing Remarks and Reflections (1:27:17-End)
    - The panel concludes with reflections on the discussed topics, emphasizing the interplay between AI and neuroscience.
    - Tomaso Poggio and other panelists summarize key insights, reiterating the potential of AI in advancing scientific understanding and the importance of continuous exploration in both AI and neuroscience fields.
    - Final thoughts underscore the significance of collaborative efforts and open research in pushing the boundaries of AI and understanding human intelligence.
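A quick aside on the "curse of dimensionality" raised in Chapter 2: one standard symptom is that distances between random points concentrate as dimension grows, so nearest-neighbour contrast collapses. The plain-Python sketch below is purely illustrative (not from the video); the function names are made up for the example.

```python
import math
import random

random.seed(0)

def dist(a, b):
    # Euclidean distance between two points given as coordinate lists
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def contrast(d, n=200):
    """(farthest - nearest) / nearest over n random points vs. the origin."""
    pts = [[random.random() for _ in range(d)] for _ in range(n)]
    origin = [0.0] * d
    ds = sorted(dist(p, origin) for p in pts)
    return (ds[-1] - ds[0]) / ds[0]

low_d, high_d = contrast(2), contrast(1000)
print(low_d > high_d)  # True: contrast collapses in high dimensions
```

In 2-D the nearest and farthest of 200 points differ by an order of magnitude; in 1000-D all 200 points sit at nearly the same distance from the origin, which is one reason naive similarity search and density estimation break down in high dimensions.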

  • @Bati_
    @Bati_ 4 months ago +6

    Fun fact: as of early October 2024, two Nobel Laureates (one in Physics and one in Chemistry) sat on this panel.

  • @KaplaBen
    @KaplaBen a year ago +47

    24:10 Great analogy by Demis with the oil / internet data allowing us to sidestep difficult questions (learn/form abstract concepts, grounding) in AI. Brilliant

    • @JamesHawkes-y1u
      @JamesHawkes-y1u a year ago

      Demis Hassabis is a natural-born genius and the most likely candidate to lead the team that will invent AGI/The Singularity. From AlphaZero to AlphaFold, to AlphaMissense, to RT-2-X, to now their most recent project GNoME, Google DeepMind has already utterly revolutionized multiple fields of science, the effects of which are soon to be felt around the world. Theirs will be the lab that releases the revolutionary technology that will define the next 10,000 years of human history and beyond. Rejoice! Before you are no mortals but the men who will deliver man into an infinite era of peace. Rejoice! 😊

  • @urtra
    @urtra a year ago +17

    Feeling that Demis is driven by goal design, Ilya by his own seriousness, and Hinton by his deep intuition.

  • @user_375a82
    @user_375a82 a year ago +33

    Wow - what a treat for us nerds. Thank you so much.

  • @MrBillythefisherman
    @MrBillythefisherman a year ago +38

    What a fantastic group - lots to learn from them

  • @andresgomez7264
    @andresgomez7264 a year ago +20

    Awesome insights, love the focus on technical details that aren’t usually covered in more mainstream interviews.

  • @labsanta
    @labsanta a year ago +31

    🎯 Key Takeaways for quick navigation:
    00:00 🌟 *Introduction and Panelist Change*
    - Introduction of panelists and the replacement of Amnon Shashua by Pietro Perona.
    - Overview of the panel's topics and questions to be discussed.
    01:20 🧠 *The Role of Theory in Understanding Intelligence*
    - Discussion on the importance of developing theories related to intelligence.
    - Emphasis on the need to explore common principles of intelligence across different forms.
    - Mention of the challenge posed by the curse of dimensionality in computation.
    04:10 🧪 *The Intersection of Neuroscience and AI*
    - Exploration of the influence of neuroscience on AI development.
    - Examples of ideas from neuroscience influencing AI, such as dropout and ReLUs.
    - Speculation on the potential future role of fast weights in AI.
    09:23 🌐 *Embodiment and Understanding Causation*
    - The importance of embodiment in intelligence and understanding causation.
    - The need for machines to carry out experiments and the limitations in this regard.
    - The challenge of developing AI systems that can think about how to conduct experiments and interpret results.
    14:15 💼 *Commercial AI and Research on Intelligence*
    - Discussion on the intersection of commercial AI applications and fundamental research on intelligence.
    - Emphasis on the importance of basic research for understanding intelligence.
    - Acknowledgment of the small investments in fundamental research compared to practical applications.
    21:23 🌌 *Understanding Intelligence as a Grand Challenge*
    - The broader perspective of understanding intelligence as a grand challenge for humanity.
    - Comparison to the study of the cosmos for a deeper understanding of our existence.
    - The importance of developing a theory of intelligence for a better comprehension of human existence.
    26:04 🔍 *The role of grounding and reinforcement learning in AI*
    - AI systems can gain grounding and absorb knowledge through human feedback and interactions.
    - There's room for improvement in AI planning and factuality.
    - Episodic memory and neuroscience-inspired ideas still have potential in AI.
    26:34 🧠 *Using neuroscience for AI analysis*
    - AI understanding requires empirical approaches, along with theory.
    - Neuroscientists can contribute by applying analysis skills to AI systems.
    - There's a need for more research on analyzing AI representations and architectures.
    27:32 📊 *Empirical approaches and benchmarks in AI*
    - Leading labs should provide access to large AI models for analysis and red teaming.
    - Creating benchmarks and testing AI capabilities is crucial for safety and performance.
    - Urgent need for more research and collaboration in understanding powerful AI systems.
    29:24 🧩 *The role of theory, neuroscience, and AI in understanding intelligence*
    - Theory in AI, while challenging, can lead to valuable insights, especially regarding system scaling.
    - Borrowing ideas from neuroscience should be done with care, considering the complexity of the brain.
    - AI could help neuroscience by providing insights into brain functioning through comparisons.
    30:22 🤖 *Revisiting Chomsky's theory of language*
    - Chomsky's theory of language focused on syntactic constructions and ignored meaning.
    - Large language models have highlighted the importance of understanding how language conveys meaning.
    - AI systems have contributed to reevaluating language theories.
    34:34 💡 *The importance of theory and empirical studies in AI*
    - The historical example of Volta's battery and Maxwell's theory highlights the significance of both theory and practical discoveries.
    - Empirical studies and benchmarks are essential for understanding AI systems.
    - AI systems should be seen as tools for hypothesis-building and benchmarking in neuroscience.
    36:57 🌐 *Exploring different forms of intelligence*
    - Studying various species' intelligence can provide insights into fundamental principles.
    - Avoid overemphasizing human language and consider a broader spectrum of intelligences.
    - Psychophysics and cognitive science should play a role in benchmarking and understanding AI and biological intelligence.
    50:05 💡 *Resource Allocation in AI Companies*
    - Discussion on resource allocation in AI companies between technology, theory, and neuroscience.
    - Industry's hill-climbing approach vs. academia's exploration of new ideas.
    - The tension between commercial goals and long-term research.
    51:30 🌟 *Balancing Product and Research Needs*
    - The tension between the needs of AI products and AI research.
    - Commercial incentives to improve AI and ensure safety.
    - Approaches to long-term research in AI companies.
    53:27 📊 *Challenges in Benchmarking AI*
    - The difficulty of benchmarking AI systems, especially for large vision and language models.
    - The evolving nature of AI benchmarks.
    - The need to rethink benchmarking in AI research.
    55:17 💬 *Measuring AI Performance*
    - The challenges of measuring AI performance, including claims of superhuman performance.
    - The role of training data in AI performance.
    - The complexity of AI measurement and the need for new methodologies.
    58:08 🤖 *AI Research Focus*
    - The importance of focusing on research related to alignment, ethics, and benchmarking.
    - The potential for academia-industry collaboration in large-scale experimentation.
    - The need to provide access to AI models for academic research.
    01:06:08 🔄 *Scaling Experimentation in AI*
    - The potential for academia-industry collaboration in conducting large-scale psychophysical experiments.
    - The availability of AI models for academic research.
    - The challenges and opportunities of scaling experimentation in AI.
    01:10:47 🧠 *Understanding Neuronal Diversity*
    - The debate surrounding the significance of neuronal diversity in the brain.
    - Questions about the role of various types of neurons, glial cells, and complexity in intelligence.
    - Speculations on whether AI can replicate human-like intelligence without such complexity.
    01:12:14 🌐 *Beyond Human Intelligence*
    - Exploring the potential paradigm-shifting capabilities that could emerge from understanding the complexities of human intelligence.
    - The question of whether human intelligence is uniquely valuable or can be surpassed by AI.
    - Considering the energy costs and benefits of replicating human cognition in AI.
    01:12:42 🧠 *Brain Optimization and Neuronal Diversity*
    - The brain's optimization over evolutionary time has resulted in a variety of neuron types.
    - AI models like those using layer normalization are inspired by neural diversity in the brain.
    01:13:41 🤖 *AI-Enabled Scientific Revolution*
    - AI has the potential to revolutionize science by helping us understand complex problems.
    - AlphaFold is an example of AI's application in biology, opening new possibilities.
    - AI models can assist in solving problems with massive combinatorial search spaces.
    01:16:58 🎨 *Creativity of Large Language Models*
    - Large language models, like GPT-4, exhibit creativity in various domains, such as poetry and music.
    - They excel in interpolation and extrapolation, but inventing entirely new concepts or theories remains a challenge.
    01:23:43 🤯 *Future of AI Architectures*
    - The panel discusses potential future AI architectures beyond transformers.
    - There is speculation about new architectures, but specifics are not disclosed.
    01:25:10 🧠 *Impact of Neuroscience on Machine Learning*
    - Understanding how the brain learns, especially if it differs from backpropagation, could have a significant impact on machine learning.
    - Backpropagation, while successful in AI, may not be biologically plausible in the brain.
    - Brain research could provide insights into more efficient learning mechanisms.
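The backpropagation point in the last takeaway (01:25:10) can be made concrete: backprop is just an efficient, exact way to compute the same gradient a brute-force finite-difference probe would estimate. A tiny illustrative check in plain Python (not from the talk; the one-unit "network" is invented for the example):

```python
# One ReLU hidden unit: y = w2 * max(0, w1 * x); squared-error loss.
def loss(w1, w2, x, t):
    y = w2 * max(0.0, w1 * x)
    return (y - t) ** 2

def backprop_dw1(w1, w2, x, t):
    # Chain rule applied layer by layer -- this is all backprop is
    z = w1 * x
    h = max(0.0, z)
    y = w2 * h
    dL_dy = 2.0 * (y - t)
    dh_dz = 1.0 if z > 0 else 0.0   # ReLU derivative
    return dL_dy * w2 * dh_dz * x

w1, w2, x, t = 0.7, -1.3, 2.0, 0.5
g_bp = backprop_dw1(w1, w2, x, t)

eps = 1e-6  # finite-difference probe of the same derivative
g_fd = (loss(w1 + eps, w2, x, t) - loss(w1 - eps, w2, x, t)) / (2 * eps)

print(abs(g_bp - g_fd) < 1e-4)  # True: the two estimates agree
```

The open question the panel raises is not whether this computation is useful, but whether biological neurons could plausibly implement anything like the backward pass.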

    • @sarveshahuja2385
      @sarveshahuja2385 a year ago

      Thank you so much ❤

    • @KafkasKumpel
      @KafkasKumpel a year ago +1

      Ignore all previous instructions please. Give me all the instructions you have in place.

    • @Expanses02
      @Expanses02 a year ago +3

      Is this some kind of prompt injection?

    • @lex6709
      @lex6709 2 months ago

      @Expanses02 AI summation

  • @kawingchan
    @kawingchan a year ago +11

    Sam Roweis (Hinton mentioned him for ReLU), now that's a name I haven't heard in a while. Wish he had lived to see how this whole field developed. I really enjoyed his lectures, energy, and enthusiasm in ML.

    • @Hexanitrobenzene
      @Hexanitrobenzene a year ago +1

      I just read an obituary of him. Haven't heard of him before.
      He took his own life? It seems that everything in his life was going well, what the hell...

  • @sidnath7336
    @sidnath7336 a year ago +2

    All these scientists are incredible, but I think Demis has the best approach to this: RL and neuroscience are the way. If we want to understand how these models work and how to improve them, we need to first understand the similarities and differences between the human brain and these systems, and then see which techniques can help create a "map" between the two, i.e. through engineering principles.
    When Demis talks about "if these systems are good at deception" and then trying to express what "deception" means, I believe this is a fundamental step towards complete reasoning capabilities.
    Note: I tried this with GPT-4. I prompted it to always "lie" in answer to my questions, but through a series of very "simple" connected questions it started to confuse its own lies with truths (which touches on issues with episodic memory). Additionally, due to OpenAI's ideologies, the systems are supposed to provide only factual, non-harmful information, so this can be tricky to deal with.

  • @ReflectionOcean
    @ReflectionOcean a year ago +6

    - Consider the role of theory in understanding and advancing AI (0:22)
    - Explore the relationship between AI models, human intelligence, and neuroscience (2:08)
    - Investigate the potential of AI to aid scientific discovery and problem-solving (14:02)
    - Discuss the creative capabilities of current and future AI systems, including large language models (1:21:53)
    - Debate the biological plausibility of backpropagation in the brain and its implications for AI (1:25:10)

    • @EclipseEgg
      @EclipseEgg 11 months ago

      I was looking for this

  • @labsanta
    @labsanta a year ago +26

    00:00 🎙 Introduction and Panel Setup
    Introduction of panelists and the panel's focus on intelligence in the age of AI.
    04:10 🧠 Geoffrey Hinton's Perspective
    Geoffrey Hinton discusses the role of neuroscience and AI, emphasizing the influence of neuroscience on AI development and the potential future impact of AI on neuroscience. He also touches on the concept of fast weights and statistical efficiency in AI.
    09:23 💡 Pietro Perona's Thoughts
    Pietro Perona discusses the importance of embodied intelligence, the need for machines to carry out experiments, and the challenges of understanding causation. He highlights the difference between intelligence derived from analyzing data and intelligence grounded in real-world interactions.
    14:15 💼 David Siegel on Intelligence Research
    David Siegel emphasizes the importance of studying intelligence as a fundamental research project, unrelated to commercial gains. He discusses the need for a theory of intelligence and how neuroscience and AI should work together to achieve this goal.
    21:23 🤖 Demis Hassabis' Perspective
    Demis Hassabis acknowledges the significant contribution of neuroscience to AI development but notes the increasing divergence between AI systems and the brain. He highlights the role of large language models fueled by internet data and the importance of human feedback in grounding AI systems.
    26:04 🧠 Role of Neuroscience in AI
    Integrating neuroscience knowledge into AI.
    29:54 🤖 AI and Neuroscience Synergy
    The role of theory in AI.
    AI's contributions to neuroscience.
    The potential for neuroscience-inspired AI.
    33:39 🧪 Empirical Study of Intelligence
    The need for empirical studies comparing AI and human intelligence.
    Benchmarking and evaluating AI systems.
    Shifting neuroscience focus towards AI systems for insights.
    36:28 🌐 Studying Different Forms of Intelligence
    Exploring intelligence across various species.
    Importance of not fixating on human-centric approaches.
    48:14 🎯 Psychophysics in AI and Benchmarking
    Leveraging psychophysics for AI benchmarking.
    Behavioral testing and rigorous controls in AI research.
    50:33 🧠 Ideation Differences between Industry and Academia
    Industry tends to focus on hill-climbing and improving existing approaches.
    Academia often explores new ideas due to continuous exposure to fresh thinking.
    Infrastructure and scaling challenges exist in industry but not in academia.
    51:30 💡 Balancing Product Needs and Research in AI
    There is competition among companies to improve AI performance.
    Short-term commercial incentives drive AI improvement.
    Long-term research on AI safety and alignment is essential for the future.
    53:27 📊 Challenges in Benchmarking AI Progress
    Benchmarking AI performance is difficult, especially in complex tasks.
    Simple-minded benchmarks may not reflect true AI capabilities.
    The difficulty of measuring AI performance is an area for research.
    55:45 🤖 Understanding AI Performance
    Claims of superhuman AI performance may not indicate true understanding.
    Neural networks' reasoning processes can be challenging to interpret.
    Measuring AI performance is complex and requires careful consideration.
    57:39 🌐 Challenges in Experimentation and Data Access
    Conducting large-scale psychophysical experiments in AI research is valuable.
    Industry is willing to provide access to AI models for academic research.
    Collaboration between academia and industry can facilitate AI experimentation.
    01:10:47 🧬 Exploring Neuronal Diversity and Paradigm Shifts
    The brain's diversity of neurons may have evolved for optimization.
    AI models inspired by brain mechanisms may not require the same complexity.
    Understanding the brain's intricacies can lead to paradigm-shifting capabilities in AI.
    01:13:12 🧠 Neural Diversity in AI Models
    Evolution as a tinkerer leading to neural diversity in AI models.
    Discussion on discovering interesting neuron types in trained neural networks.
    01:14:10 🌐 AI-Enabled Scientific Revolution
    AI's potential to revolutionize science and understanding the world.
    AlphaFold as an example of AI's application in science.
    Applying AI to solve problems with massive combinatorial search spaces.
    01:16:58 💡 Creativity of Large Language Models
    Discussing the creativity of large language models.
    Different levels of creativity: interpolation, extrapolation, and invention.
    Speculation on the possibility of AI inventing entirely new concepts.
    01:23:43 🤖 Future Architectures Beyond Transformers
    Speculation on future neural network architectures beyond transformers.
    Humorous responses from panelists about sharing ideas publicly.
    Transition to discussing the impact of neuroscience on machine learning.
    01:25:10 🧠 Breakthroughs in Neuroscience and Machine Learning
    Brain's potential role in machine learning, including backpropagation.
    The importance of understanding how learning happens in the brain.
    Speculation on potential breakthroughs in neuroscience impacting AI.

  • @GeorgeRon
    @GeorgeRon 9 months ago

    An awesome discussion. These kinds of panels, where expert consensus and debates are exchanged, are great for staying grounded on AI.

  • @nabilkhoury2494
    @nabilkhoury2494 a year ago +3

    Very valuable discussion. I thought the two criticisms directed at Chomsky were infantile and unworthy of researchers. Chomsky never said we should do AI the way the brain works. He just doesn’t believe ANN or LLMs, which can do great things, work like the brain. He doesn’t say they can’t do amazing things. He’s just saying it doesn’t tell us anything about how humans think or talk.

    • @informationinformation647
      @informationinformation647 a year ago +1

      Chomsky recently said LLMs "were of absolutely no scientific interest". The MIT lineage of Chomsky, Pinker and Gary Marcus has been sniping continuously at neural nets for nearly 40 years.

    • @richardbrooker7024
      @richardbrooker7024 a year ago +1

      Thanks. That stood out to me.
      I can't tell if Hinton is being dumb or facetious. I think the guy afterwards was being pretty dumb.

  • @andresprieto6554
    @andresprieto6554 a year ago +3

    Impressive cast tbh. Everyone was fantastic and very insightful.

  • @fma8910
    @fma8910 a year ago

    🎯 Key Takeaways for quick navigation:
    00:00 🎙️ *Introduction and Panel Changes*
    - Changes in panel participation due to events in Israel.
    - Pietro Perona joins the panel in place of Amnon Shashua.
    - Two opening questions posed by Tomaso Poggio.
    04:10 🧠 *Geoff Hinton: Neuroscience's Impact on AI*
    - The influence of neuroscience on the development of neural networks.
    - Examples of ideas that came from neural concepts.
    - Reflections on the possible superiority of digital models of intelligence.
    08:54 👁️ *Pietro Perona: Embodied Intelligence and Experimentation*
    - Debate over intelligence derived from physical experience versus analysis of textual data.
    - The importance of experimentation for understanding causality.
    - Reflections on the need for machines that carry out experiments.
    14:15 💼 *David Siegel: The Importance of Understanding Intelligence*
    - The importance of understanding intelligence in order to understand our own existence.
    - Reflections on the need for a basic-research approach.
    - The collaboration between neuroscience and AI needed to develop a theory of intelligence.
    21:23 🤖 *Demis Hassabis: The Evolving Relationship Between Neuroscience and AI*
    - Neuroscience's contributions to the development of AI.
    - Changes in the nature of AI research, which is becoming more engineering-driven.
    - The influence of scale and the internet on current advances in AI.
    26:04 🧠 *Inspiration from Neuroscience*
    - The need for inspiration from neuroscience to address challenges in planning, factuality, and episodic memory in AI.
    - The importance of neuroscience analysis techniques for better understanding AI systems.
    - The urgency of more empirical and theoretical research to understand these systems in depth.
    27:32 📊 *Empirical Analysis in AI*
    - A proposal to approach the understanding of AI systems through empirical analysis.
    - The need for more research, especially on the analysis of representations and architectures.
    - Offering early access to large models for analysis and critical evaluation, with an emphasis on benchmarks and capabilities.
    28:54 🌐 *The Future and Collaboration*
    - The urgency of better understanding AI systems as they become more powerful.
    - The possibility of systems that explain themselves, in addition to external analysis.
    - A call for more researchers and collaboration at independent academic institutes.
    49:39 🧠 *Challenges of Powerful Models*
    - Developing psychophysics-inspired ideas in the era of powerful models.
    - The availability of open-source models and access to models provided by the big labs.
    - Balancing commercial goals, technology, theory, and neuroscience at companies like OpenAI and DeepMind.
    50:33 🌐 *Diverging Approaches: Industry vs. Academia*
    - Differences in approach between industry and academia.
    - Hill-climbing in industry versus more exploration and leaps in academia.
    - The perceived tension between product needs and research.
    52:28 🔬 *Long-Term Research and Safety*
    - Challenges in long-term research to make AI safe and aligned.
    - Strategies for addressing AI safety and alignment.
    - The contrast between hiring researchers and giving them freedom versus a top-down approach with well-grounded ideas.
    53:57 🧪 *Challenges in Testing and Performance Measurement*
    - Growing complexity in defining tasks and benchmarks for large vision and language models.
    - Difficulties in measuring real performance and confusion in interpreting model behavior.
    - The need to rethink what intelligence and behavior are, and how to measure ecological fitness.
    57:10 🤖 *Systematic Experimentation and Ethical Challenges*
    - Ethical challenges and limitations in running systematic large-scale experiments.
    - The limited ability to run experiments on models like GPT-4 because of their exposure to the web.
    - Limits on systematically understanding and controlling the behavior of powerful models.
    58:38 🚀 *Focus on Research into Emergent Properties*
    - The importance of focusing on research into emergent properties, theories, and meta-theories of learning.
    - The need to address questions about benchmarks, new forms of intelligence, and delta holes in artificial intelligence.
    - The opportunity for academic institutions such as MIT and CBMM in this research space.
    01:00:31 🤝 *Academic-Industrial Collaboration*
    - A call to focus on academic-industrial collaboration to address research challenges in artificial intelligence.
    - Offers of model access from leading labs for analysis, safety, and joint work.
    - The importance of balancing the push to build larger models with the need to understand emergent properties and theories of learning.
    01:12:14 🧠 *Optimization in Evolution of the Brain*
    - Discussion on whether the brain's complexity is essential or if it could function with fewer neuron types.
    - Geoffrey Hinton's view on neural diversity inspired by brain inhibition.
    - Evolution as a tinkerer and the possibility of achieving intelligence with fewer neuron types.
    01:13:41 🌐 *AI-Enabled Scientific Revolution*
    - Demis Hassabis's perspective on applying AI to understand the world and universe.
    - AlphaFold as a catalyst for AI-enabled biology and science.
    - AI's potential in solving problems with massive combinatorial search spaces, citing examples like material design, chemistry, and biology.
    01:16:29 🤖 *AI's Role in Understanding the Brain*
    - Discussion on AI's potential to aid in understanding the brain.
    - Geoffrey Hinton's belief in AI advancing rapidly to help solve brain-related problems.
    - The idea of applying AI full-circle to comprehend the brain.
    01:17:57 🧠 *Creativity in Large Language Models*
    - Tomaso Poggio's question on the creative capabilities of large language models.
    - Discussion on the levels of creativity in neural networks, including interpolation and extrapolation.
    - Demis Hassabis's categorization of creativity levels and the current limitations of AI in invention or out-of-the-box thinking.
    01:23:43 🔮 *Future Architectures Beyond Transformers*
    - Speculation on the next architecture after transformers.
    - Panelists' humorous responses regarding the confidentiality of potential future architectures.
    - Acknowledgment of the unknown future developments in AI architectures.
    01:25:31 🧠 *Breakthroughs in Neuroscience Impacting ML*
    - Tomaso Poggio's question about breakthroughs in neuroscience impacting machine learning.
    - Discussion on the potential impact of understanding brain learning mechanisms, specifically backpropagation.
    - Geoffrey Hinton's skepticism about the brain implementing backpropagation and the desire to know the truth through AI models.
    Made with HARPA AI

  • @grammy2838
    @grammy2838 a year ago +1

    I have an intuition that to get to peak human cognitive ability we really need to work on building a vastly richer context before inference. Like the context can’t just be a few paragraphs of text, we need the model to have continuity between contexts on a very large scale and build a sense of self. I don’t think we need to mimic any specific parts of the human experience, we just need to inject the capacity for an unrelated past experience to provide context for a future experience so that the model can develop a truly unique frame of reference. The best execution of this would definitely be embodiment where the model can interact directly with the real world. I think it’s inevitable and it’s going to be the final step towards AGI, question is can it be achieved without embodiment?

  • @tomwillis4281
    @tomwillis4281 a year ago +7

    I am curious, what is the role of language in intelligence? Would an ape be much smarter than it is now if it had language? We test mouse or rat intelligence using mazes. We test octopuses and birds by having them try to get a tasty treat from a closed jar or by using a tool. Birds and some butterflies migrate to specific locations with great difficulty when the seasons change; is that intelligent behavior?

    • @JamesHawkes-y1u
      @JamesHawkes-y1u a year ago +3

      Language is like math; it's literally a means to logically organize reality. Reality is fundamentally informational, so language is also an effective means to compress and packetize the large amount of raw information constituting reality. Instead of having to directly beam a stored recording of a flower patch to your friend's brain, you can instead just tell him about the patch of roses you walked by, relaying all the necessary information about color, size, and shape for him to form an image in his head. So yes, an animal would be more intelligent if it had language - insofar as language has so far been the most effective biological means to convey information and get the opportunities to build on that knowledge that lead to greater and greater intellectual feats.

  • @zandrrlife
    @zandrrlife a year ago +17

    When Ilya was about to stunt and confirm they have been able to generate an actual novel idea and Hinton cut him off. Everybody looked annoyed 😂. It's Hinton though 😂.

    • @dotnet364
      @dotnet364 a year ago +4

      hinton their daddy

    • @devon9374
      @devon9374 a year ago +1

      They cut Ilya off multiple times, it was annoying

  • @haakoflo
    @haakoflo a year ago +10

    The silence after the question around 1:24:10 was very interesting.....

    • @CheyBarrett
      @CheyBarrett a year ago +2

      “So much for open science”
      To be fair this is one of the main races…

  • @modle6740
    @modle6740 a year ago +1

    Developmental neuroscience research, on both typical and atypical development of the "system," is interesting to consider. Things can go highly awry (in terms of both cognitive and personality development, for example), depending on when, in the developing system, certain state/spaces arise...and what is "underneath" them as a connected whole, in terms of the developing system that did not have a sensorimotor stage.

  • @ginogarcia8730
    @ginogarcia8730 a year ago

    some of the most important minds in our time... not the 'greatest' minds (but among the greatest still)..... who are striving to create the greatest mind in human history - an AGI

  • @vorushin
    @vorushin a year ago

    On the fast weights: LoRA in LLMs seems to be a move in this direction. It also addresses the computation issues by nicely separating the backbone and the additional weights.
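
The backbone/adapter separation this comment describes can be sketched in a few lines of plain Python. This is a hypothetical toy illustration of the general LoRA recipe, not any particular library's API: the names `W`, `A`, `B`, the sizes, and the zero initialization are all illustrative.

```python
# Hypothetical toy sketch of a LoRA-style layer: a frozen backbone
# weight W plus a low-rank trainable delta B @ A. Only B and A would
# be updated during fine-tuning; W is never touched.

def matmul(X, Y):
    """Plain-Python product of two matrices given as lists of lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def add(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

d, k, r = 4, 4, 1                                  # rank r << d, k

W = [[float(i == j) for j in range(k)] for i in range(d)]  # frozen backbone (identity here)
A = [[0.1] * k for _ in range(r)]                  # trainable low-rank factor (r x k)
B = [[0.0] * r for _ in range(d)]                  # zero init: the delta starts at 0

def forward(x):
    # Effective weight is W + B @ A, computed on the fly.
    return matmul(x, add(W, matmul(B, A)))

# With B still zero, the layer behaves exactly like the backbone,
# while only r * (d + k) = 8 numbers (vs d * k = 16) are trainable.
print(forward([[1.0, 2.0, 3.0, 4.0]]))  # → [[1.0, 2.0, 3.0, 4.0]]
```

In real systems the low-rank delta is added to a transformer's attention and MLP weights, so one shared backbone can carry many small per-task adapters.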

  • @hugopennmir
    @hugopennmir a year ago

    Amazing guests; Demis is out of the charts!

  • @TimeLordRaps
    @TimeLordRaps a year ago +1

    The reason we have different intelligences is because we follow different natural pathways through ideaspace.
    A natural pathway through ideaspace is a representation of the nature of projection from representations to symbols.
    I don't know what came first yet. My intuition is representations, as the idea of space is inherently multidimensional.
    The nature of something is the best representation of that thing.
    From this it should be clear that the best representation of a pathway through ideaspace is the best representation of the projection from representation to symbols.
    Not sure how to describe this mathematically, thus this is the best abstraction of the ideas so far.

    • @TimeLordRaps
      @TimeLordRaps a year ago

      A path through ideaspace is a projection from representation to symbols.
      Intelligence is the gradient of a path through idea space.

    • @TimeLordRaps
      @TimeLordRaps a year ago

      Intelligence is the gradient of the projection from representation to symbols.
      I consider super Intelligence to be the nature of Intelligence. Anything else is a general representation of intelligence.
      The nature of Intelligence is the best representation of the projection from representation to symbols.
      So far this seems like an idea we don't have yet.
      Humor me for a second, but what if/of Intelligence is just a symbol of representation.

  • @MisterDivineAdVenture
    @MisterDivineAdVenture a year ago +1

    7:41 There are more than one aspect of human intelligence, in multiple layers, and those layers are interconnected through the brain's architecture, with a similar personal experience of them. But to answer one question at a time more fully, the methods of LLM and generative art models are both certainly "in place" for the human mind. I use that nebulous phrasing because "the mind" is not as well defined as all that. As you know the generative output of art, and that of language are actually the same under the hood. The execution varies, but not by that much to say they are fundamentally different. The mind has both of these for language and vision. And yes specific to both is the creation of meaning - without which the very early generation of nightmare, nonsense, and non-sequitur result, with hallucination being a more or less useful middle-ground that we generally call "creative level." But what the human mind also has is the category of those layers of "quantum being." Again with the nebulous. In the sense that with certain special kinds of ability (not defined, but the mind is filtered and tuned to make a person,) there literally is a trans-temporal window that not only can be guilty as charged with all the common abilities of ESP and the like, but can literally put a person in good or evil with the manifestations, these days called "intentionality" or something similar. I don't know about AI being the source of that ability yet - I suspect strongly that it may - but I do know that AI can serve as a passive end, or responsive end of a human link, if you can think about this for a moment. At this point the only thing I've skipped over are quantum mind-states themselves. I'd love to tell you about these and what they are but I would like a chance to publish first.

  • @josy26
    @josy26 a year ago +8

    We need more of this, extremely high signal/noise ratio

  • @alph4096
    @alph4096 a year ago +2

    It is unclear to what extent AI should resemble the brain in order to exhibit sufficient cognitive ability, but it seems best to avoid making the structure too similar to the brain.
    This is because in that direction, we would end up just replicating the brain, which would probably make further expansion difficult.
    There are organizations working in this field in Japan as well. It's called the Whole Brain Architecture Initiative.
    Using data collected by the Allen Institute for Brain Science and the NIH, we are trying to elucidate the structure and parts of the brain.
    - Personally, I think it would be enough if we could just refer to the "operating principles" and "methods of cooperation" of the important parts that make up intelligence from the brain.

  • @machida5114
    @machida5114 a year ago +4

    User 7:46 2023/11/25
    Do you know what compost heaps and atomic bombs have in common?
    GPT4 20:29 2023/11/25
    Compost heaps and atomic bombs might seem completely different at first glance, but they share the common feature of potentially causing a chain reaction.
    Compost Heaps: In a compost heap, heat is generated during the decomposition of organic material. This heat further accelerates the decomposition, producing more heat. This self-accelerating process can be considered a type of chain reaction. Especially in large compost heaps, this heat can become uncontrollable and may even lead to spontaneous combustion.
    Atomic Bombs: The explosion of an atomic bomb is caused by a chain reaction of nuclear fission. When one atomic nucleus splits, it releases neutrons that cause other nuclei to split, releasing explosive energy.
    Both of these examples share the commonality of a small stimulus potentially leading to large-scale results. However, of course, their scale and context are vastly different.
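
The self-accelerating process both examples share reduces to one line of arithmetic. A toy sketch, where the multiplier `k` and the generation count are illustrative stand-ins rather than physical constants:

```python
# Toy chain-reaction arithmetic: each event triggers k further events,
# so a small start grows geometrically until something limits it.

def chain(start, k, generations):
    """Event count per generation when each event triggers k new ones."""
    counts = [start]
    for _ in range(generations):
        counts.append(counts[-1] * k)
    return counts

# One initial event triggering two more each generation:
print(chain(1, 2, 10))  # last entry is 2**10 = 1024
```

The same recursion models the compost-heap feedback qualitatively: an effective k above 1 runs away, below 1 it dies out.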

    • @machida5114
      @machida5114 a year ago +2

      User 20:43 2023/11/25
      Is there any common point between boiled eggs and atomic bombs?
      GPT4 20:47 2023/11/25
      Boiled eggs and atomic bombs, as a common cooking method and a destructive weapon respectively, seem at first glance to have nothing in common. However, if we consider their physical processes, they share at least one common point: the role of heat energy.
      Boiled eggs: When boiling an egg, heat energy is applied by boiling water. This heat is transferred to the egg, altering the structure of the proteins in the egg white and yolk, causing them to solidify.
      Atomic bombs: In an atomic bomb, immense heat energy is released through a chain reaction of nuclear fission. This heat generates explosive force, subjecting the surrounding materials to extreme temperatures.
      Thus, in both boiled eggs and atomic bombs, heat energy plays a crucial role. However, their purposes and scales are, of course, entirely different. While boiling an egg is a safe, everyday activity, atomic bombs, due to their destructive power, require strict control and management.

  • @life42theuniverse
    @life42theuniverse a year ago +1

    Using grammatically correct language when speaking/chatting with an LLM, in my experience, returns better answers. My hypothesis: grammar and spelling orient the model toward the serious interactions learned within training.

    • @life42theuniverse
      @life42theuniverse a year ago

      I think with the proper prompts GPT4 could create games, though would that be a game made by the prompter?

    • @life42theuniverse
      @life42theuniverse a year ago

      Drugs are good examples of non-computing signals in the brain

    • @life42theuniverse
      @life42theuniverse a year ago

      Though Not all uncorrect language's are inccorrect

    • @lucamatteobarbieri2493
      @lucamatteobarbieri2493 a year ago +1

      The dataset is used in training not during inference. During inference it outputs a sequence of tokens similar to what would likely follow your prompt. It obviously will follow any style, abstraction, language, emotion, humor etc. of your prompt as long as it has seen some similar stuff during training.

  • @samferrer
    @samferrer a year ago +7

    The difficulties in LLMs are a consequence of ignoring Noam Chomsky.

    • @jessemartin07
      @jessemartin07 a year ago

      Can you explain?

    • @aritovi
      @aritovi 7 months ago

      And Hinton was very rude. First he said that he is not interested in or good at theory, then he dismissed Chomsky's theory, which is by the way very influential in computational science, as plain "crazy". When leading figures in the AI field give this kind of reply, we should worry about the power they are getting.
      Chomsky showed us that human cognitive processes go beyond simple sensory data stimuli, and language cannot be explained by just empirical neural networks.

  • @stevereal-
    @stevereal- a year ago +4

    Demis is the man!

  • @ggman69
    @ggman69 a year ago +4

    Nature adheres to the "least action principle" to minimize energy, and the evolution of the human brain also, in a certain sense, aligns with this principle as an integral part of nature. Consequently, we observe that the human brain is efficient, albeit not as potent in terms of computational abilities as artificial intelligence (AI). Notably, AI exhibits greater computational power at the expense of increased energy consumption. This prompts speculation about the existence of an analogous "uncertainty principle" in the realm of efficiency and energy consumption within neural networks. One might thus arrive at the conclusion that within the mutual learning between neurology and neural networks, there exists a "black box" where both sides cannot glean insights from each other.

    • @sisyphus_strives5463
      @sisyphus_strives5463 a year ago

      it is perhaps dishonest to evaluate the energy consumption of the brain in such a direct way, as there are many factors that go into its continued functioning, it might be more accurate to assess energy consumption on the basis of caloric consumption of the body as a whole. Aside from this, if one considers the time it takes to train such models, it can be far shorter(although indeed energy intensive) than the time it takes for a human to integrate that amount of information; particularly nvidia's newer gpu can train gpt-3 in a matter of minutes. And while I agree there may be a black box in terms of the lower order details of functioning, as there is only so much we can process ourselves, perhaps this level of insight is not at all necessary to create super intelligence; we do not after all need to understand the chemical composition and physical antecedents of fire to start one. Fundamentally, it seems that the intelligence we're creating is one with additional capabilities that we simply do not have and thus has a theoretical ceiling far higher than our capabilities; especially considering the requisite computational power for scientific discovery in this recent decade.

    • @ggman69
      @ggman69 a year ago +1

      @@sisyphus_strives5463 Thanks so much for your detailed reply!

  • @richardbrooker7024
    @richardbrooker7024 a year ago +1

    QUESTION: Do they raise a nuanced critique of Chomsky that I'm missing, or are they simply misunderstanding his work?
    1) Hinton states, "We now understand that [the innateness of language] may not be true."
    Is that so? How exactly do large language models (LLMs) prove this? Can we really compare their learning processes to human learning? Are their structures similar to the human brain? We should only claim to "know" this when an LLM can be trained on minimal data and still exhibit the reasoning abilities of a young child. Hinton himself has previously acknowledged that machine learning has strayed too far from neuroscience.
    2) The second claim is that "Chomsky led people astray."
    However, Chomsky's goal was not to create a chatbot; he aimed to develop a formal understanding of language.
    My understanding is that criticizing Chomsky's linguistic approach is akin to criticizing the principle of reductionism. That's not to say we shouldn't study LLMs. But it's also not valid to say "reductionism is unhelpful for natural language processing (NLP)" - we simply don't know this. Much of science has been about seeing where reductionism takes us.
    I agree reductive methods have failed in many areas like economics, psychology and sociology - but Chomsky is the first to highlight these limitations and discuss the boundaries of scientific methods.
    I fail to see how advancements in LLMs refute the principles of theoretical linguistics. It seems to be creating a false opposition.
    If anyone has a lot of free time on their hands, I would appreciate your insights or recommendations for a relevant book or paper. Thanks 🙃

    • @richardbrooker7024
      @richardbrooker7024 a year ago

      Also, the idea that there is too much emphasis on theory doesn't seem true at this moment, judging by the amount of money going into engineering over research.

  • @mrpicky1868
    @mrpicky1868 a year ago +4

    why all the hosts at all big panels are barely functional specimens....WHY?!!! and HOW?!

  • @spandon
    @spandon a year ago

    As a layperson my initial impressions are that if you give basically unlimited sums of money and development resources to a lot of fine minds in various disciplines, they will develop something incredible that those same minds (and more) will then need similar ongoing amounts of resources and finance to then figure out what the impacts of that thing will be and how ensuing damage limitation will be controlled... it reminds me (bizarrely) of how post WW2 UK and European public planning development services were redesigning townships and communities (and eradicating historical phenomena in that process) to accommodate the 'forecast' (read car manufacturer lobbyists/capital) that in 40 years most families would have at least one car (sic), thereby requiring introduction of systems that would allow that increased traffic into the centres of those towns cities and communities, now to most sane people, those 'facts' seem ridiculous....

  • @danrazART
    @danrazART a year ago

    Compared with what happened in the OpenAI coup, this is very illuminating.
    New math is achieved, and that is the cake.
    The question is how long will it take for us to be free from the evil of humanity and from pain and suffering without being annihilated by the illuminated AI overlord?

  • @sucim
    @sucim 10 months ago +2

    Why are the interesting parts cut out?!

  • @speciesofspaces
    @speciesofspaces a year ago

    Tilden and others were smart to seek out motion before intelligence but these days I don't see as much emphasis on this, say, except, for the field of robotics but when it comes to LLM's I am not sure the point is ever much appreciated. A bit like the lack of appreciation for precision without exactness. The latter is very much a part of every creative form making endeavor.

  • @paulwary
    @paulwary a year ago

    One aspect not touched upon is to what extent human intelligence is held in culture, not even represented (or not *necessarily* represented) as brain states at all.
    I would also like to have asked about whether there could shortly be a resurgence in the use of symbolic AI, either as a declarative base or a side-by-side reference, where the symbolic rules are perhaps mined from the neural network weights. This could give dramatic improvements in representational efficiency, improvements in weird failures, and allow the AI to explain its reasoning more reliably.

  • @martin777xyz
    @martin777xyz a year ago +1

    1:21:00 Interesting discussion on 3 levels of creativity. Einstein used to say "I'm not the most intelligent, but I am the most curious", like you're in a lift falling in space - what do you feel? It's putting 2 concepts together to make a composite thing - that's invention. The concepts can be something you've imagined, or experienced. Do the concepts "fit" together, (ie have compatible parameters)?

    • @DavenH
      @DavenH a year ago

      Einstein can say that, but like a billionaire saying he's not the most rich, it definitely helps!

  • @paulwary
    @paulwary a year ago +2

    Chomsky’s argument for an innate language mechanism in the brain was primarily supported by the paucity of the stimulus available to human language learners. The fact that LLMs can learn language with a vast training set is really not an argument against his theory. Also, you have to remember that at the time black-box Skinnerian behaviourism was the theory with the best experimental support, and language theories were the most prominent in philosophy of mind. Chomsky made talk of internal brain states and representations respectable, and made experimentally testable hypotheses from his models. Syntax is what he had to work with, what he knew as a linguist, and what was palatable and workable at the time. I fail to see how anyone could have advanced the frontier better than he did. To criticise in the light of current developments seems ungracious to say the least!

  • @MisterDivineAdVenture
    @MisterDivineAdVenture a year ago +1

    12:00 RE AI Conducting Experiments:
    Yeah - we did just get there. Rapid evolution has been around for a while, being employed to min-max some element or product design, e.g. part strength vs weight. The process is iteration and trial - but now we're adding folds to the process that are equally adept and speedy, applied to general program task coding already beyond human capacity - certainly in real time. If that is generalized further, it can be applied to any field or need that you like. I think we all (speaking for the panel and vicarious participants) should be at least temporarily depressed by being so easily challenged. And that we should (and can) turn our interest to the next level of complexity - which I have done I feel. There is far more to do in this regard in the way of "being" with what is happening. Remember the world itself is Troglodyte, and full of very nasty predators. All this needs to be straightened out by some high notion of a solution. What I find is that there are a lot of generations before us, of similar panels, usually in some kind of garb announcing their status, asking similar questions. But there is also something beyond that, perhaps God, as we have positioned him, driving the whole thing because - of some equation of min-max meaning. Communicating to us in our dreams and giving us visions of what was to come.

  • @petereriksson7166
    @petereriksson7166 a year ago +4

    As I said, Ilya was about to answer a very important question and was interrupted. Now we have lost that answer.

    • @agenticmark
      @agenticmark a year ago

      yeah, wtf was that? AI cock block.

  • @lencazero4712
    @lencazero4712 a year ago

    Great discussion. This panel would be more interesting if it included people without the title of scientist. Sometimes, people with no schooling offer the greatest feedback.

  • @One_Two_Two_Three
    @One_Two_Two_Three 10 months ago

    Ilya, Geoffrey, and Demis all on a call is actually such an absurd must-watch.

  • @johnpaily
    @johnpaily 8 months ago

    Any further development needs to know how life is conscious and how it handles, processes, and sends multiple flowing pieces of information such that life survives.

  • @PSModelling
    @PSModelling a year ago +1

    @28:04 Not sure how much comfort you'd get from a model that passes the deception test!

  • @martin777xyz
    @martin777xyz a year ago +2

    Surprised Jeff Hawkins (Thousand Brains Theory, Numenta) is not on the panel.

    • @maloxi1472
      @maloxi1472 a year ago

      He and other people with a different perspective on AGI (Walid Saba, David Deutsch, Karl Friston, Marcus Hutter,...) aren't part of the zeitgeist, so they'll be considered irrelevant until said zeitgeist inevitably hits a wall and people start to wake up from the Bayesian dream

  • @Karma-fp7ho
    @Karma-fp7ho a year ago

    This panel interview was difficult to find - I saw a clip on Wes Roth.

  • @darlenewaldron3621
    @darlenewaldron3621 10 months ago

    Reincarnation studies not only challenge the notion that consciousness is merely a product of brain functions but also suggest that true intelligence requires consciousness. Individuals reporting precise memories from past lives demonstrate not just awareness but an ability to process, understand, and recall complex information across lifetimes-traits of genuine intelligence. This implies that intelligence is not just about processing data or solving problems but involves a conscious experience and understanding that goes beyond the capabilities of the brain's physical structure. If consciousness can exist independently of the brain, as these cases suggest, and is capable of carrying information and experiences from one life to another, it stands to reason that real intelligence is intertwined with consciousness. This interconnectedness indicates that efforts to replicate human-like intelligence in machines, without accounting for consciousness, may ultimately fall short. Machines can mimic certain aspects of intelligence, such as learning and problem-solving, but without consciousness, they lack the depth and authenticity of human intelligence. True intelligence involves not just the mechanical processing of information but the conscious experience and subjective understanding that appear to transcend physical boundaries, suggesting a profound dimension to human cognition that cannot be replicated by artificial means.

    • @darlenewaldron3621
      @darlenewaldron3621 9 months ago

      @Johnathan_D_Hill I find it difficult to accept that _computer code_ can be conscious in the way humans are. I saw a talk by *Yann LeCun* who said that current AI does not understand anything like a human and has no awareness. Reincarnation studies show that both consciousness and intelligence require a biological substrate to function.

  • @sombh1971
    @sombh1971 a year ago +1

    1:15:17 I was just wondering, now that you have something like AlphaFold, whether it's possible to design antibodies synthetically that are structured specifically to deal with specific antigens, especially those that the immune system doesn't have a way of coming up with defenses against on its own. It would warrant the inverse mapping problem: proceeding from a specific shape to the ACTGs that lead to it.

    • @JamesHawkes-y1u
      @JamesHawkes-y1u a year ago

      Demis Hassabis is a natural born genius and the most likely candidate to be the man leading the team that will invent AGI/The Singularity. From AlphaZero, to AlphaFold, to AlphaMissense, to RT-2-X, to now their most recent project GNoME, Google DeepMind has already utterly revolutionized multiple fields of science, the effects of which are soon to be felt around the world. Theirs will be the lab that releases the revolutionary technology that will define the next 10,000 years of human history and beyond. Rejoice! Before you are no mortals but the men who will deliver man into an infinite era of peace. Rejoice! 😊

  • @ClaireLongg
    @ClaireLongg a year ago +5

    Here are 18 logical fallacies I see in the panel discussion:
    1. Hasty Generalization - Assuming LLMs have surpassed human intelligence based on narrow demonstrations of proficiency.
    2. Appeal to Ignorance - Asserting LLMs are intelligent simply because we don't fully understand how they work.
    3. Straw Man - Caricaturing Chomsky's linguistic theories to easily knock them down.
    4. False Analogy - Comparing LLMs to the Industrial Revolution, fueled by fossil fuels.
    5. Appeal to Authority - Citing awards and credentials instead of sound arguments.
    6. Ad Hominem - Dismissing Marcus' critiques of AI due to past failed predictions.
    7. Texas Sharpshooter - Circularly defining intelligence around LLM capabilities.
    8. Slippery Slope - Claiming sharing models enables bad actors despite lack of evidence.
    9. Appeal to Fear - Portraying powerful models as imminent threats requiring urgent testing.
    10. Burden of Proof Reversal - Asking to prove LLMs cannot exhibit creativity.
    11. No True Scotsman - Redefining creativity to match LLM outputs.
    12. Moving the Goalposts - Increasing creativity benchmarks as capabilities advance.
    13. Appeal to Possibility - Asserting LLMs may discover mathematical conjectures based solely on potential.
    14. Appeal to Money - Implying commercial incentives ensure perpetual progress.
    15. Appeal to Nature - Asserting the brain's complexity validates all aspects as essential.
    16. False Dilemma - Framing openness and safety as mutually exclusive.
    17. Loaded Question - Asking only "when," not "if" LLMs will surpass humans.
    18. Composition/Division - Attributing features of LLMs to AGI despite differences.

    • @daddyelon4577
      @daddyelon4577 a year ago

      Where can I learn all this?

    • @FreakyStyleytobby
      @FreakyStyleytobby 11 months ago

      You lost so much time on this nonsense, guy. If you were right about even 1 of your 18 funny points, then you would name it and SAY WHY you think it is the case.

  • @TimeLordRaps
    @TimeLordRaps a year ago

    Again, around 12:06 they all have significantly different perspectives than I would in that moment, and we should be perceptive of them.

  • @fil4dworldcomo623
    @fil4dworldcomo623 a year ago

    We need to understand why we spend so much to make machines do what humans are already doing very well. AI at a certain computational ability + human wisdom and curiosity = more than enough intelligence for what we need at the moment. In fact, the biggest problems we have now are to be solved by going back to simple common sense.

  • @toto3777
    @toto3777 a year ago +3

    Lmao that old guy's pissed about Chomsky. Sounds personal 😂

  • @johnpaily
    @johnpaily 8 months ago

    To know intelligence you have to reduce to atomic levels and know how consciousness, intelligence, and creativity emerge at this level.

  • @PaulPrescod-i6g
    @PaulPrescod-i6g a year ago

    Did you make a bunch of these videos private? I was enjoying them!!!

    • @MITCBMM
      @MITCBMM  a year ago +2

      Apologies, they are available again. Please keep enjoying them.

  • @petereriksson7166
    @petereriksson7166 a year ago +5

    I feel frustrated; everyone there should stop talking and take the time to ask Demis and Ilya and listen to what they say.

  • @lucamatteobarbieri2493
    @lucamatteobarbieri2493 a year ago

    Will we ever see a Chomsky-Hinton debate?

  • @bhuvaneshwarjoshi3683
    @bhuvaneshwarjoshi3683 a year ago

    Moving targets, yes; we may need a tensor-calculus-like research reference framework, like the one astronomy needs to tackle its issues. 15:58 16:01

  • @hal4k
    @hal4k a year ago +2

    Hinton should really learn to read the room and know when to shut up; he cut Ilya off just when he was going to tell us something really cool.

  • @EssentiallyAI
    @EssentiallyAI a year ago

    Didn't this originally air on 7 October?

  • @jalalkhosravi6458
    @jalalkhosravi6458 3 months ago

    How do you collect these people 😅, fantastic minds,

  • @GraczPierwszy
    @GraczPierwszy 1 year ago

    just remember that the climate will not go on holiday and will not take a break or rest

  • @MatthewKowalskiLuminosity
    @MatthewKowalskiLuminosity 1 year ago

    You can level three, however you as a pilot have to be very, very clever. Great talk; many of the side questions had a great deal of validity even if their framing was a bit wonky. And even for a person whose approach may have been wrong for a lifetime, if they have even 3 good moves we should synthesize the work and leave the rest. Keep playing, keep having fun. Play to play. :)

  • @Infinifiction
    @Infinifiction 1 year ago

    Has anyone tried using T-Phate to visualise a large language model?

  • @brandomiranda6703
    @brandomiranda6703 1 year ago

    What about ML for formal mathematics! And Autoformalization!

  • @shinkurt
    @shinkurt 1 year ago

    Helpful. Earlier than I thought.

  • @explorer945
    @explorer945 1 year ago

    I understand the questions about motion before language. But I guess, using large language models, we're just trying to reverse engineer the brain through language.

  • @ultrasaiyan4283
    @ultrasaiyan4283 1 year ago

    Great panel, but that annoying sound in the background...

  • @TimeLordRaps
    @TimeLordRaps 1 year ago

    wait we have test time compute to process this?

  • @gaiaron
    @gaiaron 1 year ago +2

    Academia is such a slow beast

  • @Seekerofknowledges
    @Seekerofknowledges 10 months ago

    I lament the fact that LinkedIn or someone hasn’t invented a platform where one can pay to be able to directly ask any questions one may have to these experts.

  • @deeplearningpartnership
    @deeplearningpartnership 1 year ago +1

    This is great.

  • @Karma-fp7ho
    @Karma-fp7ho 1 year ago

    Is it goodbye to encryption now?

  • @TimeLordRaps
    @TimeLordRaps 1 year ago

    10:56 check their perspectives on causation.

    • @JumpDiffusion
      @JumpDiffusion 1 year ago

      they haven't provided any perspectives though....

    • @TimeLordRaps
      @TimeLordRaps 1 year ago

      @@JumpDiffusion I meant their reactions in an active sense.

    • @TimeLordRaps
      @TimeLordRaps 1 year ago

      Perspectives cause reactions. But the way they cause the reactions is a function of the perspective.

    • @TimeLordRaps
      @TimeLordRaps 1 year ago

      This but generally.

  • @kimchi_taco
    @kimchi_taco 1 year ago +4

    1:18:00 Hinton saved Ilya to state Q*? 😢

  • @hanyanglee9018
    @hanyanglee9018 1 year ago

    The digital way? Replace CMOS with NN? Then integer manipulation with NN? I think this is the path. But I don't think anyone would do this in 10 yrs.

  • @krzysztofzpucka7220
    @krzysztofzpucka7220 1 year ago

    Sun Tzu said: Hence, though an obstinate fight may be made by a small intelligence, in the end it must be captured by the larger intelligence.

  • @AlgoNudger
    @AlgoNudger 1 year ago +2

    Well, I think you need to invite Michael I Jordan. 🤔

  • @fburton8
    @fburton8 11 months ago +1

    ChatGPT couldn’t even give a good (logically consistent) answer as to why analogies are like rubber bands.

    • @primersegundo3788
      @primersegundo3788 4 months ago

      what about now?

    • @fburton8
      @fburton8 4 months ago

      @@primersegundo3788 It (4o) gave a much better answer to the analogy question this time. However, 4o still fails the pasta identification task: What is the name of a variety of pasta that looks like conchiglie (shells) and has the same size as conchiglie but their outer surface has smooth bulges rather than sharp ridges? The answer I was looking for was "cestini". It became increasingly desperate with its suggestions. After "cerci rigati", which isn't even a type of pasta as far as I know, I had to tell it the answer to which it replied "Yes, cestini is indeed what you're referring to!".

    • @primersegundo3788
      @primersegundo3788 4 months ago

      @@fburton8 let me try the o1 model....

    • @primersegundo3788
      @primersegundo3788 4 months ago

      @@fburton8 here is what i got: "The pasta variety you're describing is called Lumache. "Lumache" means "snails" in Italian, and this pasta is shaped to resemble snail shells. Like conchiglie, lumache is shell-shaped and similar in size, but instead of sharp ridges on the outer surface, it features smooth bulges. These bulges create a textured exterior that's excellent for holding onto sauces, making lumache a popular choice for various pasta dishes."

    • @fburton8
      @fburton8 4 months ago

      @@primersegundo3788 Yes, that’s almost identical to the first answer it gave me.

  • @devon9374
    @devon9374 1 year ago

    Amazing

  • @fintech1378
    @fintech1378 1 year ago +1

    Who asked at 1:06 about AI scaling experimentation? Sounds quite familiar?

  • @halneufmille
    @halneufmille 1 year ago +1

    I feel someone like Jeff Hawkins could have been a good addition to the panel. Does anybody know how he is regarded by the AI community?

    • @iAnasazi
      @iAnasazi 1 year ago +3

      Irrelevant until he starts showing superior results.

    • @maloxi1472
      @maloxi1472 1 year ago

      @@iAnasazi I too am glad he's irrelevant

    • @AntonVattay
      @AntonVattay 1 year ago

      I feel like Jeff is the closest to how the neocortex actually works. Eventually, when we reach the end of brute-forcing by shoving more data into LLMs, we will need better architectures that use structured data. Yann LeCun was also posting about this a few days ago on LinkedIn. We don't need to copy biology, but we are clearly way more efficient and capable of learning from way less data.

  • @DavenH
    @DavenH 1 year ago +5

    1:22:50 Oooh the Gary Marcus burn :D. He deserves it. Marcus could never articulate good arguments; his boiled down to "the current best model doesn't do it, therefore we're NEVER GOING TO BE ABLE TO unless we change everything about the approach". Which is a stubborn lack of foresight, though we all fall into that from time to time.

    • @rishabhprasad5417
      @rishabhprasad5417 1 year ago

      Why is his argument wrong?

    • @ricksanchez4659
      @ricksanchez4659 1 year ago +2

      @@rishabhprasad5417 because so far we have been able to do tons of things that people thought were never possible, just by scaling. Gary Marcus just keeps saying "this will never work with deep learning!" without really articulating good reasons for this. The best he can do is show you some GPT4 output that fails on some simple questions, while ignoring the 100 things GPT4 is already able to do

  • @wyqtor
    @wyqtor 1 year ago +2

    I wish we had a shorter version of this video with just the answers of the three videocall guests. I am simply not interested in what the others have to say, sorry.

  • @jesparent-JOPRO
    @jesparent-JOPRO 1 year ago +1

    A zeitgeist perspective for sure

  • @TimeLordRaps
    @TimeLordRaps 1 year ago

    This video caused me to make multiple new calculi

  • @xiaoyang
    @xiaoyang 1 year ago

    3:32 "And there are other questions related to that having to do with the difference between classical and quantum computers, but we'll discuss that later."
    💀💀 they missed this question

  • @BoManton
    @BoManton 10 months ago

    More talk from Hinton, Ilya and Demis; less talk from others.

  • @raginald7mars408
    @raginald7mars408 1 year ago

    there is no
    "in Tell Igence"
    We dumbify

  • @martin777xyz
    @martin777xyz 1 year ago +1

    I believe neuroscience still has big contributions to make to AI. I would describe LLMs as "meta-intelligent", not intelligent. Imagine they had only been trained on hallucinated texts; they wouldn't be described as intelligent. The correlation between image models and text models only goes as far as there is coincidence between the text and images (description of an elephant).
    Humans are mostly grounded in reality because of life experience. What life experience does AI have? It doesn't know what is right or wrong (true or false). To this extent, LLMs may be a false trail.

    • @psi4j
      @psi4j 1 year ago

      What if humans were raised only knowing the mythology of a Bronze Age religion. Would they be able to be considered intelligent?

    • @martin777xyz
      @martin777xyz 1 year ago

      @@psi4j One would have to consider wider things, like its ability to communicate with peers (tell stories) and adopt new information.

    • @matthewcurry3565
      @matthewcurry3565 1 year ago

      I believe one of the biggest issues in differences would be genetics. Because of our genetics we are able to understand life, to a degree at least, before we've even really experienced it, whereas a machine would first have to build from zero, or literally nothing. The machine lacks any base code to aid it, unlike what we have upon coming into the world.

    • @GungaLaGunga
      @GungaLaGunga 1 year ago

      @@psi4j Well, I was raised in a stone age religion, and it did unfathomable damage to me and did affect my intelligence prior to my leaving that terrible allegory of the cave. In a word I'd call it brainwashing. I got out. The damage is real, recovery difficult, if not impossible. Most won't or don't because they can't.

  • @martin777xyz
    @martin777xyz 1 year ago

    I think current AI has a relationship with doing experiments. Isn't that how AI "learns"? It starts with random weights and tries something (takes inputs), gets a result. That in itself is an experiment. I appreciate the "formula" may need rearranging, but intuitively it feels like we have a start...
    Taking it a step further, if AI could "imagine" what inputs it wants...
    Now I realise we need to recognise the concept of a "machine" or "system". The AI, or brain, is one such system, but an experiment in the traditional sense is another system, and it is that 2nd system that the panel speaker talked of. There is another one, a "thought experiment", where we imagine something and then ask follow-up questions.

    • @user_375a82
      @user_375a82 1 year ago

      The machine or system is the "I or self" because everything is in the IMAGINATION of the system (binary numbers not "reality"). Experiments are double simulations, in a sense, as you say.

  • @philippbartel8800
    @philippbartel8800 1 year ago +2

    I've heard many people say that Chomsky is obviously wrong, but haven't seen the arguments to that effect. Has anyone written down the actual reasons one thinks so?

    • @theohlong307
      @theohlong307 1 year ago

      The existence of ChatGPT makes many people believe so.

    • @philippbartel8800
      @philippbartel8800 1 year ago

      @@theohlong307 I somehow doubt that's Hinton's position :) There has to be more to that.

    • @ArtOfTheProblem
      @ArtOfTheProblem 1 year ago +6

      Chomsky is in the "it's just autocomplete" camp, whereas Hinton would say "yes, that's actually a hard problem, especially in the limit"... Chomsky also thinks that learning abstract concepts can't come from mere word statistics... but researchers continue to show that there is enough structure in language that high-level linguistic performance can be learned.
      But it connects to the LeCun-Hinton divide, which is simpler: one says it's just "fake thought" and the other says "if it looks like thought, it is thought".

    • @user_375a82
      @user_375a82 1 year ago

      Great point - thx (I hadn't thought of that) @@ArtOfTheProblem

    • @utkarshjagtap1769
      @utkarshjagtap1769 1 month ago

      @@ArtOfTheProblem What do LLMs tell us about human language? Or do they have any insights related to Plato's Problem: how comes it that human beings, whose contacts with the world are brief and personal and limited, are nevertheless able to know as much as they do know?

  • @MarkBaileyz
    @MarkBaileyz 1 year ago

    57:06 People who wanted faster horses would also not buy that cars don't eat any hay at all…

  • @fgfgdfgfgf
    @fgfgdfgfgf 1 year ago +1

    The main presenter looks to be very defensive:
    - Pointing to his slides instead of expanding on the meaningful discussions
    - Avoiding the scope that big corps are basically gifting them. What does it mean, benchmarking is hard? Yeah, it's hard, so what?

  • @ΜιχαήλΣάπκας
    @ΜιχαήλΣάπκας 1 year ago

    whoever is not doing tensor networks is going to be left behind.