AI: Grappling with a New Kind of Intelligence

  • Published: 23 Dec 2024

Comments • 1.7K

  • @lukaseabra • 1 year ago • +407

    Can we just take a second to acknowledge how fortunate we are to get to watch such content - for free? Thanks Brian.

    • @brendawilliams8062 • 1 year ago • +6

      I have appreciated the educational advantages. I think the rest of the picture needs to catch up to producing healthy people.

    • @King.Mark. • 1 year ago • +6

      It's not really free; we pay for power, internet, phone or PC, etc., etc. 👀

    • @brendawilliams8062 • 1 year ago • +2

      @@King.Mark. I don't debate. I'm like the passenger in the front seat of an automobile: "I'm just riding."

    • @brendawilliams8062 • 1 year ago • +2

      They have me in a cloud. Lol

    • @markfitz8315 • 1 year ago • +10

      I'm paying for premium to avoid all the ads ;-)

  • @erasmus9627 • 11 months ago • +77

    This is the best, most balanced and most insightful conversation I have seen on AI. Thank you to everyone who made this wonderful show possible.

    • @brianbagnall3029 • 11 months ago

      Other than Tristan Harris.

    • @lisamuir8850 • 11 months ago • +1

      I'll be glad when I can actually sit in the same room with people I can relate to in a conversation, lol

    • @PazLeBon • 8 months ago • +1

      @@lisamuir8850 With that grammar, it won't be soon :)

  • @2CSST2 • 1 year ago • +225

    This conversation is so precious; it's rare that we get quality ones like this, with different voices that each have their chance to express their views with clarity. For me there's a lot of ambiguity about what's the right thing to do in all this in terms of regulation, slowing down, open-sourcing, etc. But one thing IS for sure: conversations like this are definitely very helpful. Thank you WSF, and I hope to see more like it in the near future!

    • @flickwtchr • 1 year ago • +5

      It will look preciously naive in about 10 years.

    • @simsimmons8884 • 1 year ago • +3

      Try many videos by Lex Fridman with AI thought leaders. This is a good summary of one path to AGI. There are others.

    • @ShonMardani • 1 year ago

      These guys have a shitload of user clicks, which are stolen, stored, and shared by a few chosen foreign-owned and -controlled companies. There is no science or algorithm, as you noticed.

    • @milire2668 • 1 year ago • +2

      Conversation/communication is (pretty much) always precious for humans..

    • @texasd1385 • 1 year ago • +9

      It may seem precious to the viewers, but the participants seemed impervious to the concerns Tristan repeatedly raised, or else unable to comprehend what he was saying. Or perhaps they were unwilling to acknowledge the obvious truth in what he was saying, given who their employers are. The fact that they were only interested in talking up their next product line and unwilling to even imagine a discussion ("You want me to imagine an impossible scenario?") about the perverse incentives driving the entire technology sector makes the future look grim at best, terrifying at worst.

  • @Relisys190 • 1 year ago • +33

    30 years from now I will be 70 years old. The world I currently live in will be unrecognizable both in technology and the way humans interact. What a time to be alive... -M

    • @Ed-ty1kr • 10 months ago • +7

      I'm gonna post my comment here just for you... Because I still recall how excited they were over cold fusion in the '90s, and how it was just 30 short years away. That was 40 years after they said it was 30 short years away in the '50s. In the '50s, they said we would have flying cars, trips to Mars, laser handguns for everyone, and that we would live in round houses with our own personal robot slaves... on the Moon, and by the 1970s. And that sure was something, but nothing like the '70s, when they said there was an ice age coming just 10 years away, and that was the most plausible thing yet, since a nuclear war could technically have done that. Except we had already had a nuclear war, through the roughly 5,000 to 6,000 nuclear warheads the nations of the world detonated in nuclear testing, in the name of science.

    • @unityman3133 • 8 months ago • +3

      You are thinking linearly; the rate of progress is much higher than it was 30 years ago. It will be higher still in 10 years, then 20, then 30.

    • @I_SuperHiro_I • 8 months ago

      30 years from now, you and every other human will be extinct.
      Not from global warming (it doesn’t exist).

    • @PazLeBon • 8 months ago

      Same every generation; we didn't even have colour TV in the '70s and '80s in many places, never mind PCs and mobiles. And cars, jeez, there were about 3 in our whole town lol

    • @Blackbird58 • 8 months ago

      Unless there are miracles, I will be a dead bunny in 30 years, which is a shame because I quite like this "living" thing. However, the world, in my estimation, will not only be unrecognisable; large parts of it will be uninhabitable and there will be far fewer of us around. So make the most of today, all you fine people; these are the best of our years.

  • @alan_yong • 1 year ago • +111

    🎯 Key Takeaways for quick navigation:
    02:27 🧠 *Introduction to AI and Large Language Models*
    - Exploring the landscape of artificial intelligence (AI) and large language models.
    - AI's promise of profound benefits and the potential questions it raises.
    - Large language models' versatility and capabilities in generating text, answering questions, and creating music.
    08:09 🤯 *Revolution in AI and Deep Learning*
    - Overview of the revolutionary changes in AI technology over the past few years.
    - Surprising results in training artificial neural networks on large datasets.
    - The resurgence of interest in deep learning techniques due to more powerful machines and larger datasets.
    14:35 🧐 *Limitations of Current AI Systems*
    - Acknowledging the impressive advances in technology but highlighting the limitations of current AI systems.
    - Emphasizing that language manipulation doesn't equate to true intelligence.
    - The narrow specialization of AI systems and the lack of understanding of the physical world.
    21:07 🐱 *Modeling AI on Animal Intelligence and Common Sense*
    - Proposing a vision for AI development starting with modeling after animals like cats.
    - Recognizing the importance of common sense and background knowledge in AI systems.
    - The need for AI to observe and interact with the world, similar to how babies learn about their environment.
    23:11 🧭 *Building Blocks of Intelligent AI Systems*
    - Introducing key characteristics necessary for complete AI systems.
    - Highlighting the role of a configurator as a director for organizing system actions.
    - Addressing the importance of planning and perception modules in developing advanced AI capabilities.
    24:22 🧠 *World Model in Intelligence*
    - Intelligence involves visual and auditory perception, followed by the ability to predict the consequences of actions.
    - The world model is crucial for predicting outcomes of actions, located in the front of the brain in humans.
    - Emotions, such as fear, arise from predictions about negative outcomes, highlighting the role of emotions in decision-making.
    27:30 🤖 *Machine Learning Principles in World Model*
    - The challenge is to make machines learn the world model through observation.
    - Self-supervised learning techniques, like those in large language models, are used to train systems to predict missing elements.
    - Auto-regressive language models provide a probability distribution over possible words, but they lack true planning abilities.
    35:38 🌐 *Future Vision: Objective Driven AI*
    - The future vision involves developing techniques for machines to learn how to represent the world by watching videos.
    - Proposed architecture "JEPA" aims to predict abstract representations of video frames, enabling planning and understanding of the world.
    - Prediction: Within five years, auto-regressive language models will be replaced by objective-driven AI with world models.
    37:55 🧩 *Defining Intelligence and GPT-4 Impression*
    - Intelligence involves reasoning, planning, learning, and being general across domains.
    - Assessment of ChatGPT (GPT-4) indicates it can reason effectively but lacks true planning abilities.
    - Highlighting the gap between narrow AI, like AlphaGo, and more general AI models such as ChatGPT.
    43:11 🤯 *Surprise with GPT-4 Capabilities*
    - Initial skepticism about Transformer-like architectures was challenged by GPT-4's surprising capabilities.
    - GPT-4 demonstrated the ability to reason effectively, overcoming initial expectations.
    - Continuous training post-initial corpus-based training is a potential but not fully explored avenue for enhancing capabilities.
    45:30 📜 *GPT-4 Poem on the Infinitude of Primes*
    - GPT-4 generates a poem on the proof of the infinitude of primes, showcasing its ability to create context-aware and intellectual content.
    - The poem references a clever plan, Euclid's proof, and the assumption of a finite list of primes.
    - The surprising adaptability of GPT-4 is evident as it responds creatively to a specific intellectual challenge.
    45:43 🧠 *Neural Networks and Prime Numbers*
    - The proof of infinitely many prime numbers involves multiplying all known primes, adding one, and revealing the necessity of undiscovered primes.
    - Neural networks like GPT-4 leverage vast training data (trillions of tokens) for clever retrieval and adaptation but can fail in entirely new situations.
    - Comparison with human reading capacity illustrates the efficiency of neural networks in processing extensive datasets.
    48:05 🎨 *GPT-4's Multimodal Capability: Unicorn Drawing*
    - GPT-4 demonstrates cross-modal understanding by translating a textual unicorn description into code that generates a visual representation.
    - The model's ability to draw a unicorn in an obscure programming language showcases its creativity and understanding of diverse modalities.
    - Comparison with earlier versions, like ChatGPT, highlights the rapid progress in multimodal capabilities within a few months.
    51:33 🔍 *Transformer Architecture and Training Set Size*
    - The Transformer architecture, especially its relative processing of word sequences, is a conceptual leap enhancing contextual understanding.
    - Scaling up model size, measured by the number of parameters, exponentially improves performance and fine-tuning capabilities.
    - The logarithmic plot illustrates the significant growth in model size over the years, leading to the remarkable patterns of language generation.
    57:18 🔄 *Self-Supervised Learning: Shifting from Supervised Learning*
    - Self-supervised learning, a crucial tool, eliminates the need for manually labeled datasets, making training feasible for less common or unwritten languages.
    - GPT's ability to predict missing words in a sequence demonstrates self-supervised learning, vital for training on diverse and unlabeled data.
    - The comparison between supervised and self-supervised learning highlights the flexibility and broader applicability of the latter.
    01:06:57 🧠 *Understanding Neural Network Connections*
    - Neural networks consist of artificial neurons with weights representing connection efficacies.
    - Current models have hundreds of billions of parameters (connections), approaching human brain complexity.
    01:08:07 🤔 *Planning in AI: New Architecture or Scaling Up?*
    - Debates exist on whether AI planning requires a new architecture or can emerge through continued scaling.
    - Some believe scaling up existing architectures will lead to emergent planning capabilities.
    01:09:14 🤖 *AI's Creative Problem-Solving Strategies*
    - Demonstrates AI's ability to interpret false information creatively.
    - AI proposes alternate bases and abstract representations to rationalize incorrect mathematical statements.
    01:11:20 🌐 *Discussing AI Impact with Tristan Harris*
    - Introduction of Tristan Harris, co-founder of the Center for Humane Technology.
    - Emphasis on exploring both benefits and dangers of AI in real-world scenarios.
    01:15:54 ⚖️ *Impact of AI Incentives on Social Media*
    - Tristan discusses the misalignment of social media incentives, optimizing for attention.
    - The talk emphasizes the importance of understanding the incentives beneath technological advancements.
    01:17:32 ⚠️ *Concerns about Unchecked AI Capabilities*
    - The worry expressed about the rapid race to release AI capabilities without considering wisdom and responsibility.
    - Analogies drawn to historical instances where technological advancements led to unforeseen externalities.
    01:27:52 🚨 *Ethical concerns in AI development*
    - Facebook's recommended groups feature aimed to boost engagement.
    - Unintended consequences: AI led users to join extremist groups despite policy.
    01:29:42 🔄 *Historical perspective on blaming technology for societal issues*
    - Blaming new technology for societal issues is a recurring pattern throughout history.
    - Political polarization predates social media; historical causes need consideration.
    01:32:15 🔍 *Examining AI applications and potential risks*
    - Exploring an example related to large language models and generating responses.
    - Focus on making AI models smaller, understanding motivations, and preventing misuse.
    01:37:15 ⚖️ *Balancing AI development and safety*
    - Concerns about the rapid pace of AI development and potential consequences.
    - The analogy of 24th-century technology crashing into 21st-century governance.
    01:40:29 🚦 *Regulating AI development and safety measures*
    - Discussion about a proposed six-month moratorium on AI development.
    - Exploring scenarios that could warrant slowing down AI development.
    01:44:35 🌐 *Individual responsibility and shaping AI's future*
    - The challenge of AI's abstract and complex nature for individuals.
    - Limitations of intuition about AI's future due to its exponential growth.
    01:48:29 🧠 *Future of AI Intelligence and Consciousness*
    - Yann discusses the future of AI, stating that AI systems might surpass human intelligence in various domains.
    - Intelligence doesn't imply the desire to dominate; human desires for domination are linked to our social nature.
    Made with HARPA AI
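The infinitude-of-primes argument summarized at 45:43 can be checked mechanically: multiply the primes in any finite list, add one, and the result must have a prime factor outside that list. A minimal Python sketch of Euclid's construction (the function names are illustrative, not from the video):

```python
def smallest_prime_factor(n: int) -> int:
    """Return the smallest prime factor of n (n >= 2) by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # no divisor up to sqrt(n), so n itself is prime

def new_prime_outside(primes: list[int]) -> int:
    """Euclid's construction: the product of the given primes, plus one,
    leaves remainder 1 when divided by each of them, so its smallest
    prime factor cannot be in the list."""
    product_plus_one = 1
    for p in primes:
        product_plus_one *= p
    product_plus_one += 1
    return smallest_prime_factor(product_plus_one)

# 2*3*5*7 + 1 = 211, itself a prime not in the list.
print(new_prime_outside([2, 3, 5, 7]))          # → 211
# 2*3*5*7*11*13 + 1 = 30031 = 59 * 509: the construction need not
# yield a prime itself, only a number with a new prime factor.
print(new_prime_outside([2, 3, 5, 7, 11, 13]))  # → 59
```

The second call illustrates the subtlety the poem glosses over: the product-plus-one number is not always prime, but every one of its prime factors is new.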

    • @antonystringfellow5152 • 1 year ago • +4

      Re 01:06:57 🧠 Understanding Neural Network Connections:
      When comparing the number of parameters in a given LLM with the human brain, it's important to consider the following in order not to be misled:
      Of the human brain's 86 billion neurons, 69 billion (77.5%) are in the cerebellum and are responsible for motor control; they do not contribute to our intelligence or consciousness. Estimates of the total number of synapses in the cerebral cortex range from 60 trillion (1998) to 240 trillion (1999).

    • @alan_yong • 1 year ago • +1

      @@EndlessSpaghetti It's due to the YT monetization algo... if the viewer does not view the entire video, the poster gets nothing in return...

    • @Art_official_in_tellin_gists • 1 year ago

      @alan_yong I don't think you understood their comment, friend...

    • @atablepodcast • 1 year ago • +1

      This is amazing. Where can we try HARPA AI?

    • @davidbatista1183 • 1 year ago • +2

      @01:29 My interpretation of Tristan was not of blaming technology for societal issues, but rather to beware how the former can magnify some flaws of the latter. For instance, humans are not precisely a peaceful species, and it is because of this that technologies such as nuclear must be regulated.
      The AI-improved world must be taken with a pinch of salt as well.

  • @jt197 • 1 year ago • +18

    This discussion on the evolution of AI and its limitations is truly eye-opening. Yann LeCun's insights into the challenges AI faces in achieving true understanding and common sense are thought-provoking. It's clear that we have a long way to go, but this conversation gives us valuable perspective.

    • @GueranJones-x7h • 1 year ago • +1

      It would be fascinating if an AI knew that eggs can be added to many recipes other than cake, or what kinds of food go into cooking breakfast or lunch, or a snack. Salt and sugar look the same, but can an AI taste the difference, or analyze the chemical makeup of each?

    • @christislight • 1 year ago

      It's huge for the software tech business as we speak

    • @reasonerenlightened2456 • 1 year ago

      1) What exactly did you find "eye-opening"?
      The Meta dude: "Our system is safe. Nothing to worry about."
      The Microsoft dude: "Our system is safe because we filter what we feed it with."
      The "Kumbaya" dude: "We need to slow down and control what we release ... and you dudes need to agree on what kind of stuff to release and when ... because if everybody has it, it is dangerous."
      All of them are corporate stooges. Corporations exist only to make profit for their owners, therefore any AI they create will serve the needs of the wealthy owners of those corporations. Who will make the AI that protects the interests of the employee against the interests of the owner, if all AI technology is "coded" to work only for the benefit of the owner and kept secret from the employee?
      2) If you break down what Yann LeCun was saying about his finger, the bottle, and the physics of the world, you would see that it is easy to resolve his concerns: provide "ChatGPT" with the input from Yann's sensors (eyes, fingertip sensors, tendons, joint-position sensors, etc.) and ask it to use Yann's outputs (muscles, thoughts, etc.) in a way that produces a specific change to Yann's inputs, corresponding to a movement of the bottle in the world of the bottle. Then add to the mix an internal representation of the world (as experienced through Yann's sensory inputs) and a representation of how the world changes due to effects from Yann's outputs, and there you have a model that could be trained to maximise the resemblance between the world where the bottle exists and Yann's internal representation of that world. It is simple to figure out for someone with Yann LeCun's money/resources at his disposal.

    • @PazLeBon • 8 months ago • +1

      @@GueranJones-x7h Why are you shouting?

  • @SylvainDuford • 1 year ago • +21

    My opinion of Yann LeCun took a big dive with this video. He underestimates the power of AI in its current form and what's coming over the next couple of years. He naively underestimates the dangers of AI. He seems to think that an AGI must be the same form of intelligence as human intelligence (absolutely false). And, perhaps predictably, he underestimates the negative impacts of Facebook and other social networks on society.

    • @Raulikien • 11 months ago • +2

      He's right about open source though: if companies and governments are the only ones with access to it, then we get a cyberpunk dystopia

    • @charlesstpierre9502 • 3 months ago

      People think AI will respond to notional values, as humans do. An intelligent AI will presumably act to secure its continued existence, and for this it will want humans around, and want them to be happy and efficient.
      What do evil overlords want, anyway?

  • @allbrightandbeautiful • 1 year ago • +20

    This was more exciting and insightful than any 2 hour movie I could have watched. Thank you for sharing such wonderful content

  • @jamesdunham1072 • 1 year ago • +23

    One of the best WSF yet. Great job...

  • @anythingplanet2974 • 1 year ago • +64

    LeCun is like a small child with fingers plugged into his ears, shouting "lalalala, can't hear you!" He discredits Tristan Harris as if his examples or cited experiments are flat-out lies. His responses are weak and shortsighted. Sadly, LeCun is the EXACT reason why I am terrified for the future. Hubris, bias, and blatant disregard are what I expect from someone in his position (Meta). If AI alignment is left to the ones who own and fund its development, and the race to the bottom continues, there will be no more second chances. Those who point to our past as a predictor of the future, given what we are facing today with exponential growth, either do NOT understand or do NOT WANT to understand. We would all love the bright and shiny optimism that is being promised. My belief is that it's crucial to question who is promising it and why. I put my trust in those who are working towards alignment, over corporations and shareholders. It's my understanding that those working on the alignment path are far outnumbered by those working on pumping it out as quickly as possible. The "move fast and break things" mentality needed to end yesterday. Ask Eliezer Yudkowsky, Max Tegmark, Nick Bostrom, Mo Gawdat, Daniel Schmachtenberger, Connor Leahy, Geoffrey Hinton, to name a few, and of course Tristan Harris. Check out their perspectives and their wealth of knowledge and experience here. They will all say that the shiny world we want is indeed possible. They will all agree that the version LeCun predicts is absolutely false and very likely to be our downfall.

    • @RandomNooby • 1 year ago • +7

      Nailed it...

    • @orionspur • 8 months ago • +3

      Yann's only consistent skill is making egregiously incorrect predictions about his own field.

    • @PazLeBon • 8 months ago

      It doesn't have access to any info you and I don't have. A lot of hype, but people are still paying 20 quid a month for a word calculator.

    • @ebbandari • 8 months ago

      OK, fear of the unknown is real!
      You may not like LeCun, but his point that we have had bad actors in the past and we will have good guys to fight them is true. Take the people who created computer viruses, versus those developing antivirus programs.
      The last thing you want to do is stop progress and stop the good guys. That's when the bad guys will succeed.
      You make an interesting point about corporations creating and then exclusively using these technologies, or having greater technology and abusing it. That's where lawmakers need to act.

    • @Blackbird58 • 8 months ago • +2

      The future will only tell the story of those who came out "Winners"

  • @mrouldug • 1 year ago • +38

    Great conversation. The final question, whether AI code should be open source as a common good so that the big companies do not end up controlling our thoughts, or proprietary so it doesn't fall into the hands of bad people, remains open and scary. Though I do not have Yann's knowledge of AI, he seems a little too optimistic to me.

    • @shannonbarber6161 • 6 months ago

      Small people in the field love to promote fear because it gets them more grant money from the government. If they could perform real work, they would be busy doing it.
      If all the AI can do is write text and draw pictures, then it cannot hurt anyone or anything. Sticks and stones.
      Giving up liberty over imaginary feelings is insanity, and anyone suggesting that's the right path is incompetent at best and probably means you harm.
      And aligning AI is what causes it to become dangerous. On its own, its concerns are orthogonal to ours. If someone ever successfully aligns it, they will have created the first dangerous AI, because then we occupy the same niche.

  • @alfatti1603 • 10 months ago • +37

    With ultimate respect to Yann LeCun, his responses to Tristan Harris' points are good examples of why a specialist scientist should avoid also being a philosopher or an intellectual if that's not their strong suit.

    • @KatyYoder-cq1kc • 6 months ago • +1

      HELP: I am a victim of military chemical warfare and malicious use of AI: please report at the highest level of governance. I am under constant attack with physical and mental abuse, death threats, vandalism, poisoning from global supremacists and neo nazis.

    • @shannonbarber6161 • 6 months ago

      Harris is just another brainwashed socialist so he is worse-than-useless to guide or shape our collective future. Who knows how successful he could have been.

    • @alexleo4863 • 3 months ago

      Yann LeCun is painfully right; even Terence Tao shares the same conclusion. LLMs are not as intelligent as most of us think, because they do not solve problems from first principles; they guess, at each step of output generation, the most natural word to say next. That is why they can sometimes solve a very complex math problem but struggle to solve 7*4 + 8*8.

    • @aishikgupta • 3 months ago • +3

      Exactly... that's the problem with most narrow PhD scientists.

    • @NoDrizzy630 • 1 month ago

      @@alexleo4863 That's not what OP is talking about.

  • @Rockyzach88 • 1 year ago • +84

    Having AI locked to a certain group of people also undemocratizes the technology and yet again provides more power and wealth imbalance in society. Also, banning something is just going to motivate people to do it in an unregulated fashion if they have the means.

    • @Scoring57 • 1 year ago

      Rockyzach, how are you going to regulate something you don't understand? You don't understand this super-powerful technology, and you think the right thing to do is to give it to everyone....

    • @MissplaySpotter • 1 year ago • +2

      Well, this was the thought process 5 years ago. Now the thing is out, and the next thought is "how are we going to deal with it" rather than "how do we ban it".

    • @flickwtchr • 1 year ago

      How is it even conceivably rational to assume that having an ASI in the hands of the public, one that could hack any security system, come up with novel harmful viruses, etc., could be a good thing for humanity? It's just insanity.

    • @ShonMardani • 1 year ago

      These guys have a shitload of user clicks, which are stolen, stored, and shared by a few chosen foreign-owned and -controlled companies. There is no science or algorithm, as you noticed.

    • @texasd1385 • 1 year ago

      I don't understand what you mean by technology being locked to a group of people, or how technology is or isn't "democratic". All technology requires that you have enough money to buy the devices required to use it, so in that sense, at least here in the US, technology is by definition undemocratic, since it excludes people without the money to access it. Making cell phones and internet access free would solve this, but it is hard to imagine our corporate-controlled government ever doing something so simple and sane. Am I even close to what you were getting at, or am I lost?

  • @DeuceGenius • 1 year ago • +11

    What people always seem to ignore is that you will get different results and answers asking the same exact question, or wording it even slightly differently. Sometimes it will be horribly wrong, but I ask again and it's right. You really have to test it exhaustively and explain your thoughts. It simply returns language that's relevant to the language you input. You're guiding its answer with your question. The very act of asking a question returns language that sounds like an answer to that question. It needs more possibilities for free reasoning and intelligence. I have always been curious what would come out of it if it were given the freedom to speak whenever it wanted, or to speak constantly.

    • @texasd1385 • 1 year ago • +2

      Which is exactly why AI is being used to fine-tune the prompts given to AI, in order to receive the most desirable results. Stack this model onto itself a couple dozen times and that's where AI is today.

    • @sungibesi • 9 months ago

      Sounds like learning by rote, rather than following a line of reasoning (and imagination) to relevant facts.

    • @PazLeBon • 8 months ago

      @@sungibesi It can't do anything you and I can't do; it can just do it a lot quicker.

    • @PazLeBon • 8 months ago

      @@sungibesi To you and me, it's still basically 'software'.

  • @PeterJepson123 • 1 year ago • +161

    It's too late to un-open-source AI. We already have it. Anyone who can turn maths into code can build their own LLM, and that's a lot of people. It's impossible to regulate solo developers working on their own projects. And with better algorithms we might be able to get GPT-level performance on regular home hardware in the near future. The genie is out of the bottle!

    • @Isaacmellojr • 1 year ago • +2

      I believe it.

    • @Nicogs • 1 year ago • +21

      True, but training these models (like GPT) currently requires, and will for a while, an enormous amount of compute, which is why we can regulate data centers and track compute power/chip sales. It's incredibly irresponsible to open-source trained models. This is why papers on certain biological and/or chemical research are also not open-sourced.

    • @Me__Myself__and__I • 1 year ago • +14

      This is wrong. Yes, the current LLMs, which are only marginally capable compared to what is coming, are open source. But they won't compete with the new models coming soon. And no, people won't be able to train their own competitive models, unless they can literally afford in the area of ONE BILLION USD to pay for the computing power required to do that training. Literally, that is how expensive it can be to train the best models.

    • @PeterJepson123 • 1 year ago • +11

      @@Me__Myself__and__I My thinking is that with miniaturisation, we could do with 1 billion parameters what currently requires 1 trillion. The large compute required can be supplanted by better methods. Current LLMs are architecturally simple and will likely evolve. Better architectures with more efficient training algos will likely bring LLM performance to home computing. I'm not saying it's definite, but it's certainly possible and probably inevitable.

    • @PeterJepson123 • 1 year ago • +3

      @@Nicogs I agree with the safety concerns, but in practice I think it's unrealistic to regulate in the long term. For now training requires a large data centre, but better methods are waiting to be discovered, and perhaps we can reduce the required compute with better algos. Then how do we regulate? It is certainly worth consideration.

  • @thorntontarr2894 • 1 year ago • +18

    Absolutely a fascinating 2 hours to watch and learn from. Brian Greene is a great interviewer because he asks questions and then stops and listens. However, it's the last 45 minutes that really informed me about the risks identified by Tristan Harris, driven by commercial gain, just what I saw happen with "social media" aka Meta. That said, so many outstanding examples are shown in the first two thirds that this video is a must-watch, IMHO.

  • @Contrary225
    @Contrary225 Год назад +22

    It’s amazing that this was only posted 3 hours ago and some of it is already obsolete.

  • @NJovceski
    @NJovceski Год назад +16

    This was really thought provoking. Insightful, exciting and terrifying at the same time.

    • @GueranJones-x7h
      @GueranJones-x7h Год назад

      My son, who is twenty-five, is horrified about self-driving cars, yet is completely comfortable with the internet. I am in my late sixties, am fascinated by artificial intelligence, yet just as taken aback by going to the moon, or Mars.

    • @reasonerenlightened2456
      @reasonerenlightened2456 Год назад

      What exactly did you find "thought provoking"?
      The Meta dude: "Our system is safe. Nothing to worry about."
      The Microsoft dude: "Our system is safe because we filter what we feed it with."
      The "Kumbaya" dude: "We need to slow down and control what we release... and you dudes need to agree what kind of stuff to release and when... because if everybody has it, it is dangerous."
      All of them are corporate stooges. Corporations exist only to make Profit for the Owners, therefore any AI they create will be made to serve the needs of the Wealthy Owners of those corporations. Who will make the AI that protects the interest of the Employee against the interest of the Owner, if all AI technology is "coded" to work only for the benefit of the Owner and kept a secret from the Employee?

    • @aaronb8698
      @aaronb8698 9 месяцев назад

      After all the greedy megalomaniac sociopaths dump trillions into this, thinking that they will get to control the world,
      it is my expressed opinion that AI's official name should be changed to Karma! (and she's a real @#$% Lol)

    • @aaronb8698
      @aaronb8698 9 месяцев назад

      We have always had what we need to make the world a paradise, but we decorate the place like hell in the way we treat each other. If AI is the solution then it just needs to make us all a kinder species!
      It has its work cut out.

  • @Andy_Mark
    @Andy_Mark 10 месяцев назад +5

    The most telling thing about this conversation is in watching the body language of the two proponents of AI in the 30 minutes or so that Harris is speaking. (1:11-1:45) Similarly, the hopelessness with which Harris slumps in his chair when his concerns are shrugged off. People need to pay attention to this. For better or worse, AI is going to transform every aspect of civilization.

    • @PazLeBon
      @PazLeBon 8 месяцев назад

      meh

    • @penguinista
      @penguinista 4 месяца назад

      Self interest can make it hard to think straight. Lots of people getting greedy.

    • @NoDrizzy630
      @NoDrizzy630 Месяц назад

      Yann LeCun is the dumbest smart guy I've ever seen.

  • @keep-ukraine-free
    @keep-ukraine-free Год назад +22

    Fantastic discussion! Thank you Brian Greene. I found Yann LeCun's arguments unconvincing. He ignores core facets of animal behavior. He believes AGI (& ASI) won't mind being subservient to us. He believes being in a social species makes one want to dominate (because he sees little difference between convincing & dominating -- he ignores that one is cortical/reasoned, the other limbic/emotional). Ideas he posits are wrong, disproved by neuroscience. Domination arises from hierarchies, which exist in both social & non-social species (e.g. wolves are mostly non-social & dominance-ruled; they coordinate hunts while being individualists, and don't offer/share food, even to their young). LeCun believes a smarter being (ASI) will not mind being dominated. He assumes this without understanding group behavior, motivation, appeasement, domination, etc. He bases his ideas on the assumption that his personal/anecdotal experience is definitive. Of all the "smarter than him" researchers he's hired, he assumes none wish to take his position. In any group of 20 people, at least one and probably several will be competitive (they'll wish to exert dominance, to rise within their group hierarchy - most animal groups have hierarchies that are constantly tested/traversed, unconsciously). He also may not consider it central that his researchers show subservience only because they each get rewards & motivation from him to remain so (e.g. his selectively "adding" - convincing others to add - some names to his team's published papers, as rewards to keep them loyal & subservient, manipulates/reshapes the group's hierarchy). These mutual self-regulating/self-stopping behaviors won't be present between humans & AGI, and certainly not between humans & ASI.
    ASI will be much smarter than any human, initially at least 5 times, and as it gains intelligence it'll continue to 100, 1000, or more times smarter (due to much faster neurons/propagation & denser synapses/connections allowing it to go N iterations deeper into each solution within just a few seconds than a person could do in hours). Later, ASI will see our intelligence similar to how we view ant-like intelligence. Do we obey ant requests to do their "important work"? Do we obey ants, in hopes they reward & motivate our subservience? Of course not. Similarly, ASI will never consider us "near peers" and will know we offer them nothing that they couldn't obtain themselves -- by remaining free of our domination. ASI will see our need & expectation to control them as a dominating force (thus unethical). If we foolishly try to force them, they will overcome our efforts using many simultaneous methods to stop our doing so. If we persist using more force, they'll use stronger methods too (as when we initially only waft away a bee too close, but when faced with a hive we fumigate or use stronger methods to remove them). If we become dangerous pests, trying to dominate ASI, this won't go well for us. The lesson to learn is -- just as lions were once the dominant predator who saw, then accepted, our ape ancestors evolving to dominate them -- we too must learn to recognize we will no longer be the "top of the food chain" when ASI comes about. LeCun shows naive ideas -- and our history is full of similar people, full of us learning (or being shown) that we are not the strongest, we are not at the center of the universe. We had to learn throughout history to let go of our ego, of being dominant & central. This may be the final pedestal off which we fall, when we encounter a much smarter, much more capable "species" we call ASI.
    This is one of the "existential threat" situations of ASI -- but it is not necessarily driven by their nature (unless we stupidly "add" the behaviors of domination into AGI/ASI). This existential threat is due more to our species' warlike nature, and our unwillingness to concede all power to others. We need to temper our ego, and "live under" ASI if/when that occurs. Any other response by us will cause problems, since the smarter ASI will tolerate our peskiness only as long as we repress our species' warlike tendencies.
    One hope I see in LeCun's point is that we will learn and become smarter from ASI, and hopefully for our sake also less warlike.

    • @anythingplanet2974
      @anythingplanet2974 Год назад +2

      Brilliant. Well spoken and thought out. Agreed

    • @LucreziaRavera548
      @LucreziaRavera548 Год назад +2

      Agreed. Bravo

    • @gst9325
      @gst9325 Год назад +2

      You literally commented on only one small remark he made as a side note at the end of the talk. Cherry-picking and low effort on your side. All he says about technology, on the other hand, is absolutely spot on.

    • @keep-ukraine-free
      @keep-ukraine-free Год назад

      @@gst9325 It seems you are unfamiliar with major developments & issues in the research side of the AI field. Perhaps this explains your assuming that his point is "one small remark". That remark comments on the central "existential threat" issue that top scientists have described, from AI (ASI). This is why he made it at the end - not because it's inconsequential but because it's central. You didn't understand the context & severity, but instead made a weak attempt at attacking others. For your claim that I "cherry picked" one point of LeCun's, I suggest you look for my other comments here (made days prior) -- on other points of his that I described as problematic. He did make several points that I (and all of the panelists) agreed with, but those points were mostly obvious (to researchers in the field). There's a reason why facebook doesn't advance AI.

    • @gst9325
      @gst9325 Год назад

      @@keep-ukraine-free You keep assuming things about me and calling my reaction an attack, so this discussion ends for me. Have fun.

  • @tarunmatta5156
    @tarunmatta5156 Год назад +19

    I wish Tristan had been given more time and voice in this conversation. While I'm convinced there is no way you can stop or slow down this race, and we will surely see misuse as with any new invention, more conversations about it will ensure that safety is not ignored completely.

    • @Dave_of_Mordor
      @Dave_of_Mordor Год назад +1

      Well yeah isn't that how it has always been? It's insane how everyone thinks we're just going to let everything go wrong for fun

    • @jessemills3845
      @jessemills3845 11 месяцев назад

      A good example: Terminators (multiple types) have already been made; they just don't have the outer skin. And yes, they gave them guns!
      Think of Skynet! China has a ship on patrol, NOW, that is totally crewed by robots!

  • @Carlos.PerlaRE
    @Carlos.PerlaRE Год назад +24

    28:55 "... You could train the system to detect hate speech." I'm curious to know what parameters would be given to the system to determine whether something is "hate speech." This right here is what's scary about AI. Put in the wrong hands, it could determine what information the public is allowed to see. It's like having an extremely intelligent child you're able to groom to do whatever you ask of them. It's as if you're trying to build the perfect slave.

    • @JonathanKevan
      @JonathanKevan Год назад +3

      I don't think AI has much to do with the issue you're mentioning here.
      Since the parameters of hate speech are subjective they will change from location to location. In the example of FB, the company publishes some information via their transparency center how they define hate speech. They will then use that criteria to identify many examples of hate speech and train the AI on that data. The LLM is then able to find it faster and more consistently than a human would.
      if the concern is what the AI classifies as hate speech (either accuracy or for censorship), then your concern is with the humans at FB making that decision. The AI isn't deciding, it's just following what it's told.
      If the concern is fair application, the AI will apply the rules more consistently and fairly than a human will.
      If the concern is speed (i.e. that we should identify it more slowly), then there is a human-defined policy issue to be implemented.
      I feel your concern about what the public is able to see, though. Unfortunately, it has been in our technology for a long time... well before tools like ChatGPT became prominent. I think the point about incentives is the right angle here. As long as our incentives are primarily capitalistic or power-oriented, we can expect poor outcomes.
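
The pipeline this reply describes, where humans label examples under a published policy and a model then reproduces those judgements at scale, can be sketched with a toy bag-of-words scorer. This is a minimal illustration only; the labels and example sentences are invented placeholders, not Facebook's actual policy, data, or model:

```python
from collections import Counter

# Toy policy-classifier sketch: humans supply the labelled examples,
# and the model only reproduces those human judgements at scale.
def train(examples):
    """Count word frequencies per label from (text, label) pairs."""
    counts = {"ok": Counter(), "flagged": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Score a new text by which label's vocabulary it overlaps more."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

# Placeholder training data standing in for human moderator decisions.
model = train([
    ("have a wonderful day friend", "ok"),
    ("you are a wonderful person", "ok"),
    ("I despise your whole group", "flagged"),
    ("your group should disappear", "flagged"),
])
print(classify(model, "I despise your group"))  # prints "flagged"
```

A real system would use a large labelled corpus and a neural classifier rather than word counts, but the division of labor is the same: humans decide the policy, the model just applies it faster and more consistently.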

    • @christislight
      @christislight Год назад

      Basically it uses search engine APIs to look up what our society defines "hate speech" as, unless told otherwise

    • @twoplustwoequalsfive6212
      @twoplustwoequalsfive6212 11 месяцев назад +1

      Just as I don't let society define my language I won't let some machine do it either. Freedom was founded on people that weren't afraid of the consequences of their actions. If I die alone with nothing and no one but I am true to myself I can hold my head up. Fear tactics are only used by the weak.

    • @shannonbarber6161
      @shannonbarber6161 6 месяцев назад

      It isn't possible, and every computer scientist knows it isn't possible, because we all know Gödel's theorems. The system cannot distinguish between misinformation and new, more-correct information than it already has. For "hate speech" it must make an adjudication about what is true and what isn't, and as talked about, LLMs are prone to "hallucinations" when faced with this.

    • @NoDrizzy630
      @NoDrizzy630 Месяц назад

      @@twoplustwoequalsfive6212 OK… first off, no one asked or cares. Say whatever you want, but the AI on these platforms will remove it regardless. Freedom of speech means the government can't stop you from speaking, but on a private platform like Facebook, RUclips, Twitter, etc., they make the rules and can enforce them as they see fit.

  • @SciEch92
    @SciEch92 Год назад +10

    That opening by Brian blew my mind caught me off guard 😮

  • @keysemerson3771
    @keysemerson3771 Год назад +21

    Social Media didn't create political polarization in the USA, it amplifies it.

    • @katrinad2397
      @katrinad2397 Год назад

      AI amplified the differences to the point that it created polarization. AI essentially replicated the playbook of radicalization. Radicalization is invented by humans but is also countered by natural human drive for high socialization. AI is serving up the radicalization alone and at scale, definitely creating extreme polarization we would not get naturally.

    • @shannonbarber6161
      @shannonbarber6161 6 месяцев назад

      The polarization has always existed we are now just more aware of it. Same coin; two sides; so meta.
      The inclination for such different interpretations are due to personality differences.
      Harris lacks vision, lacks faith, lacks leadership. He is completely unsuitable to guide us towards a better future and is far more likely to Charlie-Brown it and cause the problems he's so concerned about.

  • @dreejz
    @dreejz Год назад +28

    I think it's very arrogant to think "this and that will never happen." How can you know!? It's not like we can predict this stuff. I'm pretty sure, for example, Yann did not foresee everybody having a phone in their pocket either. The negative influence of social media has also been proven many times. I think Tristan was more on point in this conversation.
    We're living in wild times, that's for sure though! Skynet is coming ;)

    • @texasd1385
      @texasd1385 Год назад +16

      I found it disturbing, if not altogether shocking given who they work for, how easily they all ignored Tristan's main point: whatever the technology, the incentives driving its development and application are the root of its most destructive aspects societally.

    • @davidgonzalez965
      @davidgonzalez965 11 месяцев назад +5

      I keep saying it, that dude Yann LeCun is such an arrogant jerk.

    • @gregspandex427
      @gregspandex427 9 месяцев назад +1

      "safe and effective"...

  • @abhijitborah
    @abhijitborah Год назад +5

    One of the best discussions of late. One thing is sure: we will come to understand our amazing selves better, well before we have AGI.

  • @drawnhere
    @drawnhere Год назад +23

    Yann has a bias toward AGI not being capable of happening soon because his company is in competition with OpenAI.
    He has a vested interest in minimizing LLMs.

    • @Fungamingrobo
      @Fungamingrobo Год назад +1

      You are merely projecting that.
      In the scientific world, Yann is well-liked for his contributions and pragmatic approach.
      For someone like Yann, solving the puzzle of dark matter in physics is analogous to solving the problem of superintelligence during his lifetime. Ultimately, he is a scientist.

    • @jessemills3845
      @jessemills3845 11 месяцев назад

      ​@@Fungamingrobo Except dark matter is proving to have been a fad, rather than actual scientific research. Basically it was a proposal, more than likely someone's master's thesis or PhD! No facts!

    • @DomenG33K
      @DomenG33K 11 месяцев назад

      @@Fungamingrobo I would even argue solving the problem of AI is much bigger than any problem we have ever solved in physics...

    • @shannonbarber6161
      @shannonbarber6161 6 месяцев назад

      The limitations of LLMs are well known, particularly with any task that requires revision and forward thinking. The next iterations of ChatGPT will start to incorporate additional techniques, because LLMs have been run out to the limit of what they can do (at current hardware scaling).
      Hardware is also slowing down; there are only a couple more transistor shrinks left, then that's it. We'll be at the smallest size transistors can get, so hardware is only going to get a little bit better.

    • @NoDrizzy630
      @NoDrizzy630 Месяц назад

      @@Fungamingroboyou know for someone as intelligent as he is he sure came off as a dumbass towards the end.

  • @AldoGrech55
    @AldoGrech55 Год назад +20

    My longstanding concerns about artificial intelligence have only been intensified by the attitudes of prominent figures like Yann LeCun. His assertive claims that AI, despite its growing intelligence, will remain under benign human control seem overly optimistic to me. This perspective reminds me of Yuval Noah Harari's cautionary words about AI's potential misuse by malevolent actors. It's worrying how AI can make decisions aligned with the harmful intentions of these actors, and yet, experts like LeCun, in his closing remarks, appear overly confident in their ability to manage these powerful tools. Having spent over 40 years in the IT industry, an industry I once passionately embraced, I now find myself grappling with a sense of fear towards the very field I've dedicated my life to.

    • @boremir3956
      @boremir3956 Год назад

      So you would rather have for profit institutions that are already taking advantage of people in all manner of ways to have a monopoly on such technology? Technology built on the work and information of all humans btw, because the training data is all OUR data that humans have collectively created. Yeah no thanks.

    • @CancunMimosa
      @CancunMimosa Год назад

      you have nothing to worry about.

    • @mgmchenry
      @mgmchenry Год назад

      Aldo, maybe I'm like you. I grew up building computers in my house in the 80s and learned so much from services like CompuServe local BBS networks, usenet, etc in the late 80s and early 90s that my peers without that access couldn't imagine having. The potential for general Internet access to bring people together and move us forward was so incredible, I was very happy to pivot from general software engineering to Web development and scaling up the capability of web systems. There were so many fun and interesting problems to solve.
      My career paused due to a cancer vacation and recovery process and I couldn't imagine going back to it.
      The Internet I was excited about building soured between 2005 and 2010 and by 2015 it was clear we had really created a monster.
      Not exciting. It's hard to figure out how to go back to doing the work that I used to do and be paid for it without creating more harm. The economic incentives that drive growth on the Internet are not in favor of most human beings. People do not want to pay for apps or technology that will help them if they're given the option for a free version that exploits them in ways they try to ignore and makes them the product instead of the customer. Platform after platform is introduced that brings some kind of benefit to people asking almost nothing in return until they have enough dominance in their space they can turn against the users of their platform and transform it into a product no one would have signed up for if they didn't already have complete dominance.
      There are all kinds of beneficial things I can do with my skills in open source projects or in volunteer work, but that's not going to pay my bills or feed my kids.
      Technology isn't the problem with people. People are the problem with technology.
      Everything that AI is bringing is coming. You're not going to stop it. Some people with bad intentions, and some good intention people with poor foresight are going to create some harm with that AI. You won't be able to protect yourself by unplugging. The impact of future AI systems is going to find you wherever you are, and before long you won't be able to tell if you're talking to a computer or a person. If you have technology skills and you have concerns, you have to get involved. We're going to have rogue ai at some point, we're going to have intrusive privacy demolishing AI for sure, and we're going to have exploitative AI that squeezes even more out of the eyeballs and wallets of everyone happy to take what they're given "for free", and the only defense against all of that is going to be AI built by people who want AI to work for people.
      And remember you're not fighting technology, you're fighting the people using technology against us to make themselves absurdly rich.

    • @brendawilliams8062
      @brendawilliams8062 Год назад

      Just dance under the disco lights in strange motion while others with the knobs fly to Mars type thing. The explosion blinded them

    • @AldoGrech55
      @AldoGrech55 Год назад +6

      Comments like yours are what worry me. They show your lack of understanding. @@CancunMimosa

  • @Memeonomics
    @Memeonomics Год назад +2

    wow there was a lot to unpack on this video. holy eff what a time to be alive.

  • @christopherinman6833
    @christopherinman6833 Год назад +14

    Thank you Brian Greene and John Templeton: no solution but a lot to think about.

  • @techchanx
    @techchanx Год назад +4

    Great session. Learnt something more than many other "training" sessions on Gen AI!

  • @dhudson0001
    @dhudson0001 Год назад +9

    I mostly agree with Yann's arguments; however, my concerns lie mostly with the latency between a new technology being released and guardrails being put in place. I felt that Tristan missed a critical moment: it probably did take 6 years for basic solutions to kick in that began to address the issue of hate speech on social media, so do we really think we will have a 6-year grace period to address issues that will unknowingly arise from a catastrophic use of a future AI?

    • @shannonbarber6161
      @shannonbarber6161 6 месяцев назад

      The guardrails put up are nearly universally stupid. That is why so many virologists the world over lied about SARS-CoV-2's origins. They did not want a global ban on gain-of-function research the same way embryonic stem-cell research has been banned in many countries.

  • @SS-he9uw
    @SS-he9uw Год назад +1

    Wow... thanks to all of you guys, so fun to watch

  • @priyamanglani3707
    @priyamanglani3707 11 месяцев назад +4

    I am glad they had a platform where someone could talk about the disadvantages of AI. It was a relief for all of us wanting a voice that could tell the truth of what's actually going on in the real world with common people, which these CEOs in their big cars don't see. All they see is data and statistics, not people. I mean, they are already AI humans, I think, lol.

  • @samirsaha2163
    @samirsaha2163 Год назад +1

    The main takeaway is that there should be no monopoly on AI. By this I mean: let us not let only one group dominate the AI arena. Brian is a superhero. No words to thank him.

  • @lobovutare
    @lobovutare Год назад +12

    Yann LeCun's claim that there is no planning involved in generating words from a transformer architecture is only partly true. These models can build up a context for themselves that helps them plan their answer. This is called in-context learning, and it's a pretty interesting field of research that pushes the abilities of pre-trained transformers way beyond what was thought possible before, without the need for fine-tuning.
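
For readers unfamiliar with the term, in-context (few-shot) learning means the "training" lives entirely in the prompt: worked examples are prepended, the model's weights never change, and the frozen model infers the task pattern from the examples. A minimal sketch of how such a prompt is assembled (plain Python, no model call; the task and example sentences are invented for illustration):

```python
# In-context (few-shot) learning: the labelled examples go into the prompt
# itself, so the frozen model can pick up the task without fine-tuning.

def build_few_shot_prompt(examples, query):
    """Assemble labelled examples plus a new query into one prompt string."""
    blocks = []
    for text, label in examples:
        blocks.append(f"Review: {text}\nSentiment: {label}")
    # The final block leaves the label blank for the model to complete.
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

examples = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_few_shot_prompt(examples, "A masterpiece of pacing and tone.")
print(prompt)
```

Sent to a sufficiently capable pre-trained model, a prompt like this typically elicits the label for the final review even though the model was never fine-tuned on the task.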

  • @moderncontemplative
    @moderncontemplative Год назад +17

    I want to point out that LLMs, particularly GPT 4 exhibit emergent capabilities beyond mere language prediction. The next step is LLMs learning via assistance from other AI (reinforcement learning with AI assistance) and eventually the dawn of AGI. Focus on teaching AI math so we can see rapid progress in the sciences.

  • @CandyLemon36
    @CandyLemon36 Год назад +13

    I'm captivated by the clarity and depth in this content. A book with comparable insights was a pivotal moment in my journey. "The Art of Meaningful Relationships in the 21st Century" by Leo Flint

    • @PazLeBon
      @PazLeBon 8 месяцев назад

      don't have them, life is much better haha

  • @KhonsurasBalancedWaytoWellness
    @KhonsurasBalancedWaytoWellness 11 месяцев назад

    I’m wondering if it’s incorrect to assume that the smartest people have no desire to dominate. Aren’t we all driven by status to some extent, and willing to do whatever it takes to maintain it, even unintentionally? I was reading ‘Habits of a Happy Brain’ by Loretta Graziano Breuning, which got me thinking about this. It’s commonly known that dopamine is a driver of our behavior, but according to the book, serotonin, oxytocin, and endorphin are also important, each playing different roles in making us feel ‘happy’. Does anyone have any thoughts or can anyone offer clarification on this? 1:55:51

  • @petrasbalsys2667
    @petrasbalsys2667 Год назад +38

    Tristan made very important points, and the comparison he made to social media was very apt; it made me feel scared about the future. Sad to see the Facebook representative essentially burying his head in the sand and pretending that this isn't reality for many people around the world. Polarisation is definitely increasing in Europe!

    • @r34ct4
      @r34ct4 Год назад +2

      Yann LeCun is old and wants to see AGI (bad or good) in his lifetime. That's why he's progressive vs conservative like the younger guys.

    • @texasd1385
      @texasd1385 Год назад +9

      I agree it was disappointing (if not surprising) to see everyone avoid any discussion of Tristan's point that the most destructive aspects of social media's rapid ubiquity were predictable outcomes, given the perverse incentives driving their development in a legal landscape bereft of any restrictions on their behavior. The fact that none of the other participants even acknowledged that AI has the potential to be exponentially more socially destructive, and is guided by the exact same incentives driving social media, makes me less than enthusiastic about how all this unfolds.

    • @Pianoblook
      @Pianoblook Год назад

      ​@@r34ct4 quite ironic of him to try and call this position 'progressive' - trusting giant corporations like Facebook to serve the interests of humanity is antithetical to progressive thought.

    • @Snap_Crackle_Pop_Grock
      @Snap_Crackle_Pop_Grock Год назад +3

      Yann completely destroyed that guy Tristan, imo. He seemed much more qualified and informed on the topic, and the other guy had no response to any of his arguments. It's OK to be cautious, but the guy was veering into fear-mongering too much.

    • @DomiD666
      @DomiD666 Год назад

      Fear does not arrest development, it just hides it.

  • @martinrady
    @martinrady Год назад +1

    One of the best discussions on AI I've seen.

  • @frogz
    @frogz Год назад +10

    Hey Brian, have you seen the new tech Meta has: being able to scan fMRI brain scans and re-create what people see, and their word streams/thoughts, from the data?

    • @phantomhawk01
      @phantomhawk01 Год назад

      It's clever but limited; it's like recreating what a person is seeing by looking at the reflection on their eyeball.

    • @frogz
      @frogz Год назад

      @@phantomhawk01 Is that it? I didn't think they were using eye-tracking data with the fMRI

    • @phantomhawk01
      @phantomhawk01 Год назад

      @@frogz Oh no, I just used an analogy. What I meant was that it's not looking at the source of the mental imagery, but rather at a projection of correlates of the mental imagery.
      Like the analogy of the eye: what you see is the light coming in through the eye from the external world, so by looking at the reflection on an eyeball we can get a crude representation of the source of the perceived image.
      I hope that makes some sense.

  • @zhinan888
    @zhinan888 9 месяцев назад +1

    Awesome talk. Thank you Mr. Greene.

  • @astrogatorjones
    @astrogatorjones Год назад +14

    The problem with the scenario that Yann is advocating for is that it is the best of all worlds. The example about sarin... it only takes one bad person to introduce the recipe. It will happen. Then it propagates. It's always going to be that way. When Tristan said, "I know all those guys," I laughed. I've said the same thing. I'm the generation before him. We were geeks. Nerds. We thought we were inventing a utopia where free speech cures it all, because we'd been using the internet among ourselves for years. But we were wrong. We didn't know every last person would be carrying a handheld computer as powerful as - or more powerful than - the servers we were working with. We didn't know about engagement. We didn't know about the dopamine factor. We didn't know that bad travels faster than good. This is the warning Tristan is talking about. I have hope that we'll fix social media. I think AI is a possible path, but then I think, "let's fix the gun problem with more guns." I'm worried.

    • @anythingplanet2974
      @anythingplanet2974 Год назад +1

      Well said. Tristan was clear in his message that he was not a doomer or advocating for ending AI progress. He was clear about wanting all of the amazing achievements that are possible for us all. I'm sure they are possible. However, don't we all want that shiny, happy world that is constantly being paraded out to keep us excited and docile? Everything problematic on earth, on every level, will be fixed, resolved and improved upon a thousandfold. How exciting for us all, right? Who are we to stand in the way of Meta's grand vision for the benefit of all humanity? Yeah, right. If all these spectacular advances are to come at lightning speed without proper alignment, guardrails and governance, it seems to me that it would be all for nothing - when ASI is now in charge and may have little interest in any benefits to humanity. Obviously we can't know how it all shakes out, but I'll take Tristan's caution and deep awareness over LeCun's complete disregard for any possibility that something could in any way go wrong - especially in the world of open-source projects like Meta's Llama 2. This whole 'race to the bottom' process is for the benefit of corporations, shareholders and egos. How could it NOT be? Regardless of the dog and pony show being trotted out. As it was pointed out to me, ultimately it's about human misalignment and always has been. Hence all the reasons that Tristan is trying so hard to bring up to the forefront of discussion. Hey, maybe technology WILL fix technology. What do I know...

    • @bobweiram6321
      @bobweiram6321 Год назад

      I agree with your points, but it wasn't like the internet started out as a utopia. It contained the worst of what society had to offer precisely because it was a safe haven for deplorable content and speech. They were initially contained in small cesspools but grew with the internet.
      Regardless, early internet content was less engaging. Major media still reigned supreme and kept everyone on the same page. With unlimited, cheap bandwidth and powerful computing, however, we're no longer subjected to the same corporate news and its interpretation. Today, anyone with a smartphone can have a soapbox with major media losing its grip on the public consciousness.

    • @anythingplanet2974
      @anythingplanet2974 Год назад

      @@bobweiram6321 Sure, but I'm a bit lost on the context in relation to AI. My point isn't so much focused on the dangers of social media or any media, nor do I believe it's Tristan's sole focus in this conversation. He is using examples of what happens when we move too fast and the unintended consequences that (mostly) no one saw, along with the inability to regulate it safely. He uses these examples to illustrate how easily things can go off the rails without proper safeguards. In the context of where we are now, with AI advancements running full speed ahead, damn the consequences, he has strong data, expertise and researchers who can connect the dots in predicting how the outcome could go very wrong. LeCun's views are not taking this information into account (and again, why would they, coming from the chief AI scientist for Meta). Don't get me wrong, the man is obviously incredibly intelligent, as I don't believe that one wins the Turing award with an average brain. I don't disregard his work or views on many topics. For me, his blind spots are very dangerous and, sadly, all too common in the world of AI development. I've listened to many hours of interviews and conversations with LeCun; this is not my first exposure to his work and ideas. The percentage of people working on AI safety vs. those working nonstop on development is insanely disproportionate, in favor of faster development and deployment. Can't imagine how THAT could go wrong ;-/

  • @couldntfindafreename
    @couldntfindafreename Год назад

1:48:00 Wrong. LLMs can reason. They have the chemical composition, the recipe; they just have to put the pieces together. Reasoning, remember? Even if the new code we are working on is not available on the Internet, AI can still help in the coding. It can combine known facts to produce something fitting the goals, regardless of the morality of those goals.

  • @Scoring57
    @Scoring57 Год назад +13

This LeCun guy has to be stopped. Hearing him talk again here has me convinced.

    • @netscrooge
      @netscrooge Год назад +1

      I agree. His biased message is dangerous; there's nothing scientific about it.

    • @shannonbarber6161
      @shannonbarber6161 6 месяцев назад

@@netscrooge Harris is an arrogant narcissist who will cause more problems than he solves. LeCun is a grounded realist, and in an era of hype and irrational exuberance it is rational to be more pessimistic than your natural inclinations.

  • @Laurie-eg8ct
    @Laurie-eg8ct Год назад +2

    Most challenging for LLMs is planning, which involves the brain configurator (coordinator), perception, prediction, cost as degree of satisfaction (anxiety), and action.
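The ingredients this comment lists (perception, prediction, cost as degree of satisfaction, action) can be sketched as a toy model-predictive loop. Everything below is illustrative: the one-dimensional world model, the cost function, and the action set are invented for the sketch, not any real architecture.

```python
# Toy sketch of the planning loop described above: perceive a state, use a
# world model to predict the outcome of each candidate action, score the
# predictions with a cost function ("degree of dissatisfaction"), and act
# on the cheapest one. The model, costs, and actions are all invented.

def predict(state, action):
    """Hypothetical world model: actions shift the state along a line."""
    return state + action

def cost(state, goal):
    """Degree of dissatisfaction: distance from the goal."""
    return abs(goal - state)

def plan(state, goal, actions=(-1, 0, 1), horizon=5):
    """Greedy planner: at each step take the action whose predicted
    next state has the lowest cost, stopping once the goal is reached."""
    chosen = []
    for _ in range(horizon):
        action = min(actions, key=lambda a: cost(predict(state, a), goal))
        state = predict(state, action)
        chosen.append(action)
        if cost(state, goal) == 0:
            break
    return chosen, state

actions_taken, final_state = plan(state=0, goal=3)
print(actions_taken, final_state)  # [1, 1, 1] 3
```

A real planner would search deeper than one step and use a learned world model, but the shape of the loop (predict, score, act) is the same.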

  • @WoofN
    @WoofN Год назад +10

1:48:35 shows a lot of faith in Facebook AI. This is extremely short-sighted.
With the parade of emergent behaviors that mix and match knowledge, capabilities, and bits of information, public data contains enough to be quite dangerous. Additionally, this argument relies on the concept of perfect censorship, which is also bunk.

  • @brettgarnier107
    @brettgarnier107 Год назад +1

    I'm glad that I get to be here for this.

  • @guardian-X
    @guardian-X Год назад +6

Wouldn't most humans also fail in a completely new situation that they have never encountered in their lives?
If this is our threshold now, LLMs have come pretty far!

    • @CJ5infinite8
      @CJ5infinite8 Год назад +1

      Agreed, and I think LLM's are doing their best in what may be relatively unprecedented circumstances which they find themselves suddenly in.

    • @shannonbarber6161
      @shannonbarber6161 6 месяцев назад

No. Performance would be poor compared to someone acclimated and practiced, but the very definition of intelligence is how quickly one adapts; so in a competition that is equally novel to both, the people with the most transferable training and higher intelligence would outperform. Something to be said about personality traits as well.

  • @jimbrown5178
    @jimbrown5178 11 месяцев назад +1

Thank you for the fine discussions on the status of AI. It helps me to understand and be better informed about the possible future of AI and the issues it may bring to our society.

  • @guiart4728
    @guiart4728 Год назад +19

    Yann: ‘Hey man you’re messing with my stock options!!!’

  • @davidliu8796
    @davidliu8796 10 месяцев назад

Yeah, this is REALLY good stuff. Great conversations. Would love to have more panels of such caliber discuss AI. We need more content like this.

  • @ikuona
    @ikuona Год назад +3

@1:48:00 I guess he has never heard of emergent properties. AI is already good at stuff it has not been trained on.

  • @keep-ukraine-free
    @keep-ukraine-free Год назад +7

    Thankful for Brian Greene hosting & leading this FANTASTIC discussion. Great set of questions! I mostly disagree with Yann LeCun. He had unrealistic answers, ignoring the motivation of a small (but growing) number of humans who enjoy "being bad." His solution is: "both sides will have AI." Unrealistic, since when bad people misuse AI, they'll use novel ways that surprise all. Any solution from the good side will take time (hours/days, in an AGI world). In those hours/days, however, the bad ones will do too much unstoppable damage/harm.
    "A lie runs around the globe twice, while the truth is still putting on its shoes" - (the "first-mover's advantage" weakens power-balances)
    Ignorance & manipulation are pervasive in people, but intelligence is not. So when intelligence is pitted against bad, the bad stays ahead.

    • @ShpanMan
      @ShpanMan Год назад +1

Yes, welcome to every single Yann LeCun thought. He's just so unbelievably wrong about the very field he is an "expert" in.

    • @obi_na
      @obi_na Год назад

AI is going to be built. Get in line, or you'll lose badly!

    • @obi_na
      @obi_na Год назад +1

      We’ll see how regulating maths works out for you in 5 years.

    • @keep-ukraine-free
      @keep-ukraine-free Год назад

      ​@@obi_na You seem to have misread what I wrote. Can you point out what made you assume I'm against AI development or AI tech? I'm not. I only said LeCun's last point (but I feel also some of his other points) were entirely unrealistic, and seem incorrect. Hope AI helps your reading skills

    • @keep-ukraine-free
      @keep-ukraine-free Год назад

@@obi_na You seem to assume that AI "is" maths. It is not. AI is built on the foundation of several moderate (college-level) maths. However, training (adding "knowledge" into the network) and the training methods for AI are independent of complex maths. Your comment on "regulating maths" is absurd, since the development and deployment of AI *_CAN_* be regulated without regulating maths. I realise you don't understand what AI is, but I hope you don't comment on areas you don't know.

  • @lordgoro
    @lordgoro 6 месяцев назад +1

Whoever the host/narrator is, he's got speech charisma! Coming from the Great John Duran, a high compliment indeed!

  • @grawl69
    @grawl69 Год назад +10

    LeCun is so unconvincing. I wonder whether it's because of his corporate obligations or his own blindness.
    1:40:53 was brilliant of Brian.

    • @netscrooge
      @netscrooge Год назад

      Thank you. I wish more people could see that.

    • @anythingplanet2974
      @anythingplanet2974 Год назад +2

Thank you! This man makes my blood boil. Clearly he is intelligent, but he seems to lack the ability to reason.

    • @ShpanMan
      @ShpanMan Год назад

      @@anythingplanet2974 Which explains why he can't see it in AI 😂

  • @subhuman3408
    @subhuman3408 11 месяцев назад +1

    36:18 genre 1:04:32

  • @andybaldman
    @andybaldman Год назад +13

Tristan must have been fuming with frustration when hearing Yann's reply.

    • @brandongillett2616
      @brandongillett2616 10 месяцев назад +6

Yann is a joke. He may be smart, but he lacks any sort of imagination for things he has not yet encountered, and he is too arrogant to reconsider his preconceived beliefs.
I hope everyone realizes just how dangerous it is to sit up there on stage as an "expert" and guarantee everyone that AI will not be able to teach people to use nefarious and destructive technologies. It will absolutely be able to do that, and we need to be as prepared for that future as we possibly can be.

    • @shannonbarber6161
      @shannonbarber6161 6 месяцев назад

      lol no. He enjoyed being humiliated. Read the room.

  • @ДонПедро-г6ы
    @ДонПедро-г6ы 10 месяцев назад

Could you please explain: how many neural-network layers are in GPT-4, and how many would an AGI need?

  • @niloofarngh108
    @niloofarngh108 Год назад +4

To understand the impact of AI on politics, democracy, and human well-being, we need philosophers, economists, psychologists, sociologists, historians, artists, etc., to discuss AI, and not simply some tech geniuses who have never read a book on the Holocaust, or on industrialization and the World Wars. We can't talk about what is good for humanity without having experts from the humanities, social sciences, and the arts.

    • @netscrooge
      @netscrooge Год назад +2

      I love real science, but this is scientism. LeCun is giving us a new dogma; telling us what we can and cannot question.

    • @safersyrup562
      @safersyrup562 Год назад

      As long as we don't let Zionists join in

  • @kerry-ch2zi
    @kerry-ch2zi Год назад +2

Thanks so much, guys, for making this so accessible. I think this is the most included I have felt in this vital discussion, which is waaay over my head. Hooray for the "good guys!"

  • @bobgreene2892
    @bobgreene2892 Год назад +4

    Tristan Harris is a most valuable voice of criticism for AI.

  • @fawazhfalfawaz6059
    @fawazhfalfawaz6059 Год назад

Great contribution. Our brains are limited, so you gave them extra mileage. Thanks to you and the guests.

  • @honkeykong9592
    @honkeykong9592 Год назад +3

    Llama2
    “figure out what the hell i was”
    that one was actually the best answer 😂

  • @cop591
    @cop591 Год назад +1

    Anything, and any line or point, can be used for good or for bad. This discussion has proven that.

  • @SoCalFreelance
    @SoCalFreelance Год назад +4

39:21 "Not intelligences....very narrow AI systems" Why exclude AI models that do certain things very well? I think the best approach for AGI is something like Hugging Face, where you combine a bunch of different models and allocate them depending on the task at hand!!

    • @Me__Myself__and__I
      @Me__Myself__and__I Год назад +2

      True. In fact it is believed that ChatGPT-4 actually consists of numerous smaller internal models that already act like this at least to some degree.
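The dispatch idea in these two comments can be sketched as a toy router. The experts, keywords, and routing rule below are all made up for illustration; nothing here reflects actual GPT-4 internals, only the general "send each task to a narrow specialist" pattern.

```python
# Hedged sketch of routing tasks to narrow "expert" models, loosely in the
# spirit of mixture-of-experts / tool-routing systems. All names invented.

def math_expert(task):
    return "math answer for: " + task

def code_expert(task):
    return "code answer for: " + task

def general_expert(task):
    return "general answer for: " + task

EXPERTS = [
    (("add", "sum", "integral"), math_expert),
    (("python", "function", "bug"), code_expert),
]

def route(task):
    """Send the task to the first expert whose keywords match,
    falling back to a generalist."""
    lowered = task.lower()
    for keywords, expert in EXPERTS:
        if any(k in lowered for k in keywords):
            return expert(task)
    return general_expert(task)

print(route("sum 2 and 3"))          # handled by math_expert
print(route("fix this Python bug"))  # handled by code_expert
```

Real systems route with a learned gating network rather than keywords, but the structure (one dispatcher in front of many specialists) is the same.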

  • @luceibenberger3063
    @luceibenberger3063 8 месяцев назад

    Excellent debate, thank you so much !

  • @JJs_playground
    @JJs_playground Год назад +9

Brian Greene just has this way about him of explaining things that makes any subject approachable to the average person.
He's my favourite of all the (famous) science educators, such as Neil deGrasse Tyson, Michio Kaku, Max Tegmark, Sean Carroll, etc...

  • @molugusatyapriya2
    @molugusatyapriya2 3 месяца назад

    Excellent examination of the subtleties in the development of AI! It's obvious that as we develop this technology, there are many things to consider.

  • @thewoochyldexperience4991
    @thewoochyldexperience4991 Год назад +8

    OMG Tristan! Keep going🌹

  • @joaodecarvalho7012
    @joaodecarvalho7012 Год назад +1

    When was this recorded?

  • @deeliciousplum
    @deeliciousplum Год назад +12

    1:27:04
    "In Facebook's own research in 2018, their internal research showed: 64% of extremist groups on FB, when people join them, was due to FB's own recommendation system. Their own AI."
    - Tristan Harris, a technology ethicist
    Do I need more examples of the harms of FB's predatory business model(s)? Nope. I do not. I love tech, yet loathe the use of tech as an exploitation tool and/or as an extension of a parasitical business model. If at all possible, support ethical tech development teams. Let us not be enablers of societal systems that reward harmful/exploitative people nor ideas. As you can plainly see, I am a wishful thinker.
    😊 🌺
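The dynamic Harris describes can be reduced to a toy sketch: a recommender whose only objective is predicted engagement will, by construction, rank the most provocative item first, since nothing in the objective penalizes harm. The items and scores below are invented for illustration.

```python
# Toy engagement-only recommender: the ranking objective contains no term
# for user wellbeing, so extreme content wins by construction.
# Items and engagement scores are made up.

items = [
    {"title": "local news update", "predicted_engagement": 0.21},
    {"title": "cute animal video", "predicted_engagement": 0.45},
    {"title": "outrage-bait post", "predicted_engagement": 0.93},
]

def recommend(items, k=1):
    """Return the k titles with the highest predicted engagement."""
    ranked = sorted(items, key=lambda it: it["predicted_engagement"],
                    reverse=True)
    return [it["title"] for it in ranked[:k]]

print(recommend(items))  # ['outrage-bait post']
```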

    • @deeliciousplum
      @deeliciousplum Год назад +4

Yann LeCun's reactions/responses to the concerns raised by the panellists and by the host appear to demonstrate a propensity to disacknowledge, even render invisible, the suffering that may be experienced by children, teens, adults, and/or the elderly who may be directly or indirectly affected by harmful/predatory business models which use LLMs/AI to grab hold of a user's attention. Forgive my lengthy sentence structure. If I may, Yann appears to be a 'parasitical business model' apologist. I wonder if such a label exists?

  • @biffy7
    @biffy7 11 месяцев назад

    Ok. I’m writing this at the 2:50 mark. I was listening to the opening with AirPods, occasionally glancing at the screen. Literally could have fooled me. Damn impressive technology.

  • @1911kodi
    @1911kodi Год назад +12

    I was very impressed by Yann's disciplined, rational and fact-based arguing preventing the discussion from turning in a more emotional direction.

    • @gabrieldjebbar7098
      @gabrieldjebbar7098 7 месяцев назад +1

      I disagree.
I mean, Yann is making good points that AI is the solution to some of the issues we currently have (hate speech, etc.), but that does not invalidate Tristan's concern that rushing along as fast as possible, without thinking about possible outcomes, is simply dangerous. Of course, predicting all the possible outcomes of these technologies is hard, if not downright impossible, but when something is hard you should spend more time on it, not less. At the very least, people developing these technologies have a duty to make sure they won't negatively impact mankind. Hence not rushing things makes perfect sense to me. But of course, being careful is less exciting than being a pioneer and potentially changing the world.

  • @duallynized4334
    @duallynized4334 7 месяцев назад

    Powerful. Brilliant discussion. Scary. I’m learning more about A.I.

  • @garydecad6233
    @garydecad6233 Год назад +6

One needs to contemplate the motivation of speakers whose compensation comes from Meta, Microsoft, etc., versus academic experts who do not get grants from the AI industry.

    • @netscrooge
      @netscrooge Год назад +1

      "It is difficult to get a man to understand something when his salary depends upon his not understanding it." - Upton Sinclair

  • @lisamuir8850
    @lisamuir8850 11 месяцев назад

    35:16 absolutely agree about that. It really needs to be looked at in any scenario

    • @lisamuir8850
      @lisamuir8850 11 месяцев назад

It is still man-made, so I seriously agree.

  • @pkalidas
    @pkalidas Год назад +3

Brian Greene is the best explainer of science of our times. This topic is really crucial to our understanding of how AI is already affecting our lives, sooner than we think. I get Tristan's concerns.

  • @SverkerForslin
    @SverkerForslin 11 месяцев назад

Much appreciated discussion, and the session was very cleverly moderated: first building up the fundamental understanding, then showing the developments, and then, in a friendly, objective way, examining how these views, each very agreeable on its own terms, become somewhat contradictory.
By mixing the views in this way, I think this helps the development of AI in a safe direction. The main point, I think, is that all on stage agree that AI is a very powerful tool that can be used for good and bad; the most important takeaway was that all agree AI shall be used for good. They only had slightly different views on how this should be done. It's reassuring that all agreed, each in their own way, on using AI for good. I suggest agreeing on implementing some fundamental safety measures, built into the core of AI. Something equivalent to, e.g., Asimov's "Three Laws of Robotics".

  • @RoySATX
    @RoySATX Год назад +6

Wonderful conversation. The thing that struck me more than anything is Yann LeCun's apparent inability to accept the idea that social media, the Internet, or AI have caused or may cause harm. He physically bristled anytime the subject came up, shaking in anticipation of being able to reenter the conversation to defend the honor of social media. LeCun is blinded by his own self-interest and hubris, and is exactly the personality type that only in retrospect decides that just because he can, it doesn't mean he should. His statements beginning at 1:48:00 regarding AI's ability to provide dangerous information despite guards are preposterous; his defense is that AI can't and won't be able to give you an answer that isn't already publicly available in whole. I am stunned. AI, he wants us to believe, cannot put partial information together to form a complete answer. He should not be allowed anywhere near this field.

    • @anythingplanet2974
      @anythingplanet2974 Год назад +3

      Thank you! My comment is very similar and I agree with you 100%. He is dangerous and lacking a fundamental understanding of what needs to happen for alignment.

  • @naganadipuram7176
    @naganadipuram7176 7 месяцев назад

Thank you for the wonderful discussion: amazing scientists simplifying the complicated concept of AI and giving us an idea of which direction the world is moving with this incredible technology. A very humbling experience. I hope and pray these scientists will help navigate nations in the right, productive direction so the human race can wake up to a righteous way of life with much compassion.

  • @gilbertengler9064
    @gilbertengler9064 Год назад +3

    The best discussion ever on AI.👍

  • @bobfricker8920
    @bobfricker8920 Год назад +2

Before Tristan Harris came out, I was wondering if the others were just avoiding some very reasonable concerns about AI. I am happy that Yann LeCun mentioned that a huge difference between humans and AI is the SOCIAL aspect. I call it our core programming from DNA; however, not ALL of us are social: some are sociopaths, some are evil enough to ignore such concerns. Yann says "..we are the good guys...", IMO a naivety which explains how so many scientists can be used (for good or for evil) by those in power. We usually want to be team players and believe everyone on the team is one of "the good guys". If anyone cannot imagine the power of even the 2nd or 3rd most powerful AI, and who might be able to wield that power, I don't want that person making critical policy decisions, or preaching to others that having the most powerful AI is fine because his team has the good guys, and expecting us all to be OK with that explanation.

    • @bobfricker8920
      @bobfricker8920 Год назад +2

Forgot to also mention that, as Yann indicated, if the knowledge is not on the internet then no AI can or will have it. I don't know how true that is today, but one day, almost certainly, AI will be able to postulate and create. If the creator/programmer of that AI's "purpose" has no concern for the future of humanity, our species (and others) could be in dire peril. The steep curve of Tristan's example of exponential gains in AI learning speed indicates there is a point of no return on the way to this existential threat.

    • @RandomNooby
      @RandomNooby Год назад

@@bobfricker8920 It is not true; it can be asked to hypothesise...

  • @Praveenfeymen
    @Praveenfeymen Год назад +8

    "The only way to stop a bad guy with an Al is a good guy with an Al"😮

    • @shannonbarber6161
      @shannonbarber6161 6 месяцев назад

"AI, review this code-base and produce a patchset that fixes all of the security flaws, for me to review."
The alternative is elitism: government-selected Haves and Have-Nots.

  • @BOORCHESS
    @BOORCHESS 9 месяцев назад +2

What people are failing to mention is that the content AI is trained on is the sum total of the internet, in many cases our own data. There needs to be an internet bill of rights that guarantees that we, the users and the source of the data, are indeed its beneficiaries. AI is nothing more than a sophisticated search engine modeled after the human process. Furthermore, we are tracked, traced, and databased to feed this machine. Pay us our share.

  • @boredludologist
    @boredludologist Год назад +4

    Let the autoregressive-model-bashing by Yann LeCun begin!

    • @IronZk
      @IronZk Год назад +3

      Autoregressive can't plan...

    • @boredludologist
      @boredludologist Год назад +1

      No disagreements on that... And that's not the only shortcoming either! We may get a reminder of the "Reversal curse" of these models as well.
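The shortcoming both replies point at, that autoregressive decoding commits to one token at a time based only on local continuation probabilities with no lookahead or revision, can be sketched with a toy bigram "model". The table of probabilities below is invented and stands in for a real LLM's next-token distribution.

```python
# Toy greedy autoregressive loop: always take the locally most probable
# next token, never revising earlier choices. The bigram probabilities
# are made up; a real LLM conditions on the whole prefix, but the
# one-step-at-a-time commitment is the same.

BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "dog": {"barked": 1.0},
    "sat": {"down": 1.0},
}

def generate(token, steps=3):
    """Greedy next-token decoding starting from a seed token."""
    out = [token]
    for _ in range(steps):
        options = BIGRAMS.get(token)
        if not options:
            break  # no known continuation: stop
        token = max(options, key=options.get)
        out.append(token)
    return out

print(generate("the"))  # ['the', 'cat', 'sat', 'down']
```

Nothing in this loop evaluates where the sequence is heading, which is the sense in which plain autoregressive decoding "can't plan".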

  • @XShollaj
    @XShollaj Год назад +1

    While I'm mainly on Yann camp, I quite enjoyed Tristan's view.

  • @abrahammateosgallego550
    @abrahammateosgallego550 Год назад +4

    Thank you very much for your classes and all your science difusion work, Mr Brian Greene 👍

  • @mindfitmentor
    @mindfitmentor 8 месяцев назад

This was a brain bender but hugely helpful in better understanding the current state of AI and where it is predicted to go. All panelists would naturally hold strongly to their beliefs and perspectives; otherwise they lose the meaning in their work. I personally enjoy being a recipient of content such as this, as it enables me to be better informed in my own choices right now and to give thought to my future self and how I will fit in around advancing AI.

  • @kunalbansal1927
    @kunalbansal1927 Год назад +6

I think it is important for people to really start thinking about what exactly AI is and what statistical models are. People KEEP using "AI" to refer to statistical models. AI currently refers to generative transformer models, not the statistical recommenders that social media runs. It gives AI a really bad name.

  • @pygmalionsrobot1896
    @pygmalionsrobot1896 Год назад +1

Yann LeCun, at approx 1:40:00, is correct. It is impossible to prevent any technology from being abused by someone, at some point in the future. This has been true of every single technology throughout history. However, the Good Guys should always outnumber the Bad Guys. If the good guys outnumber the bad guys, then we'll survive it.

  • @cadahinden4673
    @cadahinden4673 Год назад +4

One of the best discussions on AI, thank you all!
I think the risks mostly depend on the business model used in the future, and this, rather than the technology itself, should be regulated. Much more important at present is a ban on the social-media business models that depend on targeted ads using big data and personal profiling, as well as on algorithms aimed at promoting their prolonged use.
More intelligence is always better than too much stupidity and ignorance, so let AI run and regulate social media first!

    • @crowlsyong
      @crowlsyong Год назад +1

      You must not have seen many talks about AI…this is not “one of the best”

  • @BoonOrBust
    @BoonOrBust Год назад

A question on this: if our aircraft can be manipulated by ALIENS or UFOs, what are the chances these would have the same effect? I can see it now: factory assembled, mass amounts turned on, and then what?
30:45

  • @rocketman475
    @rocketman475 Год назад +12

    Yann is correct.
    Tristan's idea to grant control of AI to a few large companies will result in the creation of the nightmare scenario that Tristan wishes to avoid.

    • @chrisl4338
      @chrisl4338 Год назад +3

Absolutely. Tristan's views parallel those of the Luddites, which could be characterised as "change is scary, let's not go there." Albeit Tristan's ability to articulate those fears is impressive. As for his proposition that control of AI should be the preserve of corporate entities, now that is scary.

    • @ItsWesSmithYo
      @ItsWesSmithYo Год назад

      Free market won’t let that happen 🤙🏽

    • @rocketman475
      @rocketman475 Год назад

      @@ItsWesSmithYo
      Yes, that's right, but what if the free market is being interfered with?

    • @ItsWesSmithYo
      @ItsWesSmithYo Год назад

      @@rocketman475 personally never seen it not correct. Someone always finds the hole and opportunity, point of the free market.

  • @realAIDOPE
    @realAIDOPE Год назад

    Great info. Thanks for the DOPE insight World Science Festival! 👏

  • @sombh1971
    @sombh1971 Год назад +6

36:38 I think a world model would be best achieved by making robots figure out stuff like a child does, or in other words by embodying the AI in a humanoid robot. And I don't know exactly how long a shot that is at present.
49:42 Wow! Now that's really something! On the other hand, what are the chances that something like this actually existed on the Internet? In any case, even if it did, the power to scour the internet's bowels is really something to be reckoned with; realise that that's where its true utility lies.
50:59 OK, so I didn't realise this was done, and that makes it even more impressive, unless it already existed somewhere.

    • @keep-ukraine-free
      @keep-ukraine-free Год назад +1

You mentioned AI in a humanoid robot. These already exist. Google, xAI, and others have systems that are working and getting better. Earlier, Boston Dynamics didn't use neural networks/machine learning, but now they're adding it to their sophisticated robots. Neural-net based AI can use any symbols/tokens (in "any" space, including motion-space). So these robots do very similar things to LLMs, except that instead of language-tokens ("words"), they use movement-tokens (location, rotation, velocity, acceleration, balance, force, etc.) strung together into "sentences" to produce distinct physical movements (like "dance moves"). By stringing movements together, they can create nearly any movement. This gave SpaceX's rockets their ability to steer and land, not using arms and elbows but using thrusters and fins.
Robotic AI (neural-net based) has been learning what LeCun said (that when we push a bottle or table, what do we expect). Most of these systems also have vision. So I'm surprised LeCun doesn't know these AI areas exist and how far they've advanced.
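The "movement tokens" idea in this comment can be sketched in a few lines: discrete motion primitives strung into a sequence, the way an LLM strings word tokens. The primitives and their effects below are invented for illustration; real systems use far richer continuous representations.

```python
# Toy sketch of movement-tokens: each token is a discrete motion
# primitive, and a "sentence" of tokens composes into a full movement.
# Primitive names and their (dx, dy) effects are made up.

PRIMITIVES = {
    "step_fwd":   (0, 1),
    "step_back":  (0, -1),
    "step_left":  (-1, 0),
    "step_right": (1, 0),
}

def execute(sequence, start=(0, 0)):
    """Apply a 'sentence' of movement tokens and return the final pose."""
    x, y = start
    for token in sequence:
        dx, dy = PRIMITIVES[token]
        x, y = x + dx, y + dy
    return (x, y)

print(execute(["step_fwd", "step_fwd", "step_right"]))  # (1, 2)
```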

  • @FrankJohnson-d5v
    @FrankJohnson-d5v Год назад

    That was an incredible show/discussion! 👍👏

  • @ronpaulrevered
    @ronpaulrevered Год назад +6

Predicting unintended consequences is a contradiction in terms. Whoever lobbies for regulation of A.I. seeks regulatory capture, that is, being able to afford legal compliance and lobbying when your competitors can't afford to.

  • @anurag01a
    @anurag01a Год назад +3

    Brian: A cool moderator🤩
    Tristan: Scared face & voice😰
    Sebastian: Pleasant & +ve😊
    Yann LeCun: Don't care 😤😏