This is a Q&A excerpt on the topic of AI from a lecture by Richard Feynman from September 26th, 1985. I found it very interesting and hope you do as well. Watch the full lecture in the description. Subscribe to this channel for more clips.
I’m gonna have to watch this again so I can go back and count how many times he tried to stick his glasses in a pocket that wasn’t there. The trouble with T-shirts
Thank you so much for uploading this... amazing to see him not only explain ML to a lay audience so clearly and accurately, but also summarize it so beautifully, with wisdom...
Then again, in a different video he speaks about pseudoscience and non-verifiable statements. Machine Learning has A LOT of that. That part, I'm certain, he would not like.
Not sure he would have loved it. ML finds lots of correlations between things but can't explain them, and sometimes the connections are not even related, just strangely correlated. It can give hints at things, but it can't explain anything without human judgement. Seeing that something has a coincidental relation, without explanation, isn't really science.
@@OffTheBeatenPath_ yes but they didn’t have the speed and computing power we have now. Computers were simply too slow back then to see the benefits which we are now discovering.
Given the current existence of machine learning & deep learning in the field of AI, hearing him talk about pattern recognition around the 5 minute mark is fascinating. We can do that now. We can do that really, really, *really* well now. I bet he would've absolutely loved seeing convolutional neural networks and all of that.
@Michael Lubin Yeah, but don't we as humans do pattern recognition ourselves as well? AI/neural networks are just able to run a much, MUCH larger number of computations simultaneously, and draw their concepts/conclusions much more quickly than the human brain in its current state allows. Isn't the human brain slowly built up over a person's lifetime in the same manner that a neural network is built via a machine learning model? I mean, isn't the whole of a human's experience basically logged and framed in our brains as what is essentially some form of logic tree or SQL database or something to that effect? It's almost like the only difference between a human and an AI/neural network is the hardware itself, coupled with the underlying network architecture that is being built on said hardware over time.
We need teachers with spines. He defines things clearly and in a no-nonsense way, which is pretty much the antithesis of the level of discourse in 2020. Everything has to be obfuscated and twisted to fit political narratives now, and anyone that asks questions is a heretic to be burned at the stake.
And yet if parents taught their children even more than they do today to be decent human beings, the progress in one generation would be greater than 10,000 Feynmans could achieve in a hundred generations. Or maybe not. Richard would know the answer :D
The way he repeatedly used the word "present" when describing the computers of his time makes me think that he was smart enough to predict that in the future there may be people watching this whose computers can do, with ease, some of the things he said are difficult.
I suppose even you, 2 years ago when you wrote this comment, could not relate to the exponential growth of tech... I guess you could not have imagined what ChatGPT does today... 2024.
This man really was blessed with the whole package: one of the most extraordinary minds of the last centuries, but at the same time the necessary wit, charisma and didactic genius to very entertainingly convey any topic of any complexity to any kind of audience. There are no words for how much I worship Richard Feynman.
Can you imagine what he would think today if asked the same question? He hinted at facial recognition and fingerprint comparison, which back then were considered nearly impossible; today these are some of the simpler things that AI does, and much better than humans.
it wasn't considered "impossible". He said it himself - it just takes too long with the computational capacity and memory we have at a time. Human can do this faster. Therefore teaching a machine to do it would be impractical. And he later said the same thing about weather prediction (not much different from facial recognition conceptually) - right now machines are slow; but will probably get a lot faster and will be able to account for more parameters, as technology evolves. This is where we are now today. We have increased our capacity, and we have the algorithms. As a result we see a rise of AI in many fields.
Poor Prof. Feynman didn't know then that facial recognition AI software would be a reality three decades after this 1985 lecture. His greatest skill, beyond being an exceptional scientist then and now, is that he was incredibly imaginative and a damn good communicator. If you have read his book on quantum electrodynamics (QED) with Feynman diagrams... he made it so simple to understand, even for an average high school kid. I was so impressed that I got hold of his original 1964 Caltech lecture notes in physics... it was not easy, as I am from 🇲🇾-Malaysia!!!😅
Incredible that this was filmed 40 years ago, and he got just about everything right. It basically tells us that the fundamental computational theory is still viable in terms of what machines can and cannot do.
Tyson is nowhere near as funny as he thinks he is. He has a seemingly inexhaustible supply of common misconceptions which he laboriously deflates well all the while marvelling at his own wit. He doesn't really encourage free thinking by the audience, who usually know where he is going a couple sentences (and a number of his chuckles) before he gets there. I like the man, but he is hard to listen to after a while.
One of the greatest minds. On his physics admission exam to Princeton, he not only got the highest score ever at the time, but one of the professors commented that he should be teaching instead.
In fact, generative models like GANs can be thought of as doing some sort of "thinking" through backpropagation: the discriminator and the generator force each other to think in a certain way.
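For anyone curious what that adversarial back-and-forth looks like concretely, here is a minimal, hypothetical sketch of a GAN training loop in PyTorch (my own toy example, not something from the lecture or the comment above): a generator learns to imitate a 1-D Gaussian while a discriminator tries to tell real samples from fakes, and each network is updated against the other's current behaviour.

```python
# Toy GAN sketch (illustrative assumption, not a reference implementation).
# Requires PyTorch; the data, sizes and learning rates are arbitrary.
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))               # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()) # sample -> P(real)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 2.0 + 3.0   # "real" data drawn from N(3, 2)
    fake = G(torch.randn(64, 8))            # generator's current forgeries

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(fake.mean().item(), fake.std().item())  # should drift toward roughly 3 and 2
```

The "forcing each other" is visible in the two losses: each network's gradient signal is computed through the other network's current output.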
A lot has happened since 1985, which none of us could have imagined, even Feynman. When I was asked back then whether there could ever be a machine capable of conscious thought, my answer was yes, because it already exists - us.
I'm a deep learning algorithm developer, and it's amazing to hear Richard Feynman discussing all of this so long ago. We've made great progress since then, but the problems and challenges are very nicely explained by Feynman!
The clip ended with a beautiful thought by Feynman: "I think we are getting close to intelligent machines, but they are showing the necessary weaknesses of intelligence."
The problem he is referring to about heuristic #693 is present in today's reinforcement learning systems as well! That raises the question: how far have we really come in the quest for intelligence, and how much of the progress we see today depends on just having more computing power and better pattern matching than 60 years ago?
Most pattern-matching algorithms today (deep NNs, for example) existed back in 1985 but were abandoned as ineffective because the computing power of the time wasn't sufficient. The rise of neural nets today is more of a resurgence than a new discovery.
In terms of procedures, no progress has been made; our progress has been only in terms of computing power. The 20th century gave birth to the smartest people in human history.
Yeah, I was about to say: isn't heuristic weighting and selection precisely what we call machine learning? So in fact Feynman, back then, had enough understanding of computer science to formulate and answer these questions with the same skill as we would today. The problem today, it seems to me, is that we have become experts at using distracting jargon and getting awestruck by big numbers.
Also, heuristic #693 reminded me of over-complete autoencoders, where the input simply gets copied to some of the hidden units in the hidden layer (when no noise is added to the inputs) and the autoencoder avoids learning anything but the identity function.
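To make that failure mode concrete, here is a small hypothetical sketch (my own toy example, not anything from the thread): an over-complete linear autoencoder trained on clean inputs can settle for a map that is essentially the identity, which is exactly why denoising autoencoders corrupt the input first.

```python
# Toy autoencoder sketch (illustrative assumption, not a reference implementation).
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(256, 4)                        # 4-D data, 256 samples

# Over-complete: 4 inputs -> 16 hidden units -> 4 outputs
model = nn.Sequential(nn.Linear(4, 16), nn.Linear(16, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

add_noise = False                              # flip to True for a denoising autoencoder
for _ in range(1000):
    inp = x + 0.3 * torch.randn_like(x) if add_noise else x
    loss = ((model(inp) - x) ** 2).mean()      # reconstruct the clean data
    opt.zero_grad(); loss.backward(); opt.step()

# With clean inputs, the composite weight matrix drifts toward the identity,
# i.e. the network has learned to copy rather than to model the data.
W = (model[1].weight @ model[0].weight).detach()
print("distance from 4x4 identity:", torch.dist(W, torch.eye(4)).item())
```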
His point about human intelligence at the beginning is excellent: humans are not the most intelligent possible things, and there isn't much point in trying to imitate human intelligence, any more than there is a point to building a mechanical cheetah. You can think of things like the YouTube algorithm as primitive intelligences with completely different senses than humans have, which do things practically impossible for humans. They live in a very different world. This seems to totally disagree with Ray Kurzweil, who seems to think that if you make a computer big enough, human emotions will spontaneously emerge. But human emotions are the result of human evolution, so why should they emerge from a machine built in a lab?
AI cannot be a new species unless it is programmed to be. But I have doubts about that programming possibility in the real world, outside of any simulation. The reason for my doubt is well known: silicon-based life is not possible in the universe. As for that Feynman "lero lero" (a Brazilian expression) about the mechanical cheetah etc., it reminds me of a long-ago dinner at MIT where Asimov, Searle and Minsky hastily discussed future trends in AI. Minsky nearly got into an argument with Asimov over the Asimovian perspective on the central aspect of (future) AI. It was a memorable dinner that could have been titled "Could General Intelligence Be a Mechanical Cheetah, or Whatever?" Searle is still alive. He knows why that title! LoL
Pardon me if I think I can infer from your comment that it's necessary to define "think", to define "intelligence", before making any comparisons. It was an observation I had as soon as I saw the title of this video: "How can we say that something thinks, without first defining what thinking is?"
Kurzweil doesn't talk about human emotion, just emotion. Many animals with brains, possibly all animals with brains, have emotion. Emotion might be a result of biological brain evolution, or it might be a fundamental component of any optimal solution for general intelligence.
@@theBaron0530 That was an interesting "thought"... How could you possibly define something that is 'first order' when it literally defines the 'second order' problem space of linguistics you are referring to? It's like the 'simulation' in its totality trying to comprehensively define itself when it lacks the means to 'look in' from outside itself, because if it had the means to 'see' from outside, then it would be "more than simulation". A system space cannot define itself using only itself. To think about thinking is a second-order operation; unless you have the ability to 'step outside' your thinking, or have something else to reference off, you cannot define 'thought'. And if you did find something else to "reference off" (sorry about my shitty terminology), then that information is now a product of "your" thinking, and therefore unable to define the totality of the system. I dunno... maybe (??)
The mental beauty of Feynman is that he can weave irony into every step of his delivery, which makes it entertaining and, at the same time, a kind of genius that nobody else equals. What a wonderful human being he was. His NY accent helps in seemingly creating a stage for stand-up, which is in fact what he is doing here... an improvised stand-up routine. I would say he will be missed, but these moments in time are captured for eternity for future mentalists. Gr8! Peace ☮💜Love
1. I'm a fan of Feynman and have read most of his books. 2. I think he would be amazed to see how far computer learning has come. The idea of a computer changing its own code was becoming feasible when he gave this lecture. 3. I really wish someone had asked him his thoughts about HAL9000 from the movie 2001 A Space Odyssey.
Computer learning is still just mashing database query results together into various outputs that still may or may NOT be the results we're after.. a computer still has absolutely no way of telling what data is important in the real world.
They're not close to building a cheetah-speed robot with legs that is multi-functioning. I don't think they've broken 20 miles an hour. They're maybe at 12 miles an hour. You're misunderstanding the difficulty.
A great video. It shows that when trying to get an intelligent answer/behavior from a machine, it is always critical to see how you present a real world and its problems to it. And, of course, given more resources machines will start exploiting all the loopholes you leave them.
The machine will never know the real world - the weakest link will always be the human operators who will always be feeding it biased, incomplete data... AI is a myth. Computer programs are getting better at what we would like them to do, but "AI" is just a buzzword.
What a brilliant remark and/or observation. _"I think that we are getting close to intelligent machines but they're showing the necessary weaknesses of intelligence."_
Feynman talked about our inability to build a definite procedure that can “recognize things”… Well that is precisely what the so-called deep learning versions of supervised learning do even better today than humans (i.e., more systematically). Machine learning transcended the whole idea of procedural programming. Nevertheless, it’s amusing how the “heuristics” he mentioned sound like weights in a neural network. Avoiding the collapse or the divergence of these weights became the challenge. Also, we can concur: AI shouldn’t be about writing poems but having a machine want to write a poem.
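Reading "heuristics" as adjustable weights can be made literal with a toy example. The sketch below is my own loose illustration of that analogy (purely hypothetical, not Feynman's program and not anyone's reference implementation): each "heuristic" is a feature with a weight, learning nudges the weights after every mistake, and a crude norm cap stands in for the kind of guard that keeps weights from collapsing or diverging.

```python
# Perceptron-style sketch of "heuristics as weights" (illustrative assumption).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # 5 "heuristic" scores per example
true_w = np.array([1.5, -2.0, 0.5, 0.0, 3.0])
y = np.sign(X @ true_w)                       # labels generated by a hidden rule

w = np.zeros(5)                               # one weight per heuristic
for epoch in range(20):
    for xi, yi in zip(X, y):
        if yi * (xi @ w) <= 0:                # wrong (or undecided) answer
            w += yi * xi                      # nudge the responsible heuristics
        norm = np.linalg.norm(w)
        if norm > 10.0:                       # crude guard against divergence
            w *= 10.0 / norm

accuracy = np.mean(np.sign(X @ w) == y)
print("learned weights:", np.round(w, 2), "accuracy:", accuracy)
```

The point of the toy is only the shape of the mechanism: numbers attached to rules, adjusted by feedback, which is what makes the 1985 "heuristics" discussion sound so modern.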
Very interesting. Almost secretly and philosophically he is standing up for a few ideas:
1) There is no single, defined general intelligence, but different intelligences.
2) The raw materials (carbon vs. silicon, etc.) can define the type of intelligence per se (a common belief in the '60s/'70s/'80s).
3) Heuristics have a problem: the best they can do for intelligence design is performance-oriented, context-limited (and self-limited) learning, obviously aimed at contextual performance. Therefore a machine cannot fully discover valid abductive inferences, due to the lack of an intrinsic cognitive capacity for recontextualisation.
Basically he was arguing for the fundamental concepts of MIT at that moment of the American (Anglo-Saxon) history of science. Sorry for my English. I also like and admire Richard Feynman's content and style.
Dude your English is in the top percentile, most natives would have serious trouble understanding what the fuck any of that meant lol. Besides that, yeah that is generally applicable to what he was saying.
@@GarrettX001 Thank you for your kindness. I am a Brazilian from a mix of different bloods: German, Danish (strong), Oriental Jewish (strong), Portuguese, etc. (all mixed), and despite the different origins we basically only speak Portuguese. Recently I was really cogitating (well, it is Latin: freely thinking about) making a bilingual YouTube science channel in this country with Anglo-Saxon (native English-speaking, American, British) partners, but the fact is that my English is poor. I am still studying a lot. A really good channel here in the Portuguese language is very difficult, but it is not impossible. There are, say, no more than reasonably good channels here. Maybe I could produce a kind of interview-format channel (audio only, with a slide show) featuring bright English-speaking scientists from English-speaking countries, like what happens on Event Horizon, I would guess. It is hard. Google Translate doesn't solve many barriers, for instance in math (yes, math is also spoken during lectures). Google also doesn't have equation search, say, using LaTeX. These kinds of educational handicaps, and other more common ones (like good local infrastructure), are harder in countries like Brazil.
You are being very helpful and very useful when you upload clips like this. I hope you realize the role you are playing and the benefit you are creating.
One thing that is wonderful and worth noting, in my view, is how often and how well he uses QUALIFIERS. Certain things are difficult but not impossible. Understanding why and how things are difficult shows an understanding of the problems being faced and therefore what is required to answer the problems.
I saw this as a university student. I just watched it again at 54 years old. He was dead on, but the advances in AI since then are quite evident from his comments, particularly when he describes the storage capacity and speed of the computers of his day.
Tq for uploading this sad lex😁
Appreciate it.
@Vendicar Kahn unfortunately I don't know but am interested so will follow this thread
Chomsky says that's like asking if submarines swim. You wanna call that swimming? Fine.
Very appreciated, tx.
So strange to watch a (science) fiction ... from the future.
Heh, I saw a Feynman video: an immediate flytrap for my mind, and as an added bonus it's you posting it. Thank you. Love your work. Have you looked into the impact of autocorrecting software for text, and how it may actually change the mind of the typist to better suit the corrector? If a slow-moving object is suddenly closer, are you sure it was not like that maybe a month ago? Or what was the median of language before autocorrect? Is it better to choose a suggested word or spend more time manually typing your own "words"? I think humanity is misstepping in this obscure observation: autocorrected out of the correct voice that is uniquely you. And eventually the whole of all yous. We, or us if you like. I did not expect to be concerned about this. But I am.
I live in Bangladesh, and because of the crappy education system here I'm stuck studying business studies. I didn't have physics or chemistry at school level, but I love physics, and Feynman has been a big part of that. His lectures on physics have been a great respite from my pointless and ultimately futile existence. I left my job to study physics by myself and have gotten derailed. But every time I listen to this man talk, I am inspired to pick up a physics or a math book and bang my head against that wall as hard as I can.
I hope someday I get to be a physicist of any caliber, even if it means I have to starve to death. Thank you, Mr Feynman, for being the light I wish to touch someday.
From Brazil, the other side of the world, just passing by to say that I am cheering and hoping you make it!
@@Pedro14ceara Thank you for the kind words
I hope you're still at it, brother. Passion at this level never goes to waste.
If you keep doing what you love, and that progressively improves your abilities, persistence will eventually make you better than the conventional physicist. First, develop the discipline to improve step by step. Then step by step towards your dreams. This comment is two years old to me. How are things going?
Dude, don't give up! 👍😎👍
The fact that I have access to this man's lectures and interviews is something I am truly grateful for.
YouTube: you're welcome.
@@stinger4712 Thanks YouTube, very intelligent of you.
not intelligent yet
@@diegoptl Yes
It is amazing that, with two questions in 1985, he explained how today's AI (ChatGPT and others) works, and also its weaknesses. Today, we need him more than anyone else.
More than anyone else?
@@99Gara99 Who do you need, Jesus?
There are people like him, or better than him, in top institutions. People usually don't recognise them early on, because the latest scientific achievements take time to become understandable and then to be written into books so that schools and institutions can start teaching them. In 50 years we will be teaching physics in high school based on Einstein's equations, and then time dilation will be obvious to young people. Richard Feynman's work on QED will take some more time to reach that level, and so on. Then he will be as common as Newton 😂
I love Richard Feynman, but he is sadly wrong on many topics. I wanted to test his arguments, so I gave ChatGPT a photo of the back of Brad Pitt's head. It guessed that it was Brad Pitt. Lol, I wasn't sure myself whether it was Brad Pitt or not.
The thing that consistently blows me away, every time I hear him discuss something, is not necessarily his opinions on the matter, or his logic, but how he structures a response. How he makes a case.
Yeah, although he says a computer can remember 50,000 numbers and recite them back, there are specific devices for a computer to do that, and these work in specific ways. E.g. some computer memory isn't really memory at all: it has to be constantly refreshed to keep its contents, and if the power is lost, so are its contents. But we do have other levels of storage, so a computer can remember things for longer periods. Asking a computer to reverse a list or return alternate entries requires a specific algorithm to be coded by a human being. Really, what the audience member is asking is whether a computer will ever be able to answer a question like "give me these numbers back in reverse order, every other one" and do that task without a specific algorithm being given to it, i.e. where the algorithm is what is being termed "artificial general intelligence" rather than a sorting algorithm or a spreadsheet or whatever. Can we create a machine that has intelligence and that can learn? At that point the "memory" of AI isn't necessarily as simple as the memory used by a program that lets you input numbers and then displays them.
Specifically, one of the flaws of ChatGPT was that it didn't remember bits of your conversation and context from a few seconds earlier. That has improved, but it's not really the same idea as RAM or disk storage.
A human given a pen and a piece of paper could write down the numbers and recite them back to him. So he hasn't really made a good argument that computers remember more than us, although the argument that the very nature of a computer "remembering" numbers is so fundamentally different from how we work means his point, that computers will never think like people, still stands.
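A minimal sketch of the distinction the comment is drawing (my own illustration; the function name and data are made up): once a human writes the explicit algorithm, "reverse order, every other one" is a one-liner, and nothing about it involves the machine understanding the request.

```python
# Illustrative only: the human supplies the algorithm; the machine just runs it.
def reverse_every_other(numbers):
    """Return every other element of the list, starting from the end."""
    return numbers[::-1][::2]  # reverse, then take every second element

if __name__ == "__main__":
    remembered = [10, 20, 30, 40, 50, 60]
    print(reverse_every_other(remembered))  # -> [60, 40, 20]
```

The open question in the thread is whether a machine could handle a request like that stated in plain language, without anyone having written `reverse_every_other` for it in advance.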
@Mackinstyle Agreed. He just completely gets the idea that if he is going to communicate *anything* to anybody he is going to start from common ground: a common understanding of terms and their definitions, commitment to a shared objective (usually solving a problem or answering a question or teaching/learning something).
So he asks questions.
"The necessary weaknesses of intelligence."
Even his throw-away observations and quips can be pure timeless genius.
Thanks for sharing.
He had no idea or experience of what intelligence is, but perhaps he does now; intelligence is to men (human beings) what flying is to bricks: they cannot experience it as they are, asleep, or just dreamers.
Indeed, I've worked with AI and this is a point that I think escapes a lot of people.
Intelligence, by its nature, is intrinsically flawed.
Because to reason at higher and higher levels - which is a characteristic of intelligent thought - we're further and further abstracting away the fine details.
Which is good on the one hand, of course, but it's also potentially bad. Because, as the saying reminds us, often "the Devil is in the details".
Machines give perfect mathematical results. The more and more we make AI "human-like" in intelligence, the more mistakes it's going to make.
And the crucial point is that this is intrinsic to what we're doing, not a failure of hardware or software. But an intrinsic failure of intelligence itself - to detect patterns, I must abstract. Through abstraction, I'm throwing away fine detail.
But, you know: Chaos Theory. Fine detail is oftentimes crucial to accurate prediction and results.
When the machine is asked to do maths, then it does so perfectly.
But when the machine is asked to cast a value judgement over some patterns it's detected in inherently ambiguous language to predict the course of an inherently "fuzzy" real world out there... it'll start making mistakes. It will not be perfect anymore.
Because we see it as our great unique ability (and we love to flatter ourselves), humans often miss these subtleties of how our intelligence is a trade-off.
"To err is to be human", as the saying goes.
Well, I'd revise it to "to err is to be intelligent" and we must expect that the machines, in increasing their intelligence, will become... less trustworthy and reliable in their results.
Don't get me wrong. Still incredibly valuable and to be pursued, and will be pursued to good and great effect.
But just, you know, "manage your expectations".
@@klaxoncow I think he meant it as a joke about us recognizing computers showing intelligence by means of human laziness, thanks to our design: intelligence trying to use itself to scheme a more efficient or "lazy" way to do or not do something. Or as he put it, "If you want to create an intelligent machine you're going to get all kinds of crazy ways of avoiding labour." The weakness is our schemes to avoid work, and its necessity is our relief. Side note: saying intelligence is intrinsically flawed seems like a gigantic, philosophically arbitrary statement.
@@klaxoncow A dumb person isn't "more human" than a smart one. The human-likeness of an AI isn't measured by its mistakes.
@@jgunther3398 Yeah, I said nothing like what you're trying to abstract it down to, and I never made any reference to anyone or anything being "dumb" or "smart" whatsoever.
But I thank you for providing demonstrative proof of what I was saying.
Your intelligence has abstracted what you think I said and, unfortunately, lost much of the crucial "fine detail" in actually comprehending what I was really saying.
Which has led you to error, in characterising my position with a strawman.
And, no, that's not because you're "dumb", it's actually because you have intelligence. An ability to abstract, summarise, pull the wheat from the chaff, etc.
I mean, in this case, I'd contend you've actually done it incorrectly. But it is a characteristic of intelligence itself that you could do it at all.
As AI becomes more human-like, expect it to start failing similar cognitive hurdles as well.
Richard Feynman was born in a world where horses were still the most common mode of transportation in cities, and here he is, talking about AI concepts we are still struggling to apply today. Also, he was one of the greatest theoretical physicists in history.
Err... cars were already common at that time. And many AI concepts were old; they are only popular now because recent tech is powerful enough to implement those theories, and partially due to marketing and buzzwords (Machine Learning, AI Pro Plus xxx phone, AI, Blockchain) that make people think these are somehow new ideas.
@@steveroger4570 There is actual footage of New York in the year he was born. If you are going to argue with that, I don't want to waste time refuting the rest of your comment.
Born: Richard Phillips Feynman, May 11, 1918, New York City, U.S.
zenmeister451 I see a few horse carriages in the photo you linked. The Model T's mass production began only a few years prior to the year of his birth. So, yes, I would say the horse and carriage was still prevalent at the time, regardless of the photo you showed. The mid-'20s, I think, would be when cars took over from horse carriages.
@@rpfour4 I never said that horses were not prevalent. That was not the point. I also said that in rural areas and other towns horses were still being used. I was just showing how prevalent cars were in New York at the time. The pic I sent shows a virtual sea of cars.
11:33 The way the audience reacted when he told them he doesn't have time to tell them more is priceless. It marks the difference between the vast majority of teachers and the ones that soak their students with... "the pleasure of finding things out". Too bad we don't hear them often in class. Great man, great educator. Beautiful lecture.
@SteppenWolff100 Correct!
@SteppenWolff100 The interesting question, though, is why "most high school and college students" don't see "finding things out" as a pleasure. Are there really kids who are more curious than others? Maybe, but I'm sure almost everyone has something they are curious about. If, for example, you put one of the best artists of this time in front of the very same audience, would they listen to him with the same interest as to Feynman? I would like to believe that as soon as someone finds their passion, they are just as involved in learning new things as those students are in their respective field.
@SteppenWolff100 interesting insight from across the world. Thanks.
@Karan K why? Religion teaches nothing of how the world works.
@Karan K That's very nice. Except that none of them do. So there is that. But hey, why bother with facts when you have alternative facts?
It is astonishing to me that this was off the cuff and 40 years ago, yet Feynman's comments are unbelievably prescient and still resonate with any AI researcher today.
We could also interpret that as showing how little of our own effort we put into human evolution.
E.g. when someone says: "This calamity is gonna happen in 40 years" and in 40 years that is what is happening, you could say that's an amazing prophecy, but you could also say that's a shameful example of humankind's folly.
A brilliant thinker
@@Dowlphin Capitalism
The man took an encore in a lecture. Extreme charisma and fundamental knowledge of so many different concepts and fields. A true polymath.
1985 and he was already intimately aware of the alignment problem in A.I. Every time there is a new breakthrough, I always go back to Feynman's lectures and realize he had been saying it all along.
Back in his time he had already thought about perverse instantiation. That's crazy.
Last genius
@@_yiannis Pretty sure everyone here knows who Alan Turing is lol, but yes, he broached this subject as well
people, still today, are poo pooing on AI, saying it will never do this and never do that. They never learn XD
@@generichuman_ I'm pretty sure @Yiannis has heard of something called sarcasm.
the way he thinks and explains things makes it so compelling to listen to. almost like he's telling a story. such a legend
There's actually a technique of explaining named after him ... "The Feynman Technique".
The Feynman Technique is a method of learning or studying that was famously used by physicist Richard Feynman. Known for his ability to explain complex topics in simple, intuitive ways, Feynman created a method for learning that involves four basic steps:
1. **Choose a Concept**: Choose the concept or topic you want to understand and start studying it. Once you know what it is about, take a piece of paper and write the name of the concept at the top of the page.
2. **Teach it to a Child**: Write out an explanation of the concept on your page as if you were teaching it to a child. Not just any child, but a child who is old enough to understand basic terms and relationships, but is still a beginner in terms of the topic. Use simple language and avoid jargon. Make sure your explanation is so simple that even a child can understand it.
3. **Identify Gaps and Go Back to The Source Material**: When you pinpoint the areas where you struggle (where you forgot something important, weren't able to explain it, or simply have a shaky understanding), go back to the source material and re-learn it until you have a basic understanding.
4. **Review and Simplify (Optional)**: If you followed the first three steps and are able to explain the concept in simple terms, you’re done. If you want to be sure of your understanding, you can try to simplify your explanation even more or try to explain it to an actual child or a peer.
The Feynman Technique exploits the fact that teaching is one of the most powerful ways to learn and solidify your understanding of a concept. By pretending to teach the concept to someone else, you can identify gaps in your understanding. And by simplifying the concept to the level of a child, you're forced to really understand the concept at a deeper level.
His way of looking at the universe and life is beautiful.
He said that if you can't explain a concept or idea simply you don't understand it. Isn't this a problem with many college and university professors?
Very high charisma to go with that knowledge
@@ziff_1 Interesting use of ChatGPT for this summary.
Love Feynman.
Great to know you watch Lex Fridman's channels.
Fancy seeing you here.
THIS MATTERS🤣🤣
Not surprised to find you here as well bro!
You're everywhere on my YouTube algo, dude.
I read Feynman's book, and his genius is apparent on every page; my biggest takeaway was that he never let his curiosity fade his entire life.
He wrote several books!
And played the bongos
I read "Surely you must be joking, Mr. Feynman". What a life!! His physics lecture series is worth more than gold. I actually like his New York accent!!
Hey, I’m doing physics here!
Feynman was not only a great physicist and thinker in general, but also a showman. There's art in it. The closest comparison, to me, is a stand-up comedian. But he was not only telling jokes; he was presenting complicated ideas in a simple way.
You know the age-old question: "If you could bring back someone from the past for a day to have dinner with?" Feynman is one of my answers. His ability to bring complex concepts into an easily understood analogy is a skill I envy. What a beautiful mind.
Minds like his are the ones that should be kept in the jars from Futurama.
@Bob You gotta stop being such a sports guy, man.
You’re assuming he’d want dinner with you. 😉
Steve Jobs isn't even in the same realm as Feynman.
@@mattjames4978 yes he would have dinner, he just couldn't stop talking
Great clip. Richard Feynman's work on the Challenger disaster and his criticism of the US educational system are important parts of his public work. He was also part of the Manhattan Project and has some interesting thoughts about that. I wish we had more people like him around today. Of course we stand in awe of his work on QED.
We likely have many people like that today, but the centralization of power of the US-capitalist-imperialist domination over global affairs conditions societal organization into fixating on fewer and fewer individuals, in part as an expression of fear-driven scarcity imposed, and so those few 'preachers' might still be an expression of the problem.
If you want more people like that around, you have to realign attention and support onto the many others who are on that level and maybe even beyond because they didn't focus effort on self-promotion. (Selfishness tends to limit holistic intelligence. - Or in simpler terms as my teaching mantra: "Fear makes stupid.")
@Dowlphwin show us on the doll where the US-capitalist-imperialists touched you. You're safe here.
I like the fact that he's brilliant but talks like a 70s NYC cab driver.
That’s when you know someone is really smart. They can speak about complex things in simple language.
I have a Calculus prof at my University who teaches just like that.
Best professor I ever had and damn near aced all 3 of my Calculus courses cause of him.
Having a charismatic professor will literally change your life.
That's part of his charm, with his New York accent.
Born and bred in Brooklyn!
It’s Colin Quinn.
Hearing Wolfram talk about how smart Feynman was and working on quantum computers with him decades ago was crazy fascinating.
An interviewer once asked Claude Shannon (the creator of Information Theory): "Could a machine think?" He replied: "Well, of course! I'm a machine, and I think, don't I?"
The point is that this question has more to do with our definition of "machine" than with any particular assessment of what kinds of systems can possess what kinds of intelligence.
Claude Shannon is one of the most underrated scientists of modern times. At Berkeley, three graduate classes were devoted to Shannon's MIT research on information theory alone.
feynman gave a solid argument that jet engines will never be able to think 🙂
@@jgunther3398 😄😂Yes, exactly! And that, of course, was Shannon's point: If by "machine" we always mean something that has the same level of internal complexity and interactivity with its environment that a jet engine does, then, of course, a "machine" can never think.
"A machine" is not "50 machines."
Magnificent reply here
I guess Feynman would be really happy to know that we've found the paradigm to solve these computer vision tasks he mentioned using deep learning.
Yeah, I was thinking it would be cool to see his reaction to today's tech: vision processing and machine learning and AI. He'd be proud. But he pretty much predicted it all when he said "it's really hard to come up with a problem that computers won't ever solve".
Or maybe he would be worried ;)
They already had found that back when he gave this lecture, it just took longer.
Putting in my one dime on the idea: I think he would have laughed at the idea of using a black box called a "neural network" to find patterns in a way that the person who built it doesn't understand himself. He seems to be the kind of person who likes well-defined things we understand more than the mess that deep learning is right now!
@@bigphatballllz The concept of a neural network was first described mathematically in 1873; Feynman for sure knew what they were.
We gotta admit some humans are gifted and special. This dude was light years ahead of his time.
Reminds me of that scene from I, Robot :
Spooner : "Can a machine write a symphony? Can a machine turn a canvas into a beautiful masterpiece?"
Sonny : "Can you?"
With the right definitions, it can.
@@papesldjnsjkfjsn Ignorance is a bliss
@@kosmic2615 bruh im so sorry i read it "with the right definitions, *I* can"
Symphony? Yes it can: ruclips.net/video/03xMIcYiB80/видео.html
@@thelifeaquatica Is that hot garbage supposed to be the best AI can do?
Just happened to come across this 3 years after the post. Thanks so much. For me, this is a reminder. Brilliant people will always be brilliant, for as long as we have recorded what they said.
CheeTAH
areemeteek
New York accent lol
*DHOTS*
CHEAT AHHH !
I was wondering what this meant until he said it.
17:49 "We are getting close to intelligent machines but they're showing the necessary weaknesses of intelligence" 👍🤖
@@aRedTalonPro cool
@@aRedTalonPro not the case.
@Karan K *You* are nothing but a JOKE.
Because the machine is being programmed by humans, it is only quicker at calculating the aspects it was told to cover. A machine can never invent; only man's innate ingenuity can.
Didn't Steve Jobs say he paid attention to lazy workers because they found the most efficient way to do things?
Never seen Richard Feynman in a T-shirt before
I want that shirt.
Where can i buy that shirt
@@gokurocks9 he was actually the DUDE of all scientists 😎😎😎
This is pretty much how I picture him being all the time. If you haven’t read it yet you should check out his book “Surely You’re Joking, Mr. Feynman”. It’s an autobiographical look at his shenanigans and it’s hilarious and intriguing.
@@BrandonAdamPhotography I am actually a Feynman geek, so I've already read all of those quite inside out. I like Feynman as a person as much as I admire him as a scientist.
Prof. Feynman was one of the most brilliant minds and perhaps the greatest teacher of all time.
Thanks for the video.
Prescient. I am an AI researcher and am marveling at how accurate his assessment was so many years ago!
Provocative thesis: It's nothing special. 😉
Many people have made accurate predictions based on simply understanding the systems of fools.
And that's just the people you know about because they compromised with the system. Imagine what realms of understanding people can reach if they don't make their enlightenment dependent on status quo support.
● The Buddha is revered, and so many people who are very revered to a large degree merely repeat what the Buddha said.
● Few people call Karl Marx a prophet. ... Maybe because he expected people to understand what he said instead of just worship it. But he basically explained what would happen, for certain, inevitably, and it's not that difficult to understand why, but it is hard to overcome a belief system that wants to deny that understanding in order to protect itself.
@@Dowlphin it is
Which pictures feature a bridge?
@@Dowlphin hey man, try to predict what will happen in the future and you'll know how intelligent he is
@@DowlphinGo take a shower you filthy Brony. You aren't smart.
Lex, thank you for bringing me ideas that I would have otherwise never had. You the man
Don't forget to also thank Feynman.
How things have changed. I love this man's mind and his heart, so brilliant.
Richard Feynman was a man who could see the future
He's so thoughtful. "You didn't do that." If I had asked that question and gotten that answer without that "you didn't do that," I would have felt like Richard didn't like me for the rest of my life.
I noticed that too. Kindness in thought and praxis.
This video makes me realize that I've heard him speak before but never in lecture mode like this. I can see why his lectures were so popular.
What a fantastic person he was.
Such a great gift.
He was one of the best, if not the best, teachers in history... I wish I could be in that room, listening to his lectures... What a great man!
How do you do strike-throughs?
I’m gonna have to watch this again so I can go back and count how many times he tried to stick his glasses in a pocket that wasn’t there. The trouble with T-shirts
Wait until you find out about t-shirts with pockets.
@@efisgpr what? is there such a thing???!!!!
@@efisgpr that has to be illegal
I love this man. His joy in explaining things always makes me smile
Thank you so much for uploading this...amazing to see him not only beautifully explain ML to a lay audience so accurately but also summarize it so beautifully with wisdom...
Feynman would've LOVED modern computing had he still been alive today. Machine learning, neural networks, etc.
I wish I could hear him speak on GPT-3
Then again, in a different video he speaks about pseudoscience and non-verifiable statements. Machine Learning has A LOT of that. That part, I'm certain, he would not like.
Neural networks were used back in the 50's
Not sure he would have loved it. ML finds lots of correlations between things but can't explain them, and sometimes the connections are not even related, just strangely correlated. It can give hints at things, but it can't explain anything without human judgement. Seeing that something has a coincidental relation, without explanation, isn't really science.
@@OffTheBeatenPath_ yes but they didn’t have the speed and computing power we have now. Computers were simply too slow back then to see the benefits which we are now discovering.
A great man. One of the very greatest.
Given the current existence of machine learning & deep learning in the field of AI, hearing him talk about pattern recognition around the 5 minute mark is fascinating. We can do that now. We can do that really, really, *really* well now. I bet he would've absolutely loved seeing convolutional neural networks and all of that.
What the computer is really recognizing is still completely different from what we recognize. It's more like, they SIMULATE pattern recognition.
@Michael Lubin Yeah, but don't we as humans simulate pattern recognition ourselves as well? AI/neural networks are just able to run a much, MUCH larger number of simulations simultaneously, and draw their concepts/conclusions from them much more quickly than the human brain in its current state allows us to.
Isn't the human brain slowly built up over a person's lifetime in the same manner that a neural network is built via a machine learning model? I mean, isn't the whole of a human's experience basically logged and framed in our brains as what is essentially some form of logic tree or SQL database or something to that effect?
It's almost as if the only difference between a human and an AI/neural network is the hardware itself, coupled with the underlying network architecture being built on that hardware over time?
Not true. You are fooled by hype.
@@ericamann2533 Can machines have emotions and morality ?
@@UTKARSHARJUN define emotions and morality.
Never had the pleasure of meeting this man... but love him immensely. Thanks for uploading these.
I love how Lex Fridman recording the questions feels like fulfilling a dream of interviewing Richard Feynman for his podcast
Yep.
"The third year he wasn't allowed to play anymore." LOL!
We need about 10,000 Richard Feynmans teaching students today.
Right now we have 0, so we need about 10,000 more
Most of the best are on RUclips 🙂
Teaching what ?
we need teachers with spines. He defines things clearly and in a no nonsense way, which is pretty much the antithesis of the level of discourse in 2020. Everything has to be obfuscated and twisted to fit political narratives now, and anyone that asks questions is a heretic to be burned at the stake
And yet if parents taught their children, even more than they do today, to be decent human beings, the progress in one generation would be greater than 10,000 Feynmans could achieve in a hundred generations. Or maybe not. Richard would know the answer :D
The way he repeatedly used the word "present" when describing the computers of his time makes me think that he was smart enough to predict that in the future there may be people watching this whose computers can do some of the things he said are difficult with ease.
Exactly! ;)
What things?
I suppose even you, 2 years ago when you wrote this comment, could not relate to the exponential tech growth... I guess you could not imagine what ChatGPT does today... 2024.
This man really was blessed with the whole package: one of the most extraordinary minds of the last centuries, but at the same time the necessary wit, charisma and didactical genius to very entertainingly convey any topic of any complexity to any kind of audience. There are no words for how much I worship Richard Feynman.
Professor Feynman, your intelligence is just gated by time. I am an NLP research student, and your insights are timeless even 40 years later.
Thank you so much for posting this. Feynman was visionary in so many things... Respect!
Can you imagine what he would think today if asked the same question? He hinted at facial recognition and fingerprint comparison, which back then was considered nearly impossible - today these are some of the simpler things that AI does, and much better than humans.
Only because we provide near-unlimited training data with captchas and the like.
It wasn't considered "impossible". He said it himself: it just takes too long with the computational capacity and memory available at the time. A human can do this faster, therefore teaching a machine to do it would be impractical. And he later said the same thing about weather prediction (not much different from facial recognition conceptually): right now machines are slow, but they will probably get a lot faster and will be able to account for more parameters as technology evolves. This is where we are today. We have increased our capacity, and we have the algorithms. As a result we see a rise of AI in many fields.
Poor Prof Feynman didn't know then that facial-recognition AI software would be a reality three decades after this early-80s lecture. His greatest skill, beyond being an exceptional scientist then and now, is that he was incredibly imaginative and a damn good communicator. If you have read his book on Quantum Electrodynamics (QED) with Feynman diagrams, you know he made it so simple to understand, even for an average high-school kid. I was so impressed that I got hold of his original 1964 Caltech lecture notes in physics... it was not easy, as I am from 🇲🇾 Malaysia!!! 😅
Incredible that this was filmed 40 years ago, and he got just about everything right. It basically tells us that the fundamental computational theory is still viable in terms of what machines can and cannot do.
16:33 damn, even the bugs are the same 😂
I love how the 2nd question reifies the computer as if it were something seemingly autonomous and distinct from its designer/creator.
Feynman is brilliant and puts wonder into the minds of his audience. I wish we had more like him. No, Neil DeGrasse Tyson doesn't even come close.
While I agree, it's better to have Neil than nothing at all.
Neil always felt like a showman. Like he's a salesman for science. Feynman is pure passion and excitement.
Tyson is nowhere near as funny as he thinks he is. He has a seemingly inexhaustible supply of common misconceptions which he laboriously deflates, all the while marvelling at his own wit. He doesn't really encourage free thinking by the audience, who usually know where he is going a couple of sentences (and a number of his chuckles) before he gets there. I like the man, but he is hard to listen to after a while.
Tyson should stick with chicken 🐓🍗
@@hazardeurIt's better to have truth than nothing at all. He's selling a religion, not a reality.
One of the greatest minds. On his physics admission exam to Princeton he got the highest score ever recorded at the time, and one of the professors commented that he should teach instead.
@Chaotic Amphibian what! That’s amazing! Do you have any stories?
@Chaotic Amphibian for real .?
This is false. Feynman was dumb until he met his wife.
Lol maybe he matured late, wasn't he married at like 19 or 20
I like how open-ended Feynman leaves his answers here. He never gives an ultimatum about whether AI will supersede humans, just interesting anecdotes.
Awesome. Thanks for the upload---staring at my Feynman Lectures book.
Thank you for sharing. Feynman is one of the Greats. I can listen to him all day everyday . Thanks.
1985: The lighting is different, the face is different... 2020: DeepFake "hold my beer"
ew
@@lukejo7994 ok
One of the smartest men to ever live had trouble comprehending exponential progression... What hope do I have...
In fact, generative models like GANs can be thought of as doing some sort of "thinking" through backpropagation: the discriminator and the generator force each other to "think" in a certain way.
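For anyone curious, here is a minimal sketch of that adversarial push-and-pull. This is my own toy example, assuming PyTorch is available; the 1-D "real" data, the layer sizes and the training schedule are made up purely for illustration, not taken from the lecture or the comment above.

```python
import torch
import torch.nn as nn

# Toy GAN: the "real" data are samples from a normal distribution N(4, 1.25).
# The generator learns to mimic them; the discriminator learns to tell real
# from fake. Each network's loss pushes the other one to improve.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = 4 + 1.25 * torch.randn(64, 1)   # samples from the target distribution
    fake = G(torch.randn(64, 8))           # the generator's current attempt

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    opt_D.zero_grad()
    loss_D = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_D.backward()
    opt_D.step()

    # Generator step: push D(fake) toward 1, i.e. try to fool the discriminator.
    opt_G.zero_grad()
    loss_G = bce(D(fake), torch.ones(64, 1))
    loss_G.backward()
    opt_G.step()

print(G(torch.randn(1000, 8)).mean().item())  # should drift toward ~4 as training progresses
```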
Dude, 8:15 the man was ahead of his time in all senses.
Feynmann: Jack's face is different
Convolution Neural Networks: hold my beer
GANs: Now jack looks like Elon Musk.
@@psy_duck8221 lol
These neural networks run on GPUs that can run a large number of algorithms in parallel.
@@anteconfig5391 A large number of the same algorithm
@@anteconfig5391 Forget the GPUs, visual recognition can run on a Raspberry Pi.
A lot has happened since 1985, which none of us could have imagined, even Feynman. When I was asked back then whether there could ever be a machine capable of conscious thought, my answer was yes, because it already exists - us.
Yes, but you need to butcher the word machine first
I've watched Feynman a few times a year since these archives turned up on the internet.
Really an incredible mind and a fun teacher.
I'm a Deep Learning algorithm developer, and it's amazing to hear Richard Feynman discussing all of this so long ago. We have made great progress since then, but the problems and challenges are very nicely explained by Feynman!
RF: always brilliant, always entertaining, always with an interesting perspective. My hero.
The clip ended with a beautiful thought by Feynman " I think we are getting close to intelligent machines but they are showing the necessary weaknesses of intelligence"
The problem he is referring to with heuristic #693 is present in today's reinforcement learning systems as well! That raises the question: how far have we really come in the quest for intelligence, and how much of the progress we see today depends on just having more computing power and better pattern matching than 60 years ago?
obviously not very much, it is only 30 years
Most pattern-matching algorithms today (deep NNs, for example) existed back in 1985 but were abandoned as ineffective because the computing power at the time wasn't sufficient. The rise of neural nets today is more of a resurgence than a new discovery.
In terms of procedures, no progress has been made; our progress has been only in terms of computing power. The 20th century gave birth to the smartest people in human history.
Yeah, I was about to say: isn't heuristic weighting and selection precisely what we call machine learning? So in fact Feynman from the 50s had enough understanding of computer science to formulate and answer these questions with the same skill as we would today. The problem today, it seems to me, is that we have become experts at using distracting jargon and getting awestruck by big numbers.
Also, heuristic #693 reminded me of over-complete autoencoders, where the input simply gets copied to some of the hidden units (when no noise is added to the inputs) and the autoencoder avoids learning anything but the identity function.
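To make that concrete, here is a tiny toy version of that failure mode. It's my own sketch, assuming PyTorch; the data and layer sizes are invented for illustration. An over-complete autoencoder (more hidden units than inputs) trained with no input noise can drive its reconstruction loss toward zero by simply approximating the identity map, which is the autoencoder's version of heuristic #693 handing the answer straight back.

```python
import torch
import torch.nn as nn

# Over-complete autoencoder: 8 inputs, 32 hidden units, trained to reproduce
# its own input with no corruption. With this much capacity it can get close
# to the identity function and learn nothing interesting about the data.
x = torch.randn(256, 8)                        # made-up data, purely illustrative
auto = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 8))
opt = torch.optim.Adam(auto.parameters(), lr=1e-2)

for step in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(auto(x), x)  # reconstruct the input itself
    loss.backward()
    opt.step()

# A denoising variant corrupts the input before encoding, which blocks the
# trivial identity shortcut and forces the network to learn real structure:
noisy = x + 0.3 * torch.randn_like(x)
denoise_loss = nn.functional.mse_loss(auto(noisy), x)
```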
Mr Feynman talks about principles and that’s why this lecture is so up to date and interesting.
Way ahead of his time and really right on so many things that haven't popped up until now.
His point about human intelligence at the beginning is excellent: humans are not the most intelligent possible things, and there isn't much point in trying to imitate human intelligence, any more than there is a point to building a mechanical cheetah. You can think of things like the youtube algorithm as primitive intelligences with completely different senses than humans have, which do things practically impossible for humans. They live in a very different world.
This seems to totally disagree with Ray Kurzweil, who seems to think that if you make a computer big enough, human emotions will spontaneously emerge. But human emotions are the result of human evolution, why should they emerge from a machine built in a lab?
AI cannot be a new species unless it is programmed to be, and I have doubts about that programming being possible in the real world, outside any simulation. The reason for my doubt is well known: silicon life is not possible in the universe. That Feynman "lero lero" (a Brazilian expression for idle chatter) about the mechanical cheetah and so on reminds me of a long-ago dinner at MIT where we found Asimov, Searle and Minsky hastily discussing future trends in AI. Minsky almost came to an argument with Asimov over the Asimovian perspective on the central aspect of (future) AI. It was a memorable dinner that could be titled "Could General Intelligence Be a Mechanical Cheetah or Whatever?" Searle is still alive. He knows why that title! LoL
@@DumbledoreMcCracken To correct you and preach further on your notion ;) - *You know about - ruclips.net/video/JM77aTk1XyI/видео.html
Pardon me, but I think I can infer from your comment that it's necessary to define "think", to define "intelligence", before making any comparisons. It was an observation I had as soon as I saw the title of this video: "How can we say that something thinks, without first defining what thinking is?"
Kurzweil doesn't talk about human emotion, just emotion. Many animals with brains, possibly all animals with brains, have emotion. Emotion might be a result of biological brain evolution, or it might be a fundamental component of any optimal solution for general intelligence.
@@theBaron0530 That was an interesting "thought"... How could you possibly define something 'first order' that literally defines the 'second order' problem space of linguistics to which you are referring? It's like the 'simulation', in its totality, trying to comprehensively define itself when it lacks the means to 'look in' from outside itself, because if it had the means to 'see' from outside then it would be "more than simulation". A system space cannot define itself using only itself..
To think about thinking is a second-order operation.. Unless you have the ability to 'step outside' your thinking, or have something else to reference off, you cannot define 'thought'.. and if you did find something else to "reference off" (sorry about my shitty terminology), then that information is now a product of "your" thinking, and therefore unable to define the totality of the system..
I dunno.. maybe (??)
I was smiling for the whole lecture : )
For me Feynman telling a gamer story from the old days made the video.
Love Feynman! ...the manner in which he thinks... curves... straight lines... + a sense of humor
The mental beauty of Feynman is that he can weave irony into every step of his language, which makes it entertaining and at the same time a kind of genius nobody else equals. What a wonderful human being he was. His NY accent helps in seemingly creating a stage for standup, which is in fact what he is doing here... an improvised standup routine. I would say he will be missed, but these moments in time are captured for eternity for future mentalists. Gr8! Peace ☮💜Love
1. I'm a fan of Feynman and have read most of his books.
2. I think he would be amazed to see how far computer learning has come. The idea of a computer changing its own code was becoming feasible when he gave this lecture.
3. I really wish someone had asked him his thoughts about HAL9000 from the movie 2001 A Space Odyssey.
I think he'd be tickled that people are watching this lecture on their phones.
Computer learning is still just mashing database query results together into various outputs that still may or may NOT be the results we're after.. a computer still has absolutely no way of telling what data is important in the real world.
Feynman: "we could try to make a machine that runs like a cheetah".
Boston Dynamics: Indeed, we can.
Hmmm...
Boston Dynamics' fastest robot is not faster than a cheetah, only faster than a human.
@@mikebrown354 First of all, it was a joke. Second of all, he said "run like a cheetah" not "run as fast as a cheetah". Peace!
Feynman: "we could try to make a machine that runs like a cheetah".
Boston Dynamics: Hold my beer
They're not close to building a cheetah-speed robot with legs that is multi-functioning. I don't think they broke 20 miles an hour; they're maybe at 12 miles an hour. You're misunderstanding the difficulty.
@@mikebrown354 how fast are your robot cheetahs?
5:13 Jumpscare! I seriously thought this was coming out of the TV
yeah but... I mean... not bad huh
Stumbled across him explaining atoms and molecules last night. The way he explained it brought me so much joy I giggled for like five minutes.
Love Richard...something about his delivery always reminded me of Ed Norton from Honeymooners ❤
A great video. It shows that when trying to get intelligent answers/behavior from a machine, it is always critical how you present the real world and its problems to it. And, of course, given more resources, machines will start exploiting all the loopholes you leave them.
The machine will never know the real world - the weakest link will always be the human operators who will always be feeding it biased, incomplete data... AI is a myth. Computer programs are getting better at what we would like them to do, but "AI" is just a buzzword.
The students were begging him to continue the lecture , can you duckin imagine ? 😂
that's what you get when you don't have tik-tok
That's what you get when you have real teachers....
They were not students
Say "fucking". No need to censor yourself.
@@Amethyst_Friend RUclips will delete his comment.
13:43 blew my mind... it's the simple things that often skip past us
What a brilliant remark and/or observation.
_"I think that we are getting close to intelligent machines but they're showing the necessary weaknesses of intelligence."_
Never a dull moment with Richard Feynman
Who? Such disrespect
Feynman talked about our inability to build a definite procedure that can “recognize things”… Well that is precisely what the so-called deep learning versions of supervised learning do even better today than humans (i.e., more systematically). Machine learning transcended the whole idea of procedural programming. Nevertheless, it’s amusing how the “heuristics” he mentioned sound like weights in a neural network. Avoiding the collapse or the divergence of these weights became the challenge.
Also, we can concur: AI shouldn’t be about writing poems but having a machine want to write a poem.
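As a rough illustration of "heuristics as weights", here is a minimal sketch in plain numpy. It's my own toy example, not anything from the lecture: instead of hand-coding a rule for recognising a pattern, a single logistic unit adjusts its weights from labelled examples, and the learned weights play the role of the heuristics.

```python
import numpy as np

# Synthetic data: the "concept" to recognise is whether x1 + x2 > 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# A single logistic unit trained by gradient descent: no hand-written
# recognition procedure, just weights nudged by examples.
w = np.zeros(2)
b = 0.0
lr = 0.1
for epoch in range(100):
    for xi, yi in zip(X, y):
        pred = 1.0 / (1.0 + np.exp(-(xi @ w + b)))  # sigmoid output
        grad = pred - yi                            # gradient of the logistic loss
        w -= lr * grad * xi                         # the "heuristic" weights adapt
        b -= lr * grad

acc = (((X @ w + b) > 0).astype(float) == y).mean()
print("training accuracy:", acc)  # close to 1.0 on this separable toy data
```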
He was so intuitive then. If it were possible, I would have loved to ask the same questions again today.
Very interesting. Almost secretly and philosophically, he is standing up for a few ideas:
1) There is no single, defined general intelligence but different intelligences
2) The raw materials (carbon vs silicon etc.) can define the type of intelligence per se (a common belief in the 60s/70s/80s)
3) Heuristics have a problem: the best they can do for intelligence design is performance-oriented or limited-context (and self-limited) learning, obviously aimed at contextual performance. Therefore a machine cannot fully discover valid abductive inferences, because it lacks an intrinsic cognitive capacity for recontextualisation.
Basically he was arguing for the fundamental concepts held at MIT at that moment in the American (Anglo-Saxon) history of science.
Sorry for my English. I also like and admire Richard Feynman's content and style.
Never be sorry for your English; rather, you should ask for improvements!
Dude your English is in the top percentile, most natives would have serious trouble understanding what the fuck any of that meant lol. Besides that, yeah that is generally applicable to what he was saying.
@@GarrettX001 Thank you for your kindness. I am a Brazilian from a mix of different bloods, German, Danish (strong), Oriental Jewish (strong), Portuguese etc. (all mixed), and despite the different origins we basically only speak Portuguese. Recently I was really cogitating (well, from the Latin cogitatum: freely thinking about) making a bilingual RUclips science channel in this country with Anglo-Saxon (native English-speaking, American, British) partners, but the fact is that my English is poor. I am still studying a lot. A really good channel here in the Portuguese language is very difficult, but not impossible; there are, say, no more than reasonably good channels here. Maybe I could produce a kind of interview-framework channel (audio only, with a slide show) cast with bright scientists from the English-speaking countries, like Event Horizon does, I would guess. It is hard. Google Translate doesn't solve many barriers, for instance in math (yes, math is also spoken during lectures), and Google doesn't have equation search using, say, LaTeX. Those kinds of educational handicaps, and other more common ones (like good local infrastructure), are harder in countries like Brazil.
You are being very helpful and very useful when you upload clips like this.
I hope you realize the role you are playing and the benefit you are creating.
“If you’re trying to make an intelligent machine, it’s going to try all crazy ways of avoiding labour” I love this man so much
A theoretical physicist talks about machines.
And 20 years later or so, his every word becomes true
One thing that is wonderful and worth noting, in my view, is how often and how well he uses QUALIFIERS.
Certain things are difficult but not impossible. Understanding why and how things are difficult shows an understanding of the problems being faced and therefore what is required to answer the problems.
This is a great watch Lex, cheers for uploading.
Happy you promote Dr Feynman's vision and curiosity
I saw this as a university student. I just watched it again at 54 years old. He was dead on, and the advances in AI since then are quite evident from his comments, particularly where he describes the storage capacity and computational power of the machines of his time.
This was so good! I had never heard of this guy before today! It was through a Bill Gates video that I noticed Feynman. I love the way he breaks it down!!
"Silicon Valley seems to think it is intelligence that creates consciousness, but it is consciousness that creates intelligence." - George Gilder
Intelligence can exist without consciousness.
Brilliant man. Great explainer. Wow
Wow, this feels really ahead of its time. In the very last part he essentially described the problem of over-fitting or shortcut learning.
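For readers who want to see that failure mode in miniature, the sketch below is my own made-up example (plain numpy, nothing from the clip). A high-degree polynomial fitted to a handful of noisy points reproduces the training data almost exactly but swings wildly in between, memorising the noise the way heuristic #693 memorised the answers.

```python
import numpy as np

# Ten noisy samples of a sine wave.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=10)

fit_low = np.polyfit(x, y, 3)    # modest capacity: follows the trend
fit_high = np.polyfit(x, y, 9)   # high capacity: interpolates the noise

x_dense = np.linspace(0, 1, 200)
print("degree-9 training error:", np.abs(np.polyval(fit_high, x) - y).max())     # near zero: memorised
print("degree-9 spread between points:", np.ptp(np.polyval(fit_high, x_dense)))  # typically larger than the data's own spread
print("degree-3 spread between points:", np.ptp(np.polyval(fit_low, x_dense)))   # stays close to the data's spread
```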
I could listen to this guy talk all day.