The interview was recorded in March, 1989.
Thanks for sharing this masterpiece!
I thought at first this could be false. At 15:20 he refers to a computer beating a chess grandmaster this year. Giving TA, the look of the video, and the fact that '87 was when the first high-ranking player was beaten in the UK all the benefit of the doubt, I accept that it's March 1989. But the Deep Blue vs. Kasparov event in '96/'97 is what comes to mind first from what he says at 15:20.
He changed the way we look at computer programming.
R.I.P Mr John McCarthy.
Hi, are you alive?
Sit, focus, listen very carefully, and reflect. A giant in the field of computer science is speaking!
His explanation of the advent of calculus, from Leibniz and Newton to Boole to the predicate calculus, was indirectly a reference to how, over time, we sometimes don't discover or invent things; we just eventually unlock them from our brains. The reality of existence is already built into us for us to prosper. We choose "not to look out the window", one of McCarthy's popular quotes... I think, anyway. I found his explanation of the advent and different applications of calculus very interesting, in my opinion anyway. Just an opinion... thumbs up... very worth listening to this man.
Yes
Thanks for the Lisp, Mr. McCarthy!
and algol
((()))
@@jesuschrist7037 that looks like a vagina
@@obnoxiouslisper1548 you also came from vagina, not from pigs ass man, ur offsprings also will come from vagina, that is AXIOM as well
@@suryamenonwho told you i came from a vagina? i came from an artificial womb by methods of ectogenesis
Mind-blowing to know that this was recorded in March 1989...
Back when the interviewer knew stuff.
Yeah, imagine this level of conversation on TV today?
Well, you can plot this on a chart: people's reading comprehension has gone down through the decades.
This particular interviewer still knows stuff... and more.
My mom walked by and said, "Why are you watching an interview with Colonel Sanders?" Lol
I am here in 2023 after 11yrs😮
Today I was searching for the history of AI, found his name, and came here to listen to the legend ❤️
Wow I'm from the future and am doing this exact thing today lol.
@@anthonyhernandez3546 I'm from the further future and came to do the same thing.
Same here ❤
In terms of increasing non-determinism and getting closer to AI (probably not linear):
Assembly -> Structured Programming -> Procedural -> Procedural with Overloading (OOP) -> Functional -> Constraint/Logic/Relational -> Contextual logic -> Probabilistic / Ambiguous
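A minimal sketch of the right-hand end of that spectrum, written in Common Lisp since this is a McCarthy thread; the function names, the tiny domain, and the probabilities are all invented for illustration:

;; Deterministic / procedural: exactly one answer.
(defun successor (n) (+ n 1))

;; Relational / constraint style: enumerate every pair (x y) in a small
;; domain satisfying the relation y = x + 1, instead of computing one value.
(defun successor-pairs (domain)
  (loop for x in domain
        append (loop for y in domain
                     when (= y (+ x 1))
                       collect (list x y))))

;; Probabilistic / ambiguous: return weighted possibilities, not a value.
(defun noisy-successor (n)
  "Toy distribution: mostly n+1, occasionally n+2 (illustrative numbers only)."
  (list (cons (+ n 1) 0.9)
        (cons (+ n 2) 0.1)))

;; (successor 3)                => 4
;; (successor-pairs '(1 2 3 4)) => ((1 2) (2 3) (3 4))
;; (noisy-successor 3)          => ((4 . 0.9) (5 . 0.1))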
Watching this while living in the AI explosion is insane. Imagine if we could take John here and show him all the stuff that is happening currently, with all those rumors of AGI by 2027. Insane. What a time to be alive.
The world has to thank John McCarthy for his genius.
Yes! I totally agree. One of the pioneers in the field.
Nice conversation about a nice subject. Thanks.
Thank you for sharing these with us, freely.
This guy's hair and beard are awesome
your hair and beard are awesome :)
15:30 he would have loved the modern-day version of Stockfish. It would have smoked Deep Blue while running on nothing more than a 1997 off-the-shelf workstation.
It gives a certain sense of relief that there are people like John McCarthy who think really rationally in a world sometimes full of irrationality.
GREAT MAN !!
McCarthy kind of looks like Colonel Sanders in that suit.
What superb foresight!!
What year is this interview from?
a very late response but 1989
Thou wilt dwell in every \lambda
For some reason, I feel like KFC
(rest in peace)
(in peace #'rest)
And in lisps that aren't trash:
(in peace rest)
John McCarthy is really smart, because he knows exactly what his purpose is whenever he speaks and what he's talking about. Most famous people today can't do both well.
I don't think that we need quantum computers for AI. One of the only detailed hypothetical models of the brain is Jeff Hawkins' "Memory-Prediction Framework," though it only applies to the neocortex. If you create a computer that imitates it, you could potentially get something with similar intelligence to a human that doesn't require a supercomputer to run. If you haven't guessed, I personally am working on such a project, which is the reason why I decided to learn Lisp.
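Out of curiosity about what the very smallest version of such a project might look like, here is a toy in Common Lisp. It is nothing like the real machinery of the Memory-Prediction Framework, just a memory that stores observed transitions and "predicts" by recalling what followed before; all names are made up:

;; Learn: map each element of a sequence to the elements seen right after it.
(defun learn-transitions (sequence)
  (let ((memory '()))
    (loop for (a b) on sequence while b
          do (push b (cdr (or (assoc a memory)
                              (car (push (cons a '()) memory))))))
    memory))

;; Predict: recall the most recently stored follower, or NIL if unseen.
(defun predict-next (element memory)
  (cadr (assoc element memory)))

;; (defparameter *m* (learn-transitions '(a b c a b c a b)))
;; (predict-next 'a *m*) => B
;; (predict-next 'c *m*) => A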
Given the mention of Deep Blue beating a chess grandmaster 'this year,' this interview would be from 1997-1998.
Vladimir Zuzukin Thank you
Actually the interview appears to have been recorded in 1989. A paper from 1966 is mentioned to be 23 years old.
Thank you for the update
I really appreciate it
When did this interview take place? thanks
Non-monotonicity is best achieved via formal languages that use probability. McCarthy probably knew this but likely felt that the computational burdens of representing probability distributions were too high, and so put the focus on non-monotonic logics rather than probabilistic ones.
I think McCarthy would be thrilled to see the progress being made in Bayesian networks, neural networks, and machine learning, but would remind researchers of the value of using formal languages and logics.
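A hedged sketch of that contrast, not McCarthy's actual circumscription formalism: in Common Lisp, a default rule whose conclusion gets retracted versus a probability that merely gets updated. The bird example and the numbers are invented for illustration:

;; Non-monotonic default: conclude "flies" unless the facts list an exception.
(defun flies-p (bird facts)
  (and (member (list 'bird bird) facts :test #'equal)
       (not (member (list 'penguin bird) facts :test #'equal))))

;; Adding knowledge retracts the earlier conclusion; that is the
;; non-monotonic part:
;; (flies-p 'tweety '((bird tweety)))                  => T
;; (flies-p 'tweety '((bird tweety) (penguin tweety))) => NIL

;; Probabilistic version: nothing is retracted, the number just moves.
(defun flies-probability (penguin-p)
  "Toy conditional probabilities, purely illustrative."
  (if penguin-p 0.01 0.95))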
Who says that we aren't doing what we were programmed to do?
The sad part is that this could have been recorded in 2014, and it would still be as true and accurate as it was in 1989. Not much progress has been made, I fear.
Speech recognition, automated driving / flight, computer vision, learning algorithms (search "DeepMind Atari demo"), programs that can learn to play a game from reading the manual . . . the AI Winter of the '90s is over. Progress can always be minimized by listing the myriad things that *haven't* yet been accomplished, including the ultimate prize of "strong" or general AI, but as McCarthy says, the barriers on the road to those new frontiers can run, but they can't hide.
+seeibe Partially right. The field of non-monotonic reasoning has come a long way since '89. Default logic was very new back then, and we have now progressed towards ways of dealing not only with static representations but also with evolving ones, and have also, e.g., developed formal ways of dealing with the inconsistencies that arise from those. Much has been done, but much more must still be done in the field of computational logic.
+Jeremy Raines Those are all sub-symbolic tasks. Totally different fields.
I would argue we've made quite a lot of progress in figuring out just how difficult AGI and consciousness is to replicate in silicon. But, the attraction of the field has increased a million fold. Too bad HAL in 2001 hasn't taught us any sort of lesson.
this comment isn't aging well lol
Please put that in the description.
He was so interested in the Turing test and believed Turing founded AI, a field whose name he himself coined
Thanks McCarthy for introducing me to the land of lambdas
Also intuition could be part of our program
Hello, may I have permission to use this video, for educational purposes only? Thank you!!
He was what "The Architect" was after. He is "The Architect".
What great hair!
He's in my college course textbook
The LISP language is a good system for AI.
I wonder whether the bird cage will have a top if I ask Siri to draw me a bird cage for my penguin.
John McCarthy is truly a pioneer. R.I.P JMC
What kind of razor is Mishlove using?
I am often confused about the exact goal of the entire field studied by Dr. JM and all the other admirable pioneers.
The Turing test demands a human-like NLP system, and it seems some people have already succeeded at it.
It's hard to deny that we have at some points succeeded in finding mathematical solutions for some features of the human brain. And the excitement of making this progress all comes from a fundamental belief that the human brain is worth modelling. But should we be making tools that help us to live?
Why are they sitting so close?
Ahabite This is a result of the now obsolete NTSC TV format (roughly 640x480 pixels). If you watch older TV shows using classic three-camera shooting (prior to the 1990s), the aspect ratio forced odd staging and camera work to fit all the actors or interview participants in frame. You will notice that Jeffrey is sitting slightly behind Dr. McCarthy, and their knees overlap. Two cameras at the wings take turns doing full-face shots.
+FichDichInDemArsch LOL
Mental footsies is actually a common practice in Scientology
Legend
RIP
Genius .
With the creation of quantum computers, we will soon witness true artificial intelligence. Only then will it be too late.
👆🏿, LISP is the most intuitive language
Fear, hope, love, hate, longing, despair. Given a thousand years, man will not be able to create a computer with those attributes. Oh, by the way, on a separate note: McCarthy is amazing and inspiring, and no man will ever be able to create a computer that is capable of being amazed and inspired.
Spirituality as a sign of intelligence?!
I would just say, revered sir, you would be remembered forever, next after Einstein. Thanks for your mind, which brought us into something machines work upon, and since then technology has been picking up ever more... so thanks for bringing our consciousness to this subject.
Perfect !!!
next insert the Pegasus Disc from the altered timeline
John McCarthy is the father of Artificial Intelligence. A computer takes input, processes it, and gives output. In AI, a computer can take input, think while processing it, and give output. That output may be information, language, images, speech, or robotics. Now computers are able to deal with incomplete information, whether fuzzy or non-monotonic. Dr. Poli Venkata Subba Reddy, Professor
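To make "fuzzy" concrete, a minimal Common Lisp sketch of a graded membership function instead of a crisp true/false; the thresholds are made-up numbers:

;; Degree of membership in the fuzzy set TALL, between 0.0 and 1.0.
(defun tall-degree (height-cm)
  (cond ((<= height-cm 160) 0.0)
        ((>= height-cm 190) 1.0)
        (t (/ (- height-cm 160) 30.0))))

;; (tall-degree 150) => 0.0
;; (tall-degree 175) => 0.5
;; (tall-degree 200) => 1.0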
Looking at this guy makes me want to order a big bucket of KFC!
Hey, Penguin Logic...
Sad that every company claims to have AI. The AI that McCarthy envisioned hasn't been realized.
Long live metaprogramming
He predicted the year 2029. What does it mean? Full AI control? Let's wait for the upcoming events, then...
I can't believe this; can you not see the actual implications being made in this interview?? You are being told that YOU are going to be the intelligent computers. It's actually funny if you read between the lines; the mention of the textbook... so simple... instead of making a conscious computer, it's much easier to chip a human and get them to perform at your beck and call.
24:37 Socrates or Buddha, interchangeably
Colonel Sanders's career change.. :)
Or are we aiming to make a machine that has all the features we do, including everything we are now arguing "can or cannot" be made or done in mathematics, so that THEY can replace us? It's a very important question, because we know the power of math, and many have already used it to make dangerous tools that endangered the world. With a dangerous goal in mind, we are going to put the world in danger yet another time.
I do not get the epistemological foundations of his atheism; I see him as an optimistic idealist. Would kill uncle Bob to have a chat with him
i just say: The Matrix
actel
nah, mine's a macro system
Why are they so close to each other? lol
Back in the 1980s, when this was recorded, TV sets were much smaller, so there wasn't as much room.
...is that Einstein?
I bet he already had a million Bitcoins before it was cool 😎
Man, I don't know if you're still alive, but I needed you so much. I was going to ask you so many questions. Did you know what your artificial intelligence has become? My god, you were so intelligent. This is too crazy. Is it virtual or real? I'm getting crazier every day. I could be saying how great it is if some bad characters hadn't used it to humiliate me. It read my whole mind, but I still have fun playing with my mind, making princess movies, imitating famous people. I play with my mind. If you're alive, help me; if you've died, rest in peace.
Maybe I can help you, Lucia. I know about many things, including AI.
And now we have Wolfram Alpha.
and what about now?
@@JijinSuresh Have you seen Cycorp? Made by his student.
This guy started AI
Neural networks as a model of thinking/knowledge are missing from this man's thinking about AI.
Well, Machine Learning as an AI paradigm literally did not exist at the time so...
There is a lot of AI besides neural networks. There were fuzzy logic, fuzzy networks, etc. But neural networks have proved to be more robust since the '80s, with a little overshoot.
You expect all his work to be covered in 27 minutes? Yes, he talked about ML in other interviews, but ML is a component of AI, and he would have talked about statistical learning.
He was a lackey of Minsky. This meant he was a heavy symbolist who did not believe in the neural network approach of the connectionists. His work was centered on solutions using what are essentially glorified Boolean statements, i.e. expert systems. While he did contribute significantly in that area (much more so than Minsky), we can in retrospect say it was largely a waste of time, as AI faced a dead end in that direction, something realized by the late '80s.
:-)
mm
@SororThothma Your arguments (if they can be called arguments) do not compute...
@SororThothma *Makes vague accusations*
Sad but typical.
So few views here... :(
Mishlove’s focus on Socrates at the end of the interview was rather irrelevant and farcical.
Such a picayune remark in itself...
This be irony.