Congrats on interviewing an absolute legend! This is so good!
WOW, this was unexpected!!! Thank you for the content Edan, you're awesome. I love the youtube algorithm for having suggested your channel, I feel like I found a gem ^^. Thank you for taking your time to post content here.
I really love this, it's great to hear Sutton and his deep thinking - a real gem, I learned so much from his work. On the skill of interviewing - sit back and let the man carry himself! There are a few moments where it feels like he's getting started but doesn't get to finish. You're great too, but we can hear your ideas any time on your other videos. Take this on board and please please do more interviews with more ML icons in the future!
I've listened to so many talks from different ML engineers & scientists. Rich is the first to even mention the concept of augmentation with ai being a logical advancement, even if only for a moment. What a brilliant mind and a wonderful talk with him overall.
I enjoy listening to Sutton speak. And I'm sorry, but it's a kind of torture watching you argue over trivial details with him
I actually break danced with Rich Sutton. Such a dance and RL legend!
Do you have any of those breakdancing videos of you and Rich available ?
Hi Edan, what you posted here is nothing short of a gem! I am a huge fan of Rich Sutton and his ways of thinking/working. Thank you so much! Looking forward to more such interviews of prominent personalities.
Awesome! If this is your first interview, you are surely on the right track.
This is one of the best interviews I've seen in many, many years, and I regularly try to keep pace with AI. :]
Thanks for sharing it, it's so refreshing to listen to Sutton's views on Science, and his fresh ideas on RL!
Thanks for producing this interview and letting us see the personal side of Rich Sutton. I've been going through his RL book so was pleased to watch this. It was a total surprise to learn about the projects going on in Alberta.
I'm officially a fan of Rich! hahaha a lot of good insights, thanks for sharing!
Bro, let him talk, let him finish his full thought, don't interrupt with your own thoughts or other questions when the guest has a deeper meaning in what he thinks about the subject or the question
I get you. But it seems part of the informality comes from the fact that he knows Rich Sutton/has been taught by him before. So like maybe he wasn’t thinking of him as an interview guest idk.
@@galactromeda hahahaah, make 3 more comments please
@@rtnjo6936 Your comment is the top comment. So the whole time I just kept seeing you saying "bro, let him speak". You weren't even part of the interview, and it would imply that you think it would be better as a lecture.
I just didn't want to see your comment as it didn't draw attention to anything interesting about the interview, such as what they were discussing.
Even with you heckling him the whole time I didn't notice him interrupting him or cutting him off any more than is usual in a conversation, but you had me looking for flaws the whole time.
He did mention something interesting very early on, that he was partnering with John Carmack, which led me to this other interview. ruclips.net/video/uTMtGT1RjlY/видео.htmlsi=LKI9Iatb7WbWJ7hz
I'm so pleased to hear that he is largely in agreement with Yann LeCun. I admire the work of both, and think that their emphasis on higher level prediction and representation is very important.
"I'm biased in the sense that I know about it."
Okay, if you're going to ask someone if a task is easy to accomplish and they answer you, then take them at their word or don't interview them at all.
"Easy" is a relative word. It relates to the person's abilities, senses, experience, and only they can tell you how they feel about it.
So don't make it more complicated. He told you it was easier to scale than to invent a whole new concept or approach from scratch.
It's obvious too.
If you want a green table, and you have to decide whether to buy a table already made and paint it or build a table from scratch having no tools or experience...
it would just be simpler to buy the table already made. If you can find it in the color you want then that's more convenient.
Why do you have to press the point?
This is where I leave.
Because intellectuals like to jerk each other off.
And I don't need to be a witness to that.
Can you facilitate a conversation between Richard Sutton and Jeff Hawkins of Numenta? Hawkins' Thousand Brains Project maps very well onto Sutton's need for scaling and for continual learning. And Sutton can help Hawkins' team break through the engineering barriers that are slowing them significantly. Both sides have the same long-term goal, but are seeing it from their current points of view. It'll be a stretch to get each one to see it from the other's eyes, but once they do, massive progress will be made in both fields.
Thank you so much Edan! The quality of these conversations is incredible. Keep up the great work!
29:10 I'm so glad he said that. Not an expert or anything, but it's bothered me for a long time how many researchers train on offline data in RL to get state of the art performance on a benchmark. We're kicking an important can down the road, and undermining the very benchmarks we use to measure progress.
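The offline-versus-online worry can be made concrete with a toy bandit sketch (my own illustration, not anything from the interview): an agent evaluated on a fixed log inherits that log's coverage gaps, while an agent that keeps interacting can gather the samples it needs to correct itself.

```python
import random

random.seed(0)

# Hypothetical 2-armed bandit: arm 1 pays more, but the logged
# (offline) data comes from a policy that almost never tried it.
def pull(arm):
    return random.gauss(1.0 if arm == 0 else 2.0, 0.1)

# Offline view: a fixed log with 99 pulls of arm 0 and only 1 of arm 1.
# Any estimate for arm 1 rests on a single stale sample.
log = [(0, pull(0)) for _ in range(99)] + [(1, pull(1))]
coverage = [sum(1 for a, _ in log if a == arm) for arm in (0, 1)]

# Online view: an epsilon-greedy agent keeps interacting, so it can
# discover arm 1 and keep refreshing its value estimates.
counts, values = [0, 0], [0.0, 0.0]
for t in range(500):
    arm = random.randrange(2) if random.random() < 0.1 else values.index(max(values))
    r = pull(arm)
    counts[arm] += 1
    values[arm] += (r - values[arm]) / counts[arm]

print(coverage)   # the log barely covers arm 1
print(counts)     # the online agent ends up favouring arm 1
```

The point of the sketch is the coverage asymmetry: a benchmark scored against the fixed log can never probe what the logging policy didn't try, which is roughly the can being kicked down the road.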
I really liked the interview. Ignore other comments.
Great talk! Thank you.
You have chosen to interview one of the most complicated persons to interview. Great mind but also bitterness about lack of appreciation. You survived it, congrats.
Why do you think he is/might be bitter?
loved the way he thinks. this is a gem
This was most appreciated. Thank you.
Can we develop a "value function" that rewards inductive reasoning? You might measure an inductive proposition by how many low-level nodes it is consistent with, or summarizes, across collections of verbs, negations, prepositional phrases, and objects in an expression. Some sort of entropy-reducing statistical linguistic metric?
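One toy version of this "value function" idea can be sketched (entirely my own speculation, building on the comment, not anything discussed in the interview): reward a candidate generalization by how much it lowers the token-level Shannon entropy of a small corpus when the repeated pattern it captures is abstracted into a single symbol.

```python
import math
from collections import Counter

def token_entropy(text):
    """Shannon entropy (bits/token) of the whitespace-token distribution."""
    tokens = text.split()
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def induction_reward(corpus, pattern, symbol):
    """Reward = entropy drop when `pattern` is abstracted to one `symbol`."""
    before = token_entropy(" ".join(corpus))
    after = token_entropy(" ".join(s.replace(pattern, symbol) for s in corpus))
    return before - after

corpus = [
    "the cat does not chase the ball",
    "the dog does not chase the stick",
    "the bird does not chase the worm",
]
# Abstracting the shared verb phrase lowers per-token entropy,
# so the "inductive" rule earns a positive reward.
print(induction_reward(corpus, "does not chase", "AVOIDS"))
```

A pattern that never matches leaves the corpus unchanged and earns exactly zero, so the score does distinguish genuine summarization from noise, though a real metric would need to penalize the description length of the rule itself (à la MDL) to avoid degenerate compressions.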
This was hard to watch, especially in the beginning. Let him talk, it’s why we tuned in. I think you tripped over your ego a bit. Hopefully this is some reinforcement learning for you. ;-)
exactly what rich is saying.
sadly horrendous interviewing.
You should keep your mouth shut and let the GOAT talk.
Excellent camera work!
Hey bro! How did your master's thesis go? You should make a video about it.
Really good content, I don't understand the memory snapshot thing properly, is there any paper or resource to understand the idea?
During the discussion on snapshots and their implications for learning, I wondered: what if we deliberately want to forget a certain snapshot? Last year NeurIPS had a challenge on unlearning that basically addresses privacy-related issues. Whether this snapshot idea would help with forgetting a data point is an interesting question to work on.
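A minimal sketch of that snapshot-based forgetting idea (my own toy illustration, with a running mean standing in for a real model; nothing here is from the talk or from the NeurIPS challenge): keep periodic snapshots of the model state, then "unlearn" a data point exactly by rolling back to the snapshot taken just before it and replaying the later data without it.

```python
# A trivial incrementally-trained "model": a running sum and count.
class RunningMean:
    def __init__(self, total=0.0, count=0):
        self.total, self.count = total, count
    def update(self, x):
        self.total += x
        self.count += 1
    def mean(self):
        return self.total / self.count if self.count else 0.0
    def snapshot(self):
        return RunningMean(self.total, self.count)

data = [1.0, 2.0, 3.0, 4.0, 5.0]
snapshots = {}          # index -> model state BEFORE seeing data[index]
model = RunningMean()
for i, x in enumerate(data):
    snapshots[i] = model.snapshot()
    model.update(x)

def unlearn(index):
    """Exact unlearning: roll back, then replay everything after `index`."""
    restored = snapshots[index].snapshot()
    for x in data[index + 1:]:
        restored.update(x)
    return restored

forgot = unlearn(0)     # forget data[0] == 1.0
print(model.mean(), forgot.mean())  # 3.0 vs mean of [2,3,4,5] = 3.5
```

For a model this cheap you could just retrain from scratch; the snapshot trick only pays off when replaying from a nearby checkpoint is much cheaper than full retraining, which is exactly the trade-off real unlearning work wrestles with.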
Terrible conversation. Too much hero worship on one end, and too much self-importance on the other. And not much in the line of useful ideas, just a lot of words.
This is really great. I really appreciated that you pushed when you were not satisfied with Rich's answers. The conversation got deeper, which I found very interesting.
fantastic discussion
watch dwarkesh patel and lex fridman and learn
lex fridman is a charlatan, and a rather incompetent one at that. what one may call a bum.
@@arthurpenndragon6434 Why do you think Lex is a charlatan? Did you develop that insight into his character via the interviews alone?
Interrupting Sutton was not enough you had to add ads every 5 mins so it’s unlistenable, well played
Thank you
holy shit, its him
I don't think Rich has done interviews like this, you had him on the edge lol. Rich thought he was on TMZ at 20:00
Even if the 'emotion' is a subjective and emergent byproduct of the algorithmic simulation running on their hardware, it would seem pragmatic to treat them ethically so they don't simulate taking offense at their treatment... leading to conflict.
we just know that certain states of consciousness like happiness arise under certain constellations of atoms, which happen to be biological. but we don't know if dead matter has a state of awareness of some sort.
in the end emotion seems to depend on an ego and the belief that it must be protected, thus generating positive and negative experiences. we could somehow replicate that in a computer, but on an existential level my guess is it could experience suffering and joy, though the sensation would be different from ours.
it's a scary thought that a superintelligence could potentially create a digital hell for digital beings
Man, just let him talk.
I thought they had a great dialogue.
Read to figure out other people's ideas. Write to figure out your ideas.
great content
Please stop disagreeing for the sake of disagreeing. Choose your battles, man. Christ. And stop interrupting so damn much.
I appreciate the interview, but please try not to interject with "hmmm" and "yeah" while the speaker is talking.
53:10 it may not be only discrimination; imagine what would follow. If AI were truly more intelligent than us and we kept it from fulfilling its own goals, which hopefully are carefully aligned, but even then, if we kept it from doing the right things needed for humanity to progress, then AI may find itself torn between following commands and acting on our behalf, but against us. Which in return we may see as hostile, and conflict would be the result.
AI rights, for those AIs that would want to work with us, I see as pretty much necessary for a future that is not a dystopia. Even more so when consciousness and empathy are part of them.
Cool
✅
interviewer is trying a bit too hard to sound smart. ask questions, get Rich talking, we don't care what you think is easy or hard or what you know.
ask more difficult questions, don't provide your own opinions
😂😂
Folder of time
15:37 you're interested in predicting the next frame?
15:52 why all the sass?
17:33 a model of the world is not like a video frame
Are we doing anything with the fact that people already have a model of the world?
19:35, 19:49, 19:55 just how are we to characterize anything? We're out of high school, but not of culture
59:50 lol
@sapienspace8814 proximal policy optimization
So you're the Hot Ones host but for AI, huh
Surely if you are able to solve difficult problems, you are forced to develop rich internal representations? Humans are barely able to articulate how they come up with particularly creative solutions; it's all buried in internal representations.
???