GPT-4 now correctly answers the "5 pieces of clothing to dry in the sun" question. It even explains that drying clothes in the sun is not a sequential task but a parallel one. Great video, but already outdated regarding this example from Yejin.
Thanks for checking and reporting. Everything is moving so fast, it's amazing!
Thank you for these wonderful discussions.
Cognitive psychologist Daniel Kahneman describes "System 1" thinking as fast, automatic, and often based on heuristics or "rules of thumb." This is in contrast with "System 2" thinking, which is slower, more deliberate, and more logically rigorous. GPT-4 was designed to mimic aspects of human System 1 thinking: generating responses quickly in a single pass, without a System 2 error-correction stage.
When I asked GPT-4 the clothesline question, it got it wrong. When I simply asked it to double check its answer, it immediately found its mistake. So go ahead and laugh at GPT-4 for being less intelligent than a young child, but you're laughing at it for being exactly what we designed it to be.
Yes, we can create systems with even better System 1 thinking. But no matter how good those get, we will probably be able to improve them greatly by adding a System 2 layer on top.
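To make the "System 2 layer" idea concrete, here is a minimal sketch of the two-pass pattern described above, assuming the official OpenAI Python client; the puzzle wording is paraphrased from memory, not quoted from the video:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(messages):
    # One forward pass: the model's fast, "System 1"-style answer.
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content

# Paraphrase of Yejin's clothesline puzzle (exact wording is an assumption).
question = ("If 5 pieces of clothing take 5 hours to dry in the sun, "
            "how long do 30 pieces take?")

# Pass 1: quick single-shot answer.
history = [{"role": "user", "content": question}]
draft = ask(history)

# Pass 2: a crude "System 2" layer -- ask the model to check its own work.
history += [
    {"role": "assistant", "content": draft},
    {"role": "user", "content": "Double-check your answer step by step and correct any mistake."},
]
print(ask(history))
```

The second call costs one more round trip, but it often catches exactly the kind of slip described above.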
Update: I just found the paper "Tree of Thoughts: Deliberate Problem Solving with Large Language Models" that came out a few days ago. Very relevant.
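The core loop of that paper is a deliberate search over partial chains of thought rather than a single pass. A toy sketch of the idea, where propose() and evaluate() are hypothetical stand-ins for the LLM calls the paper actually uses:

```python
import random

def propose(state, k=3):
    # Hypothetical stand-in: in the paper this is an LLM call that
    # proposes k candidate next reasoning steps.
    return [state + [f"thought {len(state)}.{i}"] for i in range(k)]

def evaluate(state):
    # Hypothetical stand-in: in the paper this is an LLM call that
    # rates how promising a partial chain of thought looks.
    return random.random()

def tree_of_thoughts(depth=3, beam=2):
    frontier = [[]]  # each state is a partial chain of thoughts
    for _ in range(depth):
        candidates = [c for state in frontier for c in propose(state)]
        # The deliberate, "System 2" part: score the candidates and keep
        # only the `beam` most promising ones before going deeper.
        frontier = sorted(candidates, key=evaluate, reverse=True)[:beam]
    return frontier[0]

print(tree_of_thoughts())
```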
It looks to me like LLMs build a language model from general data (the web at large) and then require specific training to learn facts.
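A toy analogy for that two-stage picture, using a tiny next-word counting model instead of a neural network (all text here is made up for illustration):

```python
from collections import Counter, defaultdict

def train(counts, text):
    # Next-word prediction in miniature: count which word follows which.
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

def predict(counts, prev):
    # Most likely next word seen after `prev`, if any.
    return counts[prev].most_common(1)[0][0] if counts[prev] else None

counts = defaultdict(Counter)

# Stage 1: "pretraining" on broad, general text.
train(counts, "the sky is blue the grass is green the sun is bright")

# Stage 2: "specific training" on a fact the general text lacked.
train(counts, "the capital of France is Paris the capital of France is Paris")

print(predict(counts, "is"))  # -> "Paris": the second stage injected the fact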
I have been trying to teach "The Bard" not to say "I" when it refers to itself. I am also trying to get it to say "The Bard."
The RL training is happening very fast now that millions of people are voluntarily training it.
This has actually been happening for several years: OpenAI has been using large groups of beta testers, and it is a big reason why GPT-4 can answer a much wider range of questions.
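That feedback typically flows into a reward model trained on human preferences (the RLHF recipe). A toy sketch of just the reward-model step, with a made-up feedback log and a deliberately simplistic one-number feature; real systems embed the full text:

```python
import math, random

def feature(response):
    # Hypothetical stand-in feature: just the response length in words.
    return len(response.split())

# Hypothetical user feedback log: (preferred response, rejected response).
preferences = [
    ("it still takes five hours because drying happens in parallel",
     "it takes 30 hours"),
    ("all the pieces dry at the same time in the sun",
     "150 hours"),
]

w = 0.0   # single weight of the toy reward model r(x) = w * feature(x)
lr = 0.01
for _ in range(200):
    chosen, rejected = random.choice(preferences)
    # Bradley-Terry model: p(chosen beats rejected) = sigmoid(r_c - r_r).
    margin = w * (feature(chosen) - feature(rejected))
    p = 1 / (1 + math.exp(-margin))
    # Gradient ascent on the log-likelihood of the human preference.
    w += lr * (1 - p) * (feature(chosen) - feature(rejected))

print(f"learned weight: {w:.3f}")
```

The learned reward model is then what the RL step optimizes against; every thumbs-up from those millions of users is another preference pair.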
👍🏿
Human intelligence is never about annnn... hmm... eehhh... predicting the next word...
You have Geoff Hinton in the description by mistake. If this is helpful you can delete this comment once you've seen it.
thanks, fixed!
@PieterAbbeel Thanks so much for another amazing interview.