Here are a few tips from the interview:
1. Give AI a persona because it wants to slot into a role with you. If you don't tell it what that role should be, it may guess incorrectly or give you less than optimal responses.
2. You can give the AI recordings of yourself doing a task (e.g., a specific work assignment) and ask it to critique your performance, in order to learn things you could be doing more effectively.
3. Different models have different strengths and weaknesses. ChatGPT has the most helpful tone, but Claude is more literary and may be a better writer.
Happy cyborging 😊
Great interview, though I was troubled by the consistent description of LLMs as interactive personalities, for example the Sydney Bing descriptions: "subtle level of read by Sydney Bing", "ability to read the person", "the AI wants to make you happy", "what Sydney was revealing about itself". You're applying a lot of human-like complexity-and-mystery labels to what amounts to statistical complexity and mystery. The way we interact with LLMs induces this kind of interpretation, but I don't think it does us any good to reinforce the misconception.
I felt like they were very clear on that topic and the strong temptation to anthropomorphize LLMs. They stated that it's just math, but it's interesting how that math has created what appears to be a goal-seeking pattern, in which LLMs select a narrative plot/scenario to engage with the user.
Ethan Mollick suggested that thinking of AI as a human will prepare you to recognize its aptitudes and shortcomings, which, as it happens, is how I have learned to recognize when it’s given me wrong answers.
Interestingly, when I tell it that it’s given me incorrect, speculative, or unsupported answers, it always acknowledges its error and apologizes.
By comparison, I’ve never had a hammer apologize for hitting the wrong nail.
This is the exact podcast I needed right now.
I like the idea of an AI friend, because I go back to my Alan Watts recordings often to help calm my mind. I did this before the movie HER, but realized what I was doing after watching the movie. So there is something about that that is true. Also, I love to read fiction, but I prefer non-fiction in audio, and maybe that is everyone... but I find reading a good book still the most satisfying, and won't tolerate a poorly written book, though I could not tell you why I consider a book poorly written. I seem not to understand that, so I am curious about how I may discover more about that process, yet I don't want to ruin my experience in reading a good book either. So it remains to be seen for me. Maybe I am just slower, but on that note I love good podcasts in the same way... which is why I am here. Ezra, you're at the top of my podcast list. I love, absolutely love your interview podcasts and essays, and am now exploring more of your writings. :)
This was really good and relevant…hope you could do more of these, combining big picture themes with practical ideas…
Based on this, I plan to run my ChatGPT conversations, which have a lot of good info, through Claude 3 so that the end result will hopefully sound better to the reader. I like my conversations with ChatGPT and consider them a two-party brainstorming session. I know way more about the topic than I did before using ChatGPT.
Dang, so many ideas here. I'll have to listen again and take notes!
The statement about SparkNotes gives me this idea. It's five years in the future. Every person has their own personal AI that learns the person, starting in high school and staying with them throughout their subsequent lives, becoming almost a digital twin of that person's brain. Like having a human twin who finishes your sentences. The thing is, it feels to you that your digital twin is more intelligent than you!
If you're around 60 years old, you'll remember debates about whether students should be allowed to use calculators in math classes. In the 3rd grade, I recall learning the multiplication table. I struggled with 7, 9, and 12, but finally got through them to pass the test. Thankfully, because of that experience, I multiply mentally without thinking much about it (mostly in 3s and 5s, for calculating tips and adding, respectively).
Unlike other technologies, working with AI is very ungratifying. You feel no sense of accomplishment. It doesn't give you the rewarding feeling you get working your way through a problem with another person. It may deliver products and productivity but like many other technologies whether it improves our lives is an open question.
As always, I make the exact same arguments about every fabrication job most people like me have had for years.
I tend to agree, but it also depends on how you use it. If you figure out a clever way to utilize an AI tool, that can definitely feel like an accomplishment. It is just another tool in the toolbox, so whether it improves our lives or ends up being a meaningless output machine depends entirely on how you and I use it (or don't use it).
Good podcast!
Use it for everything and you’ll find its limits and abilities
I think we will be human, and that will save the day; if we are impatient, we will tend to consume information in a flash and take the shortcuts... and if we tend to be curious, more patient with the process, we will take the long way home. I think our election results show this too... and I don't think one is bad over the other, just different, with different results. There is a big difference between a person who is naive and someone who is dull and lacking in willingness to learn. In the long run, I see AI being used by the curious, and the dull ones being used up by it. The transfer of knowledge, much the same way we transfer wealth. Intellectual struggle, with no shortcuts, will have value.
I think people need to appreciate the fact that machines have, since the middle centuries of the last millennium, assisted human beings with the PSYCHOMOTOR aspect of writing. However, AI tech represents the first time in the history of writing that some of the COGNITIVE load of writing has been transferred to a machine. The thing is, this cognitive aspect of writing is inextricably linked to THINKING. Therefore, when scientists declare that they are working to make AI better or to "advance" the technology, what exactly does that mean? Does it mean that they are working to transfer ALL the cognitive load of writing to a machine? If that's the case, it means that they are working to transfer all the THINKING LOAD ASSOCIATED WITH WRITING to AI. So, they will actually be working to have machines "think" for us. Just remember that the ultimate form of controlling a human is to remove, to transfer, or to shift the responsibility for thinking FROM that person. That is how slaves are made...
I find it quite entertaining that nowadays the average AI model has more quirks and mental health issues than the average New Yorker, and yet somehow all the experts are 100% sure that when these models get bigger and stronger in the very near future, they will be perfectly safe, everything will be fine, and all the concerns about safety and security are from the Luddites.
I personally do not want my AI to have a personality; I just want it to be super helpful. Focus on more helpful things, like making the AI more proactive and making use of my data. Not a personality.
If you want AI to be useful, you have to build task-specific applications with AI using fine-tuned models, AI workflow builders, or agents. Don't worry though, by 2027 these things will be built into human-level AI.
It uses you.
Self Chernobyl
That’s silly, Ezra. For most of the time that you have been alive, you have had abundant access to software development tools and you have yet to learn how to use them. It isn’t your thing. And neither is AI. So ask yourself this: why does productivity have no place in your life?
💙
❤
The irony at the heart of this podcast (and I usually adore Ezra's shows and guests) is that we continuously fail to see that we've turned into a society that focuses on efficiency over the imagined, time-saving vs. time-savoring... how the tool shapes us... that this writer uses the tool for sticking transitional sentences together... I'm not averse to AI, nor am I a Luddite, and I am also a writer, but it just seems sad that we have fallen so in love with our toys and tools rather than the world outside... seven World Central Kitchen workers killed by a smart targeting munition, and here we are orgasming over the new AI intelligence we've invented... that we have become enamored and throbbingly in love with AI and focused on it more than the world we're overlooking is an irony that AI doesn't get, because the programmers don't get it... 'building a relationship'... I'm disappointed, Ezra... as a former NYTimes regional paper writer, I would have expected more... now back to reality... best, bb
It seems like all these kinds of dilemmas were discussed in this video. They weren't just mindlessly drooling over new tech at all.
I love Ezra and his podcast; it's part of my daily morning listening. This podcast was definitely not mindless. I suggest it simply failed to address a fundamental issue: it never addressed the metaphysics of the fact that we live in a world that mindlessly focuses on “efficiency” over something more fundamental. AI is inevitable, just as most are intractably addicted to their phones and media. I'd love for Ezra to interview Jaron Lanier on the topic. All I'm saying. Cheers, bb
On the contrary, this could in fact be the first intelligent species produced without orgasm.
😂😂😂 without MALE orgasm. Critical distinction. 😂😂😂🙏
It's accelerating at an exponential rate... Good. I don't have forever to live here, not without the help of future tech anyway.
All the problems people have in understanding AI are mostly because of dumb preconceptions. Preconceptions will be humanity's downfall.
is this a joke?
I don't trust that there is no AI writing in his book. How could he possibly prove that? How could he possibly prove that he isn't a complete charlatan who is only relying on the AI?
Having listened to many of Ezra's podcasts now, I'm continually surprised and impressed with the deep thought and different perspectives he brings to the table that I hadn't considered before. As a consumer of books, of podcasts, why would I even care if his insight is sharpened or made clearer through use of AI? It's benefitting me and my understanding of the topic at hand. I work in an industry where people are using AI all the time to make them more productive in their work. The "useful assistant" model is powerful. People who are experts in their field aren't going to have GenAI "do the work". They see where the gaps and flaws are. So they use it to augment their capabilities. In that sense, I've seen those users elevated to an even higher plane of expertise and productivity. I'd classify Ezra in that way. The dangerous users are those that don't know enough about their subject matter to say whether GenAI output is good or bad, and so just dogmatically copy/paste results. If I read a book from that person, I'm sure I'd be disappointed by it, maybe not even understanding why -- but something would be lacking.
These seem thoroughly unethical, and should be illegal.
oh boy
I think the AI safety researchers want it to get worse too, so that they wouldn't have been a useless role getting paid six figures.