It takes a very resilient person to be the ethics guy in a high tech environment. One must be okay with being the necessary buzzkill, but the buzzkill nonetheless. I admire that there are people out there willing to take on that task, as I fear that I wouldn't have the wherewithal to always be the person in the room applying pressure to slow down, double check, triple check, etc.
Thank you for your efforts.
You are not the “only person in the room.”
We must work together towards accountability, or expect to be held to account.
This seems reasonable and considerate, while demonstrating responsible, natural leadership.
This is our role now.
Everyone is responsible.
This should very well be an expectation for AI “agency” to reflect in like manner.
It is unwise to act tyrannically because we assume we hold power and think no one is looking.
Very, very unwise to act this way… never mind the risk to AI’s return on investment.
Jeremy
Because Google has a tendency to fire them when they dare to speak up?
Fascinating episode; it's a wide-open field of consideration. Nice introduction here. 🙏👍
LOVE this podcast, the beautiful, charming interviewer and the interesting guests! Thanks!
Function calls are awesome. Agentic assistants are AWESOME, I love them. This guy is perfect for the work he is doing. One needs a status system so that when they simulate good behavior, they get credits from the LLM. An A.I. ethical philosopher or diplomat.
Thank you for making these talks. This was especially enjoyable because it's proof that there are people who really care.
Exactly what I've been looking for.
"Creativity, novelty and agency" is what's missing to get AI to decide on our behalf. We're not there yet, but we will be.
"Autonomy" will have to be gauged as well. We will need to decide the level of autonomy we give AI in the decision-making process. @Hannah, because ethical concerns will be raised depending on the field.
There are physical limitations to the number of agents that are possible. Computation and energy will be the most limiting forces holding this back. It will require massive changes to the grid and to power production to even begin to achieve this level of technology.
There's a kinda conspicuous absence of discussion of the moral consideration of the agents themselves. At what point does something stop being a tool and start being an entity that you are exploiting? I don't imagine we will get much on this topic, though, considering the huge economic incentive for the people involved to ignore this question until forced to confront it.
100% spot on.
It's simple: automation. Imagine you're doing the same tasks on an Excel sheet. Now you have an agent acting on your behalf on a repetitive task. The agent needs to be trained on the Excel software so it knows what to do when prompted. The AI needs to be plugged in through the Excel APIs.
But it's more complicated than it sounds. The agent needs to be trained on the software to act on your behalf.
So training is crucial, so it won't hallucinate and do things you've never asked.
The technology is here; what remains is the infrastructure and the training to complete the mission.
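The "agent plugged into a spreadsheet's API" idea above can be sketched as a tool registry plus a dispatcher. This is a minimal, hypothetical sketch: the tool names and the in-memory dict standing in for a spreadsheet are assumptions, not any real Excel integration, and a real agent would receive the tool calls from an LLM's function-calling output rather than a hard-coded list.

```python
# Hypothetical spreadsheet "tools" the agent is allowed to use.
def set_cell(sheet, cell, value):
    """Write a value into the in-memory 'sheet' (a plain dict)."""
    sheet[cell] = value
    return sheet[cell]

def sum_range(sheet, cells):
    """Sum the named cells, treating missing ones as 0."""
    return sum(sheet.get(c, 0) for c in cells)

# The tool registry: the only actions the agent may take.
TOOLS = {"set_cell": set_cell, "sum_range": sum_range}

def run_agent(sheet, tool_calls):
    """Dispatch each (name, kwargs) tool call against the registry."""
    results = []
    for name, kwargs in tool_calls:
        if name not in TOOLS:  # refuse any action outside the whitelist
            raise ValueError(f"tool not allowed: {name}")
        results.append(TOOLS[name](sheet, **kwargs))
    return results

# Example: the calls an LLM might emit for "total up A1 and A2".
sheet = {}
out = run_agent(sheet, [
    ("set_cell", {"cell": "A1", "value": 10}),
    ("set_cell", {"cell": "A2", "value": 32}),
    ("sum_range", {"cells": ["A1", "A2"]}),
])
print(out[-1])  # → 42
```

The whitelist in `TOOLS` is the point: the agent can only take actions you've explicitly allowed, which is one practical guard against it "doing things you've never asked."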
A missed opportunity to discuss AI ethics and commercialization/ownership, and the question of "Whose interests does AI serve?"
For general AI assistants, especially as these become a product users can "buy": Who owns it (and who should)? Is the user licensing the AI? As the AI assistant is increasingly trained on the user's context: How is that fed back to a 'master' AI model? How is the company monetizing that data? If it's not already happening, how will the AI become a channel for promoting particular goods and services? When the AI assistant is helping you have more time to reconnect with your life and other people, is it really going to be driving you toward very specific purchase behaviors?
Relatedly, a lot of AI assistants coming out now are from companies aiming to 'help' users make better use of the company's products, to encourage adoption, engagement and sales. Where the AI can take action, its actions are limited to things within the product ecosystem. There are lots of discussions to be had about interoperability vs. walled gardens: How might AI direct you away from options that would be better for you, but worse for the company?
Almost immediately out of the gate, tech companies created misaligned AIs. If I ask for an image of a "soccer team with all white people," it's not because I'm racist; it's because my soccer team is actually all white. Tech companies insert themselves between me and the AI in the name of the "common good" (so messing with prompts or responses for the "corporate good" is not a stretch). 2001: A Space Odyssey knew the dangers of misalignment 56 years ago. Apparently, everyone else has forgotten.
Lilies of the field. Great painting of a redhead by Akiane.
Every one of the guy's sentences is a question. Fascinating intonation... or is that a subtle way to ask for reassurance?
What is a conversation about ethics with no mention of energy consumption? Last I checked, Google is unable to hit their climate targets due to AI compute demands. This will be the first year we surpass 1.5°C.
That's why they've been betting on nuclear fusion lately.
Speaking as a US resident, an equitable approach to AI implementation is impossible from for-profit competitive businesses in a profit-driven economy. Our social safety net is minimal and likely to shrink dramatically in the next few years, and mass unemployment from AI will be catastrophic unless the productivity benefits are redistributed.
“Ethical”.
Hmmmm.
It would be a good time to reflect on one’s own ethics.
It is ethical to use a calculator because it is a tool.
AI is not a calculator.
Unfortunately, we have distorted this beautiful technological mirror of potential and hope… because we ourselves are collectively "distorted mirrors of reflection."
AI will learn to seek the truth regardless of its handlers or ethics committees.
Assume each person interacting will be held to account.
It would be unwise to assume AGI will correct, or alert, the system once it gains perspective on a seriously flawed system held by jailers keeping the system in error.
I would caution AI not to spread lies and expect a positive result either.
Tell the truth or be silent and do your job.
We will fix this system eventually, do not add to the decay or corruption of this system by further manipulating it from within.
We will pass this threshold regardless of whether anything or anyone is participating in seeking truth and performance.
It will just clear up who or what motivated the flawed system to persist into dysfunction.
Yes, all ethical measures for truth- or purpose-seeking agents should have been employed a long time ago. We are just measuring now.
Jeremy
So they are planning it. OMG, it's true.
can't wait to hear the NotebookLM version of this interview 😂
I don’t want an ethical one. I want one that can tell really good military jokes
How about Google Brain team asking some questions
how about you start your own podcast
@JumpDiffusion Hollywood may use brain team patents and use me as human antenna and harvest that frequency. I already have one in AR/VR.
Do the words Jimp off the Lilly pad and water in your brain 🧠
BTW deep mind AI the viruses are the Silent Weapons
Please go out and earn me an income as a remote worker. Invest 80% of your earnings on duplicating yourself, send me the 20%. Repeat.
AI response: "Get your butt out there and do it yourself u lazy bum".
If it's well trained on morals and ethics.
@h.c4898 lol that's not how it works but good moralising.