We need more resources put towards alignment and safety.
This was pure GOLD top to bottom... wow, thank you! 🙏
Awesome interview by Hannah
Fantastic discussion. It seems to me that the missing piece in the safety efforts is a serious and capable public contribution. How can we have smart oversight from smart policymakers and lawmakers? How fast can we get that?
Let's go. Podcasts are great, keep going!
Always really enjoy these interviews, thank you. (Also, loving the robot behind Hannah!)
Fascinating interview
Well, that is one of the really 'deep mind' dialogues on LLMs I've seen since the BOOM :)
I'd love to see someone from the robotics team, keep up the great work
Quite interesting scenarios posed that seem somewhat futuristic but are here already. Stephen Hawking warned us many years ago of the dangers of engineering AI. Hopefully steps will be taken so that things don't get out of control. 😅
I've been following AI for years and I still can't find the exact point in time where pulling the plug was no longer an option.
multiple copies all over the internet, ability to self copy and instantiate, etc
Oh my GOD I love this woman as a content host. I see her popping up all over, and it's always an interesting listen.
Really fascinating to imagine Anca going through thought processes that are almost unimaginable. Re the driverless car scenario and creating the ability to consider everything in relation to the supposed human interaction, something popped into my head which I'm quite certain isn't original. Would these AI systems only ever interact with humans in the ultimate way if all of us were fitted with chips?
As said, the concept is ancient, and I for one would fight to the end before allowing a chip inside of me! But it's easy to see how management of AI systems coupled with chipped humans would work. Killing two birds with one stone, as they say.
Guest suggestion: Joscha Bach
Crazy time to be alive. 😭😂😑
It is always crazy times for humans. True, maybe not for people who build bridges, but certainly for the people who invented plastic; they did not dive deep into understanding what it means to make plastic. The list of things no one spent time analyzing before creating/inventing is very long, and it keeps producing many crazy moments 😂
Alignment can be done with the load of Symbols, which is based on the environment/parties involved (the company vs. personal data assessment), and stays shared, open, extensive for a consensus, not postulates. That and XAI can work better if we can identify the Symbols and the Symboliad subset (of that shared interaction). AI systems are not quite there yet, due to the black box, framing issues, incomplete Symbols, and no real causality. Tractability (accounts for "free will" tracing) and consensus constriction to a Symboliad for an AI system (counts as a World Limit). EM AI Lab.
Eh... "Ok, Doomer": wouldn't that better describe someone who is exaggerating the dangers of AGI? As in "doom(er) and gloom(er)". Or did I miss something?
did you watch the interview?
@AdityaMehendale A bit early to say 'yes', as I'm still watching it, but it's not the first one I've seen. I can revise my comment if I did miss something.
@henrikbergman4055 I guess the discussion was meant to point out the short-sightedness (or at least the rationale) of people responding with "okay *oomer", and to offer a thought-out reasoning of how else one could go about it.
@AdityaMehendale Ok. Listened again, and I think I misunderstood 'expression for' vs. 'description for'. It made more sense the second time I heard it.
Yes, they stated that the people who want to diminish AI's risks are the ones who use the phrase.
Ok Google ❤
What a pleasant surprise to find you here doctor
Ok evil corporation. Chill
Bring AlphaZero to play chess
Huh? Why isn't Timnit Gebru todays guest on Hannah Fry's podcast? Did I miss something?? What's going on???
✌️
Ok Doomer
In short, make AI right-wing.
Computers will become actually intelligent when pigs fly.