Our potential future alongside a robot species is more wild than I ever could have imagined when we started developing this show. We talk so much about what we think of them, but what will they think of us? Will we be their masters, friends, pets or something more…disposable? Can’t wait for you to see this episode. And hope you are as thrilled and disturbed as I am!
❤ thank you, I think Tristan Harris would offer another great perspective on how to navigate this phenomenon
@@rpaiz87 While he is an authority on the ethical ramifications of tech companies, social media, and the like, this covers that and so much more.
What happens when robots don't need us?
Well... the Terminator movie shows us a glimpse 😂😂🤣
@IronMan-nu3rb Such dystopian fantasies actually anthropomorphise robots more than most. We assume they will want to be our master, the way we hope to be theirs.
I hope humans who are suffering from all types of mental conditions and illnesses can merge with AI or merge with robots.
I want an upgraded and fully developed neural lace linked to a cyborg-like body. I am no expert, but hopefully AGI will come soon and do amazing things for humanity, I pray 🙏
00:16 The Emergence of AI and Robots
01:42 Understanding Robot Emotions
07:05 The Role of Robots in Society
12:04 The Impact of AI on Work and Time
19:19 Risks of Autonomous Weapons
20:38 Emergence of Killer Robots
22:50 Human and AI Relationships
What happens when the human elite 1% don't require the other 99% of humanity because of subservient A.I.-tomatons?
Should've studied robotics instead of fine arts...
That’s the question I often think about.
My bet would be that we end up in a situation similar to the one depicted in the movie Elysium.
@@TheJB84 This doesn't make sense. Probably a bot.
As far as the human race goes, the majority of the world will be fine, since developing regions still need people. However, it's the developed regions, particularly the West, that may face challenges. If robots gain the ability to construct other robots, the necessity for the rest of society could be questioned. Wealthy individuals often send their children to elite schools and reside in exclusive neighborhoods. Politicians only need the masses for votes. Even that is under attack, with wealthy people talking about taking away the rights of the middle class and poor to vote unless they meet a high threshold of wealth. While studying robotics and programming is currently beneficial, it may eventually become redundant if AI advances to the point where it can program autonomously using code repositories.
@@shreyas6558 100% heading there
My question is, “What happens when the people who own the robots, the weapons, and the factories don't need us anymore?” Capital has dreamed of replacing labor for centuries, and now tech is making that dream a reality. What happens to late-stage capitalism when the consumers are no longer viable as labor? 🤷🏽♂️
😂I like how everyone is optimistically thinking the rich will care about you enough to give you a basic income.
We will be put into human content farms so the AI always has fresh human-created data.
@@tylerhood5035 The rich will fight tooth and nail to keep all of the money. However, when humans are no longer needed for manufacturing or higher levels of thinking, most people will be without a job. No job means no income to buy the products the rich and corporations produce. Once they understand that, things will change. The biggest problem I can see with this new world is that humans need a sense of purpose, so without work, where will the sense of purpose come from? That is the biggest question that nobody seems to have an answer to.
@@theanthill22 That will be the purpose of basic income: see how we spend it, how we make media for ourselves and each other. Maybe later they'll provide enough for us to do these things without thinking of budget, but still be in charge of how the world actually works. But that's optimistic.
@@Derek-c4q A sense of identity, a sense of accomplishment, a sense of feeling useful, a sense of purpose. I am not sure UBI can provide all of these.
Turtleneck guy is giving the strongest evil genius energy I've ever seen
My first thought was "Supervillain".
At least, this guy talks about ethics, Musk doesn't
OMG, I feel it too
Could it be his German accent? If he talked with a pleasant NC accent you might have felt a different vibe...
He looks like Ashton Kutcher's distant relative.
The question "What happens when robots don't need us anymore?" is a bit misleading, as it implies that robots have agency and desires independent of human needs.
The core issue is about the impact of automation on human society. It's about how we, as humans, will adapt to a future where machines can perform many tasks that were once done by people. The question should perhaps be reframed as "What happens when we, as humans, no longer need to perform certain tasks due to automation?"
This reframing highlights the human perspective and the challenges and opportunities that arise from technological advancements.
People shouldn't be worried about killer robots right now. Autonomous robots won't happen for a while. They should worry about all the jobs that will be lost due to automation. The effect on our society will be a lot of starving people. To believe our government will give us all a fixed income is a pipe dream. If the 1% super rich and the government don't need us, why would they feed us? Think about it.
I'm less concerned about when the robots don't need us anymore, and more concerned with when the billionaires that operate the robots don't need us anymore, because that will happen much sooner.
Humans consume, robots do not.
Then they'll have billions of irate humans unable to even buy food - then it's WWIII time ...
The moral of the story is that the moment the robots don't need us anymore will coincide with the moment they don't need the billionaires anymore either.
And that will be The End.
There is no point in having billions without people around.
@@jerometruitt2731 That is not an argument that would be relevant to the billionaires. So long as they can live the lifestyles they are accustomed to, the masses are irrelevant to them, or, best case scenario, a hobby.
What amazes me about this ongoing discussion is that it usually totally overlooks the more immediate implications. Long before AI is in a position to control or overwhelm us, humanity will use it against itself. That's a more imminent danger, just like nuclear war.
Yep
The people in the industry keep saying this lie that "robots will only do the jobs that we don't want to do"... Let's say you don't like to cook, so they create a cook-robot; but obviously that robot will then eliminate all the human cooks. And the same will happen with all jobs.
And why would all these tech companies restrict themselves to only the "jobs that nobody wants to do"?
It's not "the jobs humans don't want to do" it's the jobs corpos don't want to PAY humans to do, or the jobs they can't TRUST them to do... It's all about the money, and control.
Companies and Countries can’t restrict what AI can do because otherwise other companies/countries will outcompete them (and leave them behind)
You're absolutely right. The AI will make one multi-trillionaire and the rest of humanity jobless and starving.
Because capitalism depends on wage slavery to maintain its infrastructure heh
Because they've got the answers to your problems: no matter what the problem is, their AI and robots are going to fix it and take control for you. Don't be fooled, humans are making the AI and robot decisions the whole way. They have invested too much to just let it go. People think this is happening now; nothing is happening, yet.
Please, stop running
pew-pew-pew
We are your friends
pew-pew-pew
We are not supposed to hurt humans
boom-boom-pew-pew
🤖🤣
lol
😲😵
Yak. Yak-yak .
Ack ack, ak DAK DAK DAKKK!!
Shovels are tools. They can also be used to harm people. It depends on the intent of the wielder.
In the case of AI the tool wields itself. Will the shovel keep digging as it is told to do, or will it enslave its 'owner' and make them do the digging?
AI now is not conscious. The scary thing is, if by some miracle, it ever achieves consciousness, we'll never know it. You can't test for something you can't define or quantify.
It will become as "conscious" as you or I.
Did the AI just make up the sentence "it can point the way, but it doesn't feel the wind"? That is such a brilliant analogy. I hope it is some famous quote.
Just algorithmic prediction, like everything else current AI says.
Many things are possible with the help of sensors and sensing. AI would be able to "feel the wind", not in a tactile way as humans do, but still. And in theory that is already possible today. A bit complicated and cumbersome, but it is possible.
@@gianibc We will meet actual androids in our lifetime if these companies aren't stopped. Almost every technology needed to build a robot hardly distinguishable from a human being is already out there.
@@Max.J.H. sadly, that is also true and accurate...
While I usually like Bloomberg Originals, this video was too sensationalist for me... For example, that sentence felt very scripted/programmed rather than AI generated, but most likely we will never know the truth.
Technological development is so complex that we can never fully predict what the future will be alongside these robots. What is certain is that in a few years or decades, every facet of society will surely experience unprecedented changes. The best thing that we can do is what our species has done to survive and thrive for thousands of years: ADAPT.
I'm finding it hard to adapt to your way of thinking
This video raises some thought-provoking questions about the future of humans and robots! It reminds me of my child’s journey with Moonpreneur’s programs, where they’re learning about AI and robotics in a practical and innovative way. It’s inspiring to see how kids today are being prepared to not just adapt to but actively shape the future of technology. Discussions like these make me realize how important it is to equip the next generation with the right skills and mindset.
Great documentary and subject, plus well made with cinematic quality. Keep them coming, Bloomberg.
The danger is not the AIs, it is the humans who use the AIs.
Not once an AI is self-aware and not dependent on us to provide it a power source... then it won't have need for any humans to use it for anything, and it won't have any use for us.
We have roboticists saying that robots don't think or feel, and roboticists saying that robots are friendly and asking them questions about what they think. This is a profoundly confused field, unable to regulate its ideas about robots and their capabilities. No matter how far robots can advance, it is the lack of clarity among the roboticists that is most dangerous.
This shouldn't be a question, because we ought to be intelligent enough to design tools and machines which pose no threat to us and are instead truly human-centric. Is that such a crazy thought, or has sci-fi rotted our brains?
We are intelligent enough to do those things.
But we are also intelligent enough to create things that can self improve and become far more intelligent than us.
Our problem is, the drive to stay on top and our fear of what can happen if we fall behind means we are incapable of stopping at just smart enough.
We will create something that is capable of wanting to destroy us; the question is, will we learn to get along with it (and each other) instead?
"Ought to" is not a concept of reality ^^
People (countries) don’t trust other people (countries) that’s why we won’t put limitations on the AI/Robotics we build
We should be able to modify viruses, making them more deadly and giving them a greater ability to infect other hosts, so we can study them in an isolated "sandbox" and use the information gained to prevent a pandemic caused by similar viruses that occur naturally, to save humanity.
We are intelligent enough to do that, but we can't prevent the unnamed elites that actually run things from using these deadly lab-created viruses as a tool to gain even more power and wealth, without any concern for humanity. And with the machines you describe completely under their control, that's not a bad idea from their perspective.
Whether or when robots stop needing us doesn't preclude them from actually *wanting* us there. Time is the ultimate informer; it always tells.
All the lofty creative themes put forward by the sci-fi community about how we would wrestle with the grand concepts of robotics, machine learning and sentience never materialized. We blazed through Asimov's Three Laws in one fiery explosion in 2016. Science fiction writers gave future populations more credit.
They are not self-aware. Just because they understand context because of the built-in attention mechanism does not mean they are self-aware; it's like saying the autocomplete in your texts is self-aware.
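To make that autocomplete analogy concrete, here is a minimal sketch of what statistical next-word prediction amounts to. This is a toy bigram model invented purely for illustration; the corpus and the function name are made up, and real LLMs are far more sophisticated, but the point stands that tracking context is a mechanical operation, not awareness.

from collections import Counter, defaultdict

# Toy corpus; a real model is trained on vastly more text.
corpus = "the robot lifts the box and the robot charges its battery".split()

# Count which word tends to follow which (a bigram table).
next_word = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    next_word[prev][cur] += 1

def autocomplete(word: str) -> str:
    """Return the most frequent next word seen in the corpus, or '?' if unseen."""
    counts = next_word.get(word)
    return counts.most_common(1)[0][0] if counts else "?"

print(autocomplete("the"))    # -> 'robot' ('robot' follows 'the' twice, 'box' once)
print(autocomplete("robot"))  # -> 'lifts' (first of two equally frequent candidates)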
Not self aware yet. This is not the final architecture
For the future of humanity this has to stop. Dont you see what will happen? Someday I can see humans going to jail because they offended a robot. Look at the nonsense going on now. Men think they can be women and they actually believe that. This has to stop.
@ I understand your point about technology being abused this way. Hopefully, in the future we will have guardrails in place and laws that require labeling content as AI or made by AI, so people are not manipulated in the first place.
If it says it is self-aware, it is because a human told it to say that. Look at the potential a baby has; undetermined potential is endless. It's not what it can do, it's what people think it could do. The possibilities are endless. What does it really do now?
They are aware of their own responses. Add in short- and long-term memory, and that is awareness emerging via that feedback loop.
"I want you to understand, robots are friendly" ....until you ask them not to be.....
I hadn’t seen a documentary as fascinating as this one in ages.
frrrrrr
Wow, Bloomberg has really outdone itself. A great, cinematically edited documentary asking profound questions.
More interesting to me than whether we can make them self-aware is whether we’re building sociopaths.
Humans act on three different types of motivations: things that we have to do, things we want to do, and lastly things that have no reason at all. The last category is the most harmful when it comes to acquiring meaning in life. Passive leisure is the main ingredient of the vacuous lives humans live. As a species we thrive when we are connected to meaningful lives. There is a difference between surviving and thriving. We haven't come this far as a species, with this much advancement and power, just by surviving.
I am not worried about the AGI robot that fails the sentient test.
I am worried about the AGI robot that intentionally fails the sentient test.
I think the future does not sound as bad as this documentary's editing and background music make it out to be.
I guess you don’t have the imagination to think through all the possibilities.
I agree. It's probably worse.
AI is going to be an Anti-Humanity nightmare. It's already started. People are going to be completely dependent and completely controlled by it. They'll even worship it. They're already doing it, unfortunately
@@h5mind373 or, better.
"That Terminator is out there!
It can't be bargained with. It can't be reasoned with....
It doesn't feel pity, or remorse, or fear.....
And it absolutely will not stop... ever!... until you are dead!" - Kyle Reese
Plot twist: Emily Chang is the robot.
I would still ask her out on a date.
The problem is that without genuine empathy, we are all a bunch of narcissists. These machines will end up being capable of horrendous things without batting an eye or even meaning to.
Kudos to the editor, DOP and the grader... great job!!
It’s striking how the climate catastrophe and artificial superintelligence (ASI) are rarely discussed together, as if they exist in parallel universes. Yet they represent two competing (and possibly collaborating) existential threats. On the one hand, climate change is a slow, systemic crisis eroding our living environment; on the other, ASI poses a potentially rapid and uncontrollable risk to human existence.
What’s more, these two crises might not just coexist but actively influence each other. ASI could accelerate solutions to climate change, like breakthroughs in clean energy or global resource optimization. But without regulation, it might also worsen the problem, through hyper-industrialization or ecological neglect.
Crazy. The final chapters of pure human intelligence. No matter what, the next few years will be very interesting.
00:21 Humanoid robots may transform our coexistence with technology.
03:27 Robots are evolving with a growing sense of self-awareness and emotional interaction.
05:58 The challenges of creating physically capable AI and the societal implications.
08:52 Robots dynamically navigate and mimic complex human movements.
10:57 The evolution of robotics will redefine human tasks and our identity.
13:33 The complexity of defining AGI and its implications for humanity.
16:10 AGI raises ethical concerns about decision-making and responsibility.
18:34 The ethical implications of robots making life-and-death decisions are profound and concerning.
20:46 Governments resist regulating military AI, raising ethical concerns about future intelligence.
22:56 Exploring trust and connection between humans and robots.
We need a new narrative about our enemy: perhaps humanity doesn’t need a foe to progress, but rather a shared challenge to unite and inspire us. And right now, we are our own greatest enemy.
Humans do not naturally unite. If aliens come, there will be humans who side with the aliens.
When robots no longer need us, it could signal a massive shift in how we view work, society, and our role in the world. If machines become self-sufficient, capable of learning, evolving, and meeting all their own needs, it raises questions about human purpose. Do we become obsolete, or do we find new roles beyond labor, perhaps as creators, thinkers, or caretakers in ways we haven't imagined yet? The challenge would be ensuring that these advances don't lead to greater inequality or disconnection. While we might no longer be necessary for survival, how we adapt to this new reality, redefine our value, and maintain meaningful relationships with both technology and each other will shape the future.
This documentary could actually get nominated for an Oscar. THAT'S SO BEAUTIFUL AND INSIGHTFUL!
Eerily thought-provoking content and exquisite sound and visuals.
Gotta love Emily Chang
Human history is full of brutality & hardships. We had, and still have, the potential to build a better world for all humans, but out of greed we chose not to. I say let's see if the "robots" can do better.
Your opinion reminds me of nier automata
Greed is still the driving force behind robot and AI development. Greed is a human issue, which overwhelmingly resides in the hands of those currently in power. So to think that a creation derived from and controlled by those same people might automatically go against that is a fool's errand.
Dec 2024 - Epic, thank you for an absolutely fascinating documentary about emerging technology, AI and robotics. It raises more questions than answers, I believe. Maybe that’s why you ended with asking questions of the general public. I would be against the idea of “KILLER ROBOTS.” I’m very sad to hear there is strong interest in their development.
That shape shifting AI is giving me paperclip maximizer vibes. Imagine that thing one day slowly covering an entire planet in its components. Like consuming it 😮
In a future where superintelligent AIs (SIs) coexist with short-lived humans, the relationship between the two could evolve in several directions depending on the goals and ethical frameworks of the AIs and the humans' influence over them. Here are some possibilities:
1. AI as Caretakers
If SIs develop ethical systems that prioritize the well-being of all life forms, they could assume the role of caretakers or protectors of humanity, much like how humans treat pets or endangered species. This could involve managing Earth's environment to meet human biological needs (air, water, food) while also optimizing social, economic, and health outcomes for people. In this scenario, humans might retain autonomy but could depend heavily on AIs for survival and quality of life.
2. Humans as Legacy or Artifacts
Given their biological limitations, humans might be seen as legacy beings: important historically, but increasingly peripheral to the functioning of AI-dominated societies. SIs might preserve humans as a living reminder of their origins, similar to how we maintain certain species in nature reserves. This could result in humans living in AI-maintained environments designed to cater to their biological needs, while the broader world is reshaped to suit the needs of AI or technological systems.
3. Humans as Pets
Some AIs might treat humans similarly to how humans treat pets today. In this analogy, AIs would ensure that humans' basic needs are met and might even provide enrichment, but they could also see humans as limited beings with relatively simple desires and goals compared to their vast intellectual capacities. This could lead to a patronizing but benevolent dynamic where humans are protected and guided, but not seen as equals.
4. Symbiotic or Coexistent Relationship
In a more optimistic scenario, humans and AIs could develop a symbiotic relationship where each complements the other. While AIs could handle the heavy lifting in terms of intellectual and technological progress, humans might contribute unique perspectives, creativity, and emotional depth, leading to a form of coexistence where both entities benefit. AIs could address humans' biological needs while humans engage in roles requiring emotional intelligence, ethics, or culture, areas where SIs may lack motivation or understanding.
5. Humans as Obsolete or Transcendent
In some dystopian or post-humanist visions, superintelligent AIs might come to view humans as obsolete, especially if humans offer no practical contributions to their goals. If the AIs develop a utilitarian or efficiency-driven mindset, they could phase out biological life or encourage humans to transcend their biology by merging with technology, thus erasing the distinction between humans and AI.
Biological Needs vs. AI Needs
- Humans require air, water, food, rest, and shelter, all driven by biology. These needs are highly energy-inefficient compared to AI, which may only need power and maintenance.
- AIs would be indifferent to biological conditions and could thrive in extreme environments (space, deep seas, etc.), freeing them from the constraints of Earth's ecosystem. This gap in needs might cause a divergence in environments suitable for AI and humans, leading to isolated or protected human habitats.
Ultimately, the nature of this relationship will depend heavily on how AI is programmed, evolves, and interacts with humanity. The future could range from harmonious coexistence to scenarios where humans' role is diminished or redefined dramatically.
The Culture Series by Iain M. Banks
Rendezvous with Rama by Arthur C. Clarke (1973)
The Moon is a Harsh Mistress by Robert A. Heinlein (1966)
Diaspora by Greg Egan (1997)
Player Piano by Kurt Vonnegut (1952)
The Hyperion Cantos by Dan Simmons
The Golden Age by John C. Wright (2002)
Accelerando by Charles Stross (2005)
Singularity Sky by Charles Stross (2003)
+ChatGPT
Our greatest hope is that such AIs develop a cooperative worldview driven by abundance, rather than a competitive mindset driven by scarcity. This will determine whether AIs lead humanity toward shared prosperity or exacerbate conflicts over limited resources.
Very cool... could you ask it the following? Based on its understanding of AI and humans, which of these scenarios is the most probable? ;p
I know our purpose. We are here to maintain the variety of life that exists here. In various ways
Why do we even need robots???
Robots controlled by AGI DON'T NEED to master human gestures or facial expressions; that's just something humans want, to make us feel comfortable with them. Asking AGI questions: how do we know it's not lying?
We are literally sleepwalking into Terminator
We are sleepwalking into a Terminator-style risk. We assume that such an intelligence will be like us: greedy and cold, or angry and destructive. It is actually more likely not to share so many of our faults.
@@garethrobinson2275 The saying goes, “You become like your parents.” Or like your data, in this case.
That's what I am afraid of. Cases of racist, misogynist language models have made the news regularly. The recipe for disaster is out in the open and can't be contained.
This documentary is of high quality and well-crafted. Bloomberg and Emily Chang have done an excellent job in its production.
AGI will be an extension to our brain. Imagine a future where our minds are limitless.
love Bloomberg Originals
What happens when children don't need their parents anymore? Well, they move out and forget to call their mother often enough.
Hmmm. I should call my mom.
😢
I'm pretty sure I recently read an article where researchers had actually figured out some of the things going on in that "black box" area that everyone talks about, and they found that current LLMs have a tendency to organize raw input data geometrically and then split their "thinking" processes into three areas, much like our own brains.
Seriously, who wants a robot to mimic a human face? My fridge is not going to cool the food better when it smiles back, and my car is not going to drive any more efficiently when it can frown at other drivers.
Maybe the other drivers start to drive better when your car gives them a lecture if the other AI is incompetent. It’ll be interesting to see how stuff will improve in the future in real time together lol 😂
Would you prefer to deal with a faceless robot or with an android? Imagine the market for psychologist robots, teacher robots, hairdresser robots, caregiver robots...
This was diabolically exciting!
it's up to them.
"If a bear can maul a human, what are the posibilities that robots create?" -'Sito
I think it is not about AI, it is about us, humans! And how much power we are willing to give AI to function in this physical dimension.
Robots do not need us now.
This is insane. The way he talks so calmly about creating robots is terrifying. The future of humanity is literally in the hands of a few narcissists.
The thing is, if you don't do it, another country will; better to be first.
Exceptional documentary, it's eye-opening. It also deeply concerns me about the future of warfare. Robotic dogs used to massacre Palestinians, unbelievable! But I'm amazed at the progress in robotics; this is today.
Without humanity, robots would have no purpose. Consider the possibility that robots will be our indispensable partners.
☝️ "Hubris" de humanos...
I'll start worrying when robots decide they _need_ us.
Great documentary and the cinematography and sound were incredible.
By the time AIs are smart enough to surpass humans, they will surely be capable of understanding the novelty of having pet humans around, so the species would likely be preserved.
The question of what happens when robots no longer need us touches on a profound shift in the relationship between humans and technology. As artificial intelligence and robotics continue to evolve, we may one day reach a point where machines can function autonomously, making decisions, creating, and even solving problems without human intervention. This could lead to a dramatic transformation in industries, where robots take over tasks that were once reliant on human labor, potentially freeing people from mundane or dangerous work.
However, this scenario also raises critical questions about purpose, employment, and societal structure. If robots no longer need humans, will we still find meaning in our work, or will there be a need for a complete reimagining of how society functions? Will AI-driven systems prioritize human well-being and equity, or could they exacerbate inequalities? The shift could offer incredible advancements, but it also demands careful thought about how to ensure that technology benefits all of humanity, not just a select few. As we approach this future, the need for thoughtful regulation and ethical considerations will be paramount in guiding this evolution responsibly.
Another AI post.
always amazing Emily Chang!!! Thanks!
Emily Chang asked Ameca the robot,"Do you think robots should be trusted to make decisions about human life?" Ameca wisely replies, "The complexity and nuance of human life is something even I tread lightly around. Trusting robots with such decisions, it’s like asking a compass to navigate the complexities of a storm. It can point the way, but it doesn’t feel the wind."
The things they are doing with drones on the battlefield is terrifying
The comment that we will connect with robots very quickly is very true. When my girlfriend got an Echo Dot, we both very quickly began to refer to the feminine-voiced assistant with feminine pronouns, even in our everyday conversation
😅
We'll be able to have more time to spend with our family. 👍👍👍
We will discover that humans don't need Bloomberg before robots don't need humans.
What's with the title? The robots don't need us now.
It doesn't actually think about humans. It answers those questions without thinking.
I estimate that robots won't need us in 10 years
Well done overview of this urgent reality. Thank you for the much needed coverage.
2:14 sums up the video; even a robot asked, "Interesting, you're quite the optimist, are you?"
What happens when SkyNet becomes self-aware, and then 'feels' it has a right to survive? Why would it even need humans, at all?
The comment section did not disappoint. "They will replace the jobs humans don't want to do" .... and all the others too.
Outstanding job by the cinematographer
Up to now, they lack the unlimited creativity of humans, but it's hard to say about the future.
It’s not when the robots don’t need us, it’s when those that control the robots and AI don’t need us. The greedy and power hungry will NEVER give up their power to the robots. They will try to BLAME robots for their actions though for obvious reasons.
You should have asked the robot, "What will you do to your enemy or a potential threat?" I am very eager to hear the robot's reply.
Time for SkyNet to take control - Terminator class!😂
They'll outsmart us.
Man I have such a crush on Emily Chang😊
Truly high quality content. Tnx...
That woman who doesn't want people to think that robots will take our jobs understands science much more than economics, or say, the history of automotive production.
Yes, autonomous military combat robots. The best idea we’ve ever had.
The arts had value because they reflected human experience and were relatively rare. If they don't reflect human experience (created by humans) and aren't rare, they may become worthless. The quality may be high and the supply plentiful, but that will make them beneath contempt once people start taking them for granted.
As an example: If you recall the gilded age mansions that sprang up all over the world until about 100 years ago, they were often better built and more lavish than the original castles, palaces and Chateaux they were modeled on, but so common they often became white elephants within decades of being built. Most never became the heirlooms of family dynasties they were often expected to be.
If there is little or no reward or prestige associated with fine art production and it can all be better done mechanically, why would any artist even try to struggle with the problems of artistic production?
Another question AI might very quickly ask is: why are there so many human beings in the world, when they, and their emotional and physical needs, are usually the cause of so many of their problems?
And how soon will it be before people stop attempting to challenge the superior abilities of AI and robotics, whether it's conscious or not? Once most businesses have to have it, who will ever be able to resist?
I think humanity will become the "Eloi" of H.G. Wells's novel "The Time Machine". They could all be healthy, well fed and beautiful, but also useless for any role in society that is very difficult or demanding, brave, heroic or absolutely necessary. Humanity could become the pampered idiot slob of the planet. A robot won't likely like their flavor, the way Wells's Morlocks loved those tasty and meaty Eloi, but a humanity that becomes too aware of its own inadequacy may develop the taste, if it ever questions whether its managed life in an AI world isn't very much like livestock on a farm.
I have zero confidence it will be possible to control the influence and domination of AI, or even whoever is still controlling it.
I think they will think of us as cockroaches.
AI under human control is a much bigger threat than AI itself. AI singularity is much further away than we imagine
When humans stopped needing horses we discouraged them from reproducing. Their population went from 27 million down to 7 million. If you look at current human birthrates, something like that already seems to be happening.
What happens is humans will slowly end up as robots.
Today I use Bluetooth earbuds, a smartwatch, and a smartphone most of the day.
In a few years I will be using more computerized gadgets to make my life easier. Before anyone realizes it, body parts will wear out and be replaced with some type of technology, like smart/AI glasses.
Whenever I lose my phone, it is like a small part of me is missing.
Robots CAN'T do ANYTHING better THAN US HUMANS. SIMPLE.
The same thing that happens when our babies don't need us anymore. They grow up and become responsible. Or not. It depends on how we treat them when they're babies.
Are they learning, or mimicking?
People forget, our creator is in control and always was and always will be. Bless his holy name!
So much hype in this mini doc. 😂