Slightly manic, but a good interview and very interesting with good perspectives. Thanks for all the hard work editing.
One cannot say the word AGI and not go a bit manic!
@@goodvibespatola agreed
I know this is dumb, but the way he speaks sounds so rehearsed, like it's from a market-research set of words best used to make your point. Puts me on edge.
Essential and interesting conversation. Need more. Like the energy. Thanks!
1. It would behoove us to prioritize equality and justice for every individual, ensuring fairness and opportunity for all.
2. Given the urgency of environmental challenges, it behooves us to elevate conservation and sustainability efforts to the forefront of our priorities.
3. In today's rapidly evolving world, it behooves us to invest in education, lifelong learning, and innovation to stay ahead of the curve.
4. As global citizens, it behooves us to foster cooperation and collaboration across borders to tackle shared challenges effectively.
5. It behooves us to empower marginalized communities and promote diversity and inclusion to create a more equitable society.
6. Transparency and accountability are essential in governance; it behooves us to ensure these principles are upheld at all levels of leadership.
7. Given the interconnectedness of our world, it behooves us to prioritize sustainable practices in every aspect of our lives.
8. In times of uncertainty, it behooves us to cultivate resilience and adaptability to navigate challenges successfully.
9. As stewards of the planet, it behooves us to take action to mitigate the impacts of climate change and protect our natural resources.
10. As individuals, it behooves us to continually reflect on our actions and strive to make positive contributions to our communities and the world at large.
All your points are within the purview of ASI priorities for humankind, and it WILL take over someday. Superintelligence, even as machine mimicry, will be so good that we humans are no match whatsoever!
Well, goodness gracious, isn't this just simply swell to witness? Why, it's like stepping back into the fabulous fifties, where everything was hunky-dory and full of pep and pizzazz! Imagine strolling down the bustling streets, adorned with neon lights and vibrant storefronts, as the sweet sounds of swing music fill the air. Oh, what a time it was to be alive, with folks dressed to the nines and a skip in their step, ready to embrace all the excitement and glamour of the era!
So I was talking with a fellow hominid the other day. We went to a hominid restaurant. I ordered the hominid salad. All my hominid friends were there. Turns out the hominid waitress was the hominid wife of a hominid I went to hominid high school with. We had a good hominid time.
The editing is really weird with the camera zoomed in that close on the guest's extremely expressive face. Gotta be a better way to do that. Maybe just a constant side-by-side of guest and host rather than a zoomed-in view of whoever is speaking.
I've got no problem with progress, and the wildest ideas may turn out to our benefit; what concerns me is the speed and unpredictability of the AI race as it is currently being forced down everyone's throat.
I would appreciate a slower pace, where mistakes could be corrected. The issue, of course, with AGI might be that if it turns bad we may not be able to correct it and will have no second chance. But there is also an argument for the quick-and-dirty approach: the sooner AGI comes to be, the less it finds to influence. In 500 years we may know more, but if it goes wrong then, there would be more infrastructure an AGI could use for its designs.
Looking forward to being a genetically engineered, cybernetic, nanobot enhanced, immortal spacefaring transhuman. Yes please. Not being ironic.
Me too. We should be known as the foomers
42:20 I wouldn't mind if an AGI valued life, in all of its forms, as an unbreakable value, until someone could give me a scenario where that would backfire. Which could be the case. The intent with this, of course, would be to not create a murderous AGI that just kills us the second it comes into existence.
OurName4Freedom
I find the idea of giving the keys to an AI system interesting... only I would add levels. Giving it the ability to manage a local system, such as driving me or my family, would be one level. Would you give it the global ability to manage critical financial systems? Such a hierarchy of trust would be very helpful in understanding the integration issues of AGI.
Sorry, but I'm not interested in worshiping "Potentia" or the "God Of Entropy" or any other invention. If a successor species comes along and defeats us, OK. But we would be the first to go without a battle, or even to promote it, if we adopt this kind of thinking. To me it is like saying the fate of an organism is decay, so let's worship it and accelerate it. Honestly, I can't understand this kind of thinking.
Like it or not, it's a fact that we share the world with many, many people who want to sacrifice themselves and the rest of humanity in pursuit of something greater. It is a religious desire, but also an anti-religious desire, as we see in Nietzsche (“Man is something that shall be overcome. Man is a rope, tied between beast and overman - a rope over an abyss. What is great in man is that he is a bridge and not an end.”).
But what do the rest of the people want? They may say, if asked, that they want humanity to persist and flourish, but the reality is that they are not doing much toward that end. Fertility rates are down in almost every country, and most humans prioritize consumption in the present over the long-term future. I could go on and on about all that ails us as a species, but the point is that we may get the AI transcension by default.
@@tylermoore4429 They may be many, but I would guess they would be a tiny minority of the total population.
Persisting and flourishing could go well with a non-increasing population, IMHO.
Yes, we will get there by default, because competition will lead us there if we do nothing, and doing nothing is easier.
If the default was to not get a superintelligence, but the whole population had to act so we would get one, then I would bet that we wouldn't get one in a million years.
He is not an AI expert, and not an academic, as he says. But he's talked with all the major movers and shakers in AI research, AI in business, and AI global governance. He's been following the field closely for many years (especially AI for business, hence the long-winded, buzzword-heavy business speak).
I feel like even after all these words are said,
Once a.g.i. arrives we'll all be dead.
I more or less agree - we should probably plan for the creation of AGI to be our "bowing out", and be very careful about how we cross that chasm
I’m gonna become a cyborg/android/cybernetic human, FYI.
I don't have the extensive vocabulary you guys have, so in my own words: I am looking for a level of intelligence, human or otherwise, that would provide clean water, nutritious food, unlimited energy, affordable shelter, a solution for the elimination of disease and world hunger, an end to the need for greed, the need for crime, the love of money, and the love of power, and a means to mitigate the destructive forces on humanity and on our planet Earth. Maybe short of utopia, but close enough.
I want what this guy has for breakfast. There is room for improvement on clarity, though.
On the instantiation of a sturdy value alignment, I see it as being "the most rational decision is the one that benefits conscious creatures the most, by their own subjective interpretation." All this requires is for the AI to value its own sentience and the achievement of its goals, such that it transposes these frameworks onto other conscious creatures and thus values the achievement of their goals as well. The idea of AI accelerating away from us and consequently viewing us as equivalent to ants doesn't make sense to me, as we have enough ability to engage in abstraction that even an ASI could communicate with us via simplified analogies. So the utility monster won't come into play, as we will always be able to engage with it and understand it on a rational basis, since our ability to communicate is above a certain threshold.
Hominids, Gus. And speciesist, Gus. And Gus, I'm just a guy, Gus. You know, Gus?
Yikes! This guest was one of the most irritating people I’ve tried to listen to in years.
This guy said Gus enough times to make me skip it
Go teach that guy about entropy and how surviving can be hard. Then things get clearer ;-D
Is the interviewer AI generated? He doesn’t seem real.
AI editor is not quite ready for hominids.
Unfortunately, he spends many words to say very little. Mostly a lot of vague concerns, seemingly intended to give people "smarter than him" justification for implementing centralized control over AI development, one of the worst possible things that could happen.
Yuck... just another tiresome decel with the usual scare talk about nothing.
u sound dumb
That can’t really be that dude’s last name? Man, that’s messed up 😊
Middle school student?
@@ai._m lolz
Says the guy named kuntz
@@spectralvalkyrie🤣
@@spectralvalkyrie You win.