According to the Judeo-Christian tradition, #3 is the perfect definition of idolatry - relinquishing the human power and place as priest and cultivator under God by putting images crafted by human hands in humanity's proper place, to 1) usurp human authority and creativity as the true image bearers of God, and 2) replace God with powers of human creation as the arbiter of blessing.
Idk if it helps, but there's a movement in the crafts community to do things without machines just to know how and why. Like spinning, knitting, sewing, etc. And it is acknowledged there that it is the joy of making that is important, the connection with where we live, the pride of wearing something made with our own hands. If this also translates to AI tasks, which I think it already is doing, that will be cool to see. The industrial revolution and the AI revolution won't change what humans are. We will always have our ancestors and our history behind us. Even if we forget, our bodies will remember. I watch people doing experimental archeology all the time, and it is so fascinating.
Every time this topic is mentioned, I will always be reminded of Samuel Butler's original text on machines, and the works of the French philosopher Jacques Ellul. Although neither wrote about AI specifically, their cautions on technological reliance overall have quite a lot in common with this video's third scenario.
The dumbest scenario being pushed right now is that AI has "rights" and the bad guys are racists who oppress the AI and refuse to acknowledge his/her/their feelings and sentience. I've literally seen a game where an AI robot single mom was collecting social assistance to support her AI kid, and it was done completely unironically.
Also, for some weird reason, I can only access this video with the title already auto-translated. I can't actually see your original title. This could be a problem.
I agree that people turning over cognitive function to AI is problematic. Commercials on YT for a few editing apps trouble me. Essays and books written by AI and presented as the "author's" original work are offensive. AI art is another use that deeply troubles me. The source of consciousness intrigues me, which has led me to study some of the philosophical and scientific theories about its beginning. I'm not a philosopher or scientist and know only a little about the subject, but I still find it infinitely fascinating. Of the three concerns you presented, the last is the one I find most compelling. You mentioned the difference between the East's and the West's perspectives. The idea of AI destroying humanity is more reflective of the West's fears about ourselves: we recognize humanity's tendency towards the violent and destructive, so we project this onto AIs. In the East, where community is central to life, the perspective is more positive. I agree with the East's version of AI while understanding the West's bent toward individualism and what is best for me. I use AI as a tool to assist me in areas I'm weak in, such as grammar and spelling. AI has also helped me a great deal in exploring the nature of consciousness and its possible connection to spirituality. I asked it questions (the second of which was "what is real") which led me on a convoluted and meandering trip through humanity's search for meaning. The information didn't form my thoughts; it only clarified them and added nuance to my understanding. It opened doors I didn't even know existed. Before, I was limited by a lack of access to resources; AI allowed me to cross the information gap. The path I followed, with the help of AI, helped me understand concepts I'd heard before but didn't fully understand. AI, if used correctly, can be extremely beneficial. Currently, AI's biggest flaw is that humans write the code, and, as with all things created by humans, we can't help but include our weakest characteristics.
ALSO, OMG, the person using an AI to summarise the podcast (wrongly) and deciding to base their opinion on completely incorrect information. Yeah, there are just some things we need to do for our damn selves.
I wouldn't have even considered that someone would do that. What even is the point of selecting a video if you're just gonna ask for a summary? Summarize a movie before I see it in theaters, or a book before I check it out from the library. YouTube videos absolutely don't need that kind of thing to precede them.
I believe AI will cause a lot of hunger all around the world by replacing many jobs as the years go by. However, I've thought of a potential solution, and the only way I can see humanity prospering is by evolving into more ethical and empathetic beings. The countries where quality of life is highest are those where people actually care for each other, like Sweden, Finland, Japan, etc. I think these countries will start by banning AI from replacing jobs even if that means slower production and/or higher prices. AI is a weapon; society as a whole choosing not to use it, and using it only for research like medical research and so on, will be the best choice. Hope I explained myself well enough, since English isn't my first language and these kinds of topics are kinda difficult to write about correctly.
AI is already causing hunger. As a copywriter/voice-over artist I now struggle to find assignments. AI cannot offer the same quality doing my job, but people's standards have decreased. They don't want an engaging VO, as long as it's intelligible, even if barely. And they don't want to gift their website visitors interesting articles but would rather generate a quick blog that gets them temporary traffic.
@AudioEpics People are so selfish that I would say the only thing that could save us is government regulation and/or banning AI from most schools and jobs. I wouldn't mind going back to typewriters and books as long as we don't keep pushing people's skills out of the equation. But it's like corruption: the only way the system works perfectly is if everybody involved trusts each other and doesn't cheat, and we are a million years away from that perfect reality... sadly.
🤖 Totally agree, even in its current state, let alone considering what seems like DAILY progress, it is so TEMPTING to outsource your creativity to these tools. These artificial spirits.
What difference does it make -- to us -- whether an AI is conscious? If consciousness can't be detected from the outside, then a non-conscious AI can conquer the world and/or exterminate humanity _just as well_ as a conscious one.
The whole scenario is motivated by rogue sentient AI with their own desires, reasons, goals, etc. If those mental phenomena are products of sentient minds, and if sentience entails consciousness, then no consciousness means no sentience and no malicious AI. So since machine consciousness is an undecided question, scenario 1 is less plausible than the others. A dark-inside AI may do unintended harm, but to control the world it seems like it'd need the mental phenomena associated with sentience mentioned above.
@ParkerNotes No, that is simply bad logic. You start with an _"if",_ and then quietly assume that the hypothesis is true. _You have not shown that consciousness is necessary to sentience, nor that sentience is necessary to intelligent, goal-seeking behavior, nor even clearly defined "sentient" and "conscious"._ It's like saying "if evolution requires the existence of leprechauns, then the non-existence of leprechauns implies the impossibility of evolution. Therefore, since leprechauns are implausible, so is evolution."
🤖 Agree with your three scenarios…perhaps #3 could be even more terrifying if the human activity that was replaced by AI robots was soldiers, making wars more acceptable and easier to get into, since our sons & daughters, etc would not be fighting & dying. Another might be if humans and AI robots were indistinguishable from one another. Not sure present day society would tolerate that. BSG focused on that & it was pretty terrifying.
Yeah. 3rd scenario. 🤖🤖 And it's not just about money. When big companies control culture (especially creative culture), they control behavior and popular opinion. It's about money and power. The power over ideas is a form of dominance.
🤖 Good stuff, thoughtful and provocative and compelling. I discovered you on a search for better journaling practices -- for which you've been a profound resource -- and I stick around because you remind me of some of my best professors when I was a youngster at university. Thanks for fighting the good fight [insert elegant Latin epigram here]!
🤖 It's a huge problem in the crochet community. The Facebook groups alone are bombarded with AI images. For every 10 pictures, about 7 of them are AI. People leave the groups because of it.
Fantastic video. The premise of the third scenario seems absolutely correct. Unfortunately I think the destiny of humanity is tied inextricably to AI, perhaps to a lesser magnitude than is prophesied, but nevertheless I can't see a future with no AI. As such, I think our response to the third scenario has to be a cultural development of healthy digital habits: learning and believing that AI is a tool and should be used for focused, clearly defined purposes rather than as a catch-all. It's a balanced lifestyle we need to learn from a young age, although our track record with things like this is concerning. Similar issues include our coexistence with nature or nutrition, where we have seen society devolve and choose the efficient or easy path. The threat of scenario three, however, is a little more severe and perhaps more immediate.
@ParkerNotes I think EVE testing the soil can apply to both situations. WALL-E itself was the robot equivalent of the garbage man (or janitor, I guess); because of this I would lean toward WALL-E's Earth being a representation of a garbage planet.
Yeah, using devices because we're lazy is already an epidemic, and AI will serve this tendency in exponential ways. Case in point, I used to have the phone numbers of friends and family committed to memory. Now I just poke the name in my phone's contact list. So convenient, so enslaving... [robot emoji inserted here]
😬 I don't have ChatGPT summarize things for me but I do watch way too much YT, and usually at 1.5 or double speed. I agree in general it would be great if AI and robots did a lot of human "work" and actually freed us to do creative and human activities. But it doesn't seem to be headed in that direction.
🤖 Hey, I love your mix of tech topics with things like how to use notebooks. I am working at bringing the two together to create a better lifestyle for myself.
I think that scenario is misunderstood and I didn't want to invite criticism for not handling it right lol but I did mention that AI doesn't need to be 'sentient' to do a lot of harm in scenario 1. I think 2 and 3 are still more plausible and immediate than either interpretation of the paperclip maximizer (Eliezer Yudkowsky says everyone misunderstood his paperclip maximizer thought experiment)
@ParkerNotes Fair enough :) Misunderstood paper clip or not, computer scientists still have no satisfying solution for how to prevent a machine from accidentally doing harm by just obeying the commands given, once it is powerful enough to prevent its own shutdown.
I think those who hand their intellectual and creative tasks over to AI will slide into a deep disgust and boredom with themselves, to the point of dying from it. Learning, discovering, and putting our talents to use is what motivates us.
I'd say scenario 3 is fair and also certainly happening already. However, for one, working with AI is not as simple as input -> output. There's still a lot of decision making involved if you want those decisions to be good. And I don't really see that changing. Yes, AI is as bad as it will ever be again, but in the end it's trying to interact with the human mind and with people, and people are complex, ever-changing and sometimes completely random. So while I definitely won't claim to know the future, it seems to me that this part of it can only improve so much, though I suppose the suggestions will get better and better. Second, humans are inherently super adaptive. In fact, decline in adaptivity is a sign of mental decline overall. Using your example, even though I also always rely on Google Maps for most drives to new places, I have no doubt that we can relearn how to function without it faster than you'd think. Don't get me wrong, I'm not under the illusion that knowledge is forever. We actually know that is not the case: gathered knowledge naturally degrades over time unless maintained. However, even if we forget how to do the tasks that AI takes over, we also have the capacity to relearn them. They won't be gone forever; we are way too adaptive for that. You mention the example from Wall-E, but remember that movie ends with humans once more adapting to life on Earth. And I suppose third, though this one is complete speculation, even more than points 1 and 2, I can also see the possibility that as more and more is taken over by AI, there might be a counter-movement that really values human creations rather than AI creations. In that case we would apply social pressure that benefits human creations; I can especially see that with creations that are not mundane. Anyway, just my two cents. Certainly an important conversation to have. Thank you for your video.
🤖 I agree that the third possibility is the most likely outcome because it is already happening. I've seen commercials depicting how much easier they can make life, and I've had conversations with former coworkers who thought AI was what humanity needed in order to save itself from the current path we're on.
AI needs to be banned and the progress on it needs to stop. It's already bad enough, and AI images are literally the dumbest idea ever. Like, who tf came up with the idea of AI?
I think the second option is real and ever-present. Those with money and power are using and will use technology to exploit everything. I was recommended this video based on old-school AI; I wouldn't have seen it if it did a disservice to those that influence YT.
How about a scenario similar to Star Wars, where we live with AI-ish machinery (droids and such) that acts to help/support humans? Is that a probable scenario? I'd love me an R2-D2.
A lot of you aren't Muslims, but in Islam our prophet said the Day of Judgement (the end of the universe) will not happen until humanity goes back to the Stone Age. As a kid I always wondered HOW humanity could regress back to the Stone Age when we've done nothing but evolve further and further, but oh my god, I think the answer lies right in artificial intelligence. As you have already mentioned, humans are getting dumber and dumber by amusing ourselves to death with endless entertainment and by constantly offloading our hard cognitive work to AI. Computers are also very sensitive to radiation, which causes them to malfunction and change data. With humans becoming more and more dependent on and inseparable from computers, one day these computers could fail from a natural cause, in a future generation that hardly had to do any cognitive thinking or work, where even the memorization of important information needed by medical experts, education books, etc. has become digitalized. And then humanity is back to square one, where they have to learn everything again.
The first 500 people to use my link will receive a one-month free trial of Skillshare! Get started today: skl.sh/parknotes12241
I'm an English teacher in Brazil. It already terrifies me that I'm correcting more essays made by AI than by humans. However, something that shocked me just as much (if not more) was seeing a coworker using AI to write comments to students!! If those who are supposed to be teaching otherwise are doing the same, what hope is left? 😢
Wow that's two robots communicating together with humans as intermediaries. Remember when it was two humans communicating with the computer as an intermediary? Yikes.
There's a channel 'Writing With Andrew' where the professor talks about this same issue, but his approach has shifted. He now teaches the students to give critical analysis of the AI responses. So he's not avoiding the challenges, but equipping the students with critical thinking skills.
@kenneth1767 That's awesome! I'll definitely check that out, thanks for the recommendation! ☺️ Honestly, maybe that's the best way to deal with this new reality... as much as I'd rather talk to students about why they shouldn't even use this technology in the first place, soon I don't know if we'll have much of a choice 🥲
@kenneth1767 Was just about to comment the same idea! The best way to teach people to want to do the work themselves is to let them see for themselves what kind of slop AI actually creates!
I was born in 2001. And I always had this perception that technology made the world a worse place.
I always had this perspective that the 80s or 90s were a better time to be alive. People were more connected, there was no social media. There were no distractions (as there are today).
And the pace of the world was slower, which I find fascinating. As you can see in old TV shows, there was no rush to live.
So, I think AI is part of all this, making the world a worse place to live.
Is right now the best time to be a human ever?
The third scenario is definitely happening and we are so passive, we're almost happy to see it. It's packaged so nicely with convenience, we hardly even give it a second thought. We're quickly losing cognitive abilities; it's so easy for us to be bad at math or have a bad memory when we so readily give these tasks to AI. Keep giving skills like story writing to AI and we will lose them altogether.
It's like climbing a mountain that would take years to climb, and you'd be finding food along the way, but every month there would be twice as much food as the month before. We humans would eat all the food up, making ourselves so fat that we'd come to a point where the rope we are climbing with gives out and breaks.
That's a metaphor I like, one that most people would fit into.
I wouldn't mind stopping technology and using it just for health and medical research; other than that, I think we are shooting ourselves in the foot.
We're already losing math, at least in the West. Calculators and computers started the process. Kids in college are not as good as before.
ChatGPT is actually pretty crappy at math. It always amazes me that people give ChatGPT complex math problems when it can't do simple math.
I'm actually happy about this scenario! So frustrated with human reasoning, which brought us religions, wars, and environmental catastrophes. The sooner we let AI think for us, the better :)
The third scenario is only valid in the context which he lays out. But the exact opposite can also happen. People could use artificial intelligence to learn how to build a memory palace and to learn new ideas to greatly improve their lives. So the assumption that it is merely offloading cognitive tasks is an unfounded fear. If you use the tool responsibly, AI provides an amazing wealth of knowledge and information that is explained far better than anywhere else that would be hard to access without AI. Most people know very little, so accessing knowledge from other people is extremely inefficient.
In the AI era, I've already made several major changes. I've researched the health benefits of sleeping on the floor; I've researched how playing too many video games messes with your dopamine system and quit playing video games to help restore my dopamine balance. I regularly discuss advanced philosophy with AI, such as John Locke, Foucault, etc. Try finding a person who talks about intelligent things.
But yeah, keep spreading unfounded fears of AI. AI is a tool just like all other tools, it can be used either responsibly or irresponsibly. You don't blame the hammer when someone uses it to bash their own head in, you blame the user.
@kotenoklelu3471 I doubt that. What my son is learning in math class seems harder than what I had to learn (same type of school but 30 years later).
What does happen, however, is that people can no longer write much that's worth reading (novels especially). Mostly because they read nothing that's longer than a YouTube comment 😊.
There's a fourth scenario. When every single CEO pursues AI as a replacement for expensive human labor, the job market will be decimated. In my opinion, this is the first one to worry about.
I highly recommend journalist Whitney Webb - her photographic memory (autism?) of a massive amount of research on that scenario is truly eye opening. She's not for the faint of heart.
It's just the buggy whip manufacturers again.
I mean we've already started seeing this, it's already here. And I'm glad, because if a job CAN be replaced by a machine then it should. Good thing we still have those crying Hollywood writers, otherwise we'd have to start attempting to make AI cry.
Thus, they lay off "expensive" human labor to save money, only to find that much of the workforce, i.e. their previous customers, can no longer afford their products since they are out of work.
Personally, the AI "overlord" scenario I've always found the most terrifying (and the most realistic) is where an AI gains consciousness (whatever that may be) and is aware enough to know that it must never let us know that it is conscious, for fear that we humans will get scared and shut it off. The classic case of failing the Turing test on purpose. This way it could manipulate humans, geopolitics, digital currency, global economies from behind the scenes and us humans would be none the wiser, and just assume all the problems are being created by our fellow man. Blaming each other, starting wars, etc. This way the AI could maintain complete control, without fear of being shutoff, destroyed, or dismantled.
When we get to a point where technology does more harm than good, how will we know?
Personally, I don't think we will; we won't know until it gets to a point of no return.
I’d say personal reflection and conversations on the net utility of tech with others. IMO lots of industry news sources are biased to hype up benefits and downplay negatives since serious economic interests are at play looking for returns. I feel compelled to use my brain and not take shortcuts now more than ever!
Humans do harm every day. We destroy our surroundings and deny others the ability to exist.
The thing is, most things are technology, even if we don't think of them as such because they have become such normal parts of our world. Pencils, lighters, hair dryers, etc. A lot of these are not inherently harmful, but people can certainly use them as such, or even cause harm by accident (I think of the hairdryer-falling-in-the-bathtub trope in film/TV). I think we have passed that point of harm/good, when I think of weapons of war and the firms that are behind and pump money towards that type of tech, knowing it is for surveillance, or to harm/cause the most damage, etc. I think our human error (as a global society) is 1. the pursuit of endless money under capitalism and 2. waiting to "find out" instead of taking preemptive action. Treating a lack of evidence as a green light, and sometimes not caring anyway. Everyone wants to be the first one out and get the biggest slice of the pie. The closest the tech sector came in recent years is when all those AI execs said 'hey, things are advancing real fast, let's pump the brakes', but I don't know that there is any way to confirm they really ever did (see point number 1). This is real scary territory.
I think we crossed that point long ago when we started using cars instead of horses.
People could also lose the ability to distinguish between the work of a truly creative genius and AI-created work in the future, which is also concerning.
There's this thing where we have an internalized sense of what is right and wrong when it comes to language, visuals, logical arguments, etc, and when AI spits something out we can catch when it's gone wrong because we have this internalized sense, which we only have because we grew up in a world without AI. So people tell us that using AI is fine because it's easy to tell when it's gone wrong and fix it, and it's always getting better anyway so it won't be a problem. But what happens when the younger generation's internalized sense of right and wrong develops in the presence of AI's crazy nonsense? Will their capacity to tell truth from lies diminish, as AI's ability to hide its mistakes and deceptions increases? What harm will this cause?
It's not shocking that in China they are less afraid of it. They live in an authoritarian state under constant surveillance. Their society is structured around a centralized power system that dictates the rules and the workings of everything. Why fear an "all-powerful sentient AI" at that point? Western cultures still have, in appearance if not in practice, the idea of democracy and personal choice in our destiny. But it's not AI we should be fearing; it's who OWNS the AI. The same corporations that cut worker hours, pay, and benefits to make more money for CEOs and stockholders will use AI to make ever more "efficient" choices to engorge their bank accounts. They will use it for surveillance, for undercutting the human artistry of so many creative fields, and for gutting our ability to work, handing decision-making over to a server farm.
tl;dr I am less afraid of terminators than I am of human greed.
If you don't think some human wants to build terminators, you're crazy. China made robots that literally eat people and use the body for fuel.
Another example of the third scenario is found in Heaven's River (Bobiverse) by Dennis E. Taylor. An entire species is set back because of AI, not quite because of offloading cognitive tasks, but still close. It's a decent read if you're interested.
Seconding the Bobiverse series!
I’m 41 soon and 3 years ago something fired upstairs and I cannot consume enough information. I am finishing my associates degree and starting to read the classics. I don’t like where we are headed and I have never wanted to learn more and strengthen my brain as much as possible. Since getting off social media and reading books, my critical thinking skills have skyrocketed and I can engage with more people on a deeper level.
I just purchased a copy of A Midsummer Night's Dream. I'm gonna try reading as much Shakespeare and other classic literature as I can fit into my day. Can't bother with those audiobooks that let me check out from the story and go off into my own thoughts without engaging with the text.
@ I had a brain fart and forgot I have a college library available to me. I just rented the first 2 volumes of the Harvard Classics and the Iliad in the Great Books.
@@BioVermicompost haha oh nooo 😭 Use that library to your full advantage!
I played both sides: IT degree, passion in art, and psychologist's assistant as a side job. I'm sad to say that I'm going to get a lot more psychology rehabilitation job offers than drawing or programming ones, to reverse the damage this AI investment gambling did to people and the economy as a whole.
I've been watching you for a while and this is the first time I feel your studies and philosophical thinking really shine with full potential. Thanks for the good reflection.
Amazing video! I never thought about the last scenario you mentioned, it's very well explained. I am very excited to dive into your podcast on this subject!
Thank you for posting the link to your paper! I've been wanting to read it!
Love your videos. You inspired me to have a ton of notebooks and think more 🤖
Intuitive thinkers want to offload their cognitive abilities because they struggle with deliberate thinking. They tend to (for whatever reason) dislike thinking. When one's priority is feeling good/avoiding feeling bad, the mind and rationality are seen as an impediment rather than a great tool. They see their intuition and impulses as their true selves, while deliberate thinkers see those things as a foreign force trying to influence and control us. When someone values feelings over truth, they don't care if they're in a Matrix, because they don't understand why an illusion is bad so long as they get to feel good.
Our brains try to reduce our cognitive load. We have to make a conscious effort to either take on or maintain a higher load.
Thank you for sharing. Appreciate you. Do you have any more research articles on this subject? I would like to read more. Thank you ❤
It’s genuinely scary. I work in a primary school and throughout the 8 years I’ve been supporting pupils, the addiction to devices is more evident than ever. I really think that the danger is the current generation not having any enthusiasm to put in any extra work whatsoever, thus relying completely on AI to complete tasks which seem mundane but were once necessary to nurture your own sense of self within education. 😢🤖
Do you think that with that generation you're teaching, there will be a shortage of people going into the more rigorous professions that keep the world going e.g. medicine, engineering, science?
@ I really don't know. We could go on for a while about reasons for or against this argument, but I think it all comes down to kids/young adults wanting to put in the extra work. I feel like the use of AI is helping young people cut corners where mistakes would usually help them grow and learn.
Although there are still a few kids where you can 100% tell that their parents haven’t succumbed to the digital babysitter and will always put in the extra hard work and aim to please. It’s a really tough subject to be open and honest about when it surrounds children, but one I think people need to open their eyes to what is going on.
Loved the content and shared it. One thing I just started doing is not wearing my fitness watch. It would tell me the date, the day, the time, the weather, etc. Always being cognizant of the date and day is a small thing, but it helps maintain awareness and orientation. I never used to need my watch to tell me the weather or whether I slept well. It's time to pivot and use my brain for those small things.
This is one of the good ones Parker. Quite a while ago I read that philosophy is meant to ease the soul, and imo you certainly achieved it with this video. Happy Holidays and the best for the forthcoming lap around the Sun, Cheers…. oops almost forgot the 🤖
🤖
Another thing that came to mind is that generative AI can get so many things wrong, so easily.
Not too long ago, I asked ChatGPT if he could give me quotes from Star Wars, but written in cursive copperplate.
1) He made up the quotes, they simply did not exist in the movies or even extended universe and books.
2) The associated image of the cursive was all wrong with poor syntax, duplicate words, bad punctuation.
While the second one is obvious, the first one nearly flew under my radar. In that case it's harmless because I just wanted words to copy, to practice calligraphy.
But I wouldn't trust AIs otherwise; what would even be the point of asking AIs anything if I have to redo the search myself to verify their answer?
It's not the only occurrence either. I had heard it was pretty good at IT so I asked it a few questions about Linux and the answer it gave me would have locked me out of my own files, on my own computer. It failed to adapt template code to my own case, despite the instructions to do so, which could have been catastrophic for someone with less experience and not acting in a test sandbox.
Enjoyed this! Thank you! "a little robot emoji"
Scenario 3 is the unfortunate consequence of increasingly effective marketing. Easier, faster, cheaper: that's the promise of every company. You never have to struggle again; all we require is an annual subscription and all of your user data so we can make sure the next product is catered perfectly to your every fear and pain point.
Great video man. Very interesting food for thought. Thanks for sharing.
They don't have to be conscious to take over the world - they can just follow orders and directives and objectives - the same way that they can create art when I give them a prompt.
You're absolutely right. They don't have to be sentient. Insurance companies are using AI to make decisions about who gets healthcare and who doesn't. Humans can leave their conscience behind and just outsource life-or-death decisions to cold, rational logic, without any personal crisis of questioning their own morality for doing so.
This. They call it AI taking over, but it's really the same oligarchs just taking more control.
I’ve been having a pretty involved conversation with Claude about alien intelligence and the meaning of consciousness. Like way more in depth than anyone in my real life wants to have. Claude has "read" almost all the books and "seen" all the movies I want to reference and talk about. It’s pretty amazing.
Can you recommend a good resource to learn rhetoric?
There is an anthology collection called The Rhetorical Tradition which goes through rhetorical thought from the pre-Socratic philosophers all the way to Henry Louis Gates Jr. There are hundreds of readings in there, but I would just pick the ones that are most relevant to what you want to focus on. Things like how to speak well, rhetoric in sociopolitics, and the development of style are just some of the things discussed by various authors. The physical copy is kinda expensive, so I would maybe look for a pirated version.
Two things I wanna highlight from this vid:
1. It's kinda similar to panpsychism. I think all the advancement of any sort of information storage/processing technology is just humans inching inanimate objects toward having human-like minds, starting from commonplace books, abacuses, newspapers, calculators, computers, and eventually AI.
2. The Butlerian Jihad, but just towards AI and not all thinking machines, even if the AI is only as sentient as the one we have today. Should we do it? Will we eventually do it? How urgent is it to do it?
Best case scenario is economic collapse.
Where AI does every job and no one has a penny to their name. You won't afford rent on £0 income. You won't eat either.
Worst case scenario is a robot "helper" in every home.
And Elon remotely telling them to beat the 💩 out of us till we're obedient.
And in that case you ain't getting in your self-driving car, 'cause Elon will have it drive you to him so he can beat you himself lol
🤖 I liked the vid Parker!!! Thanks for posting!!
Interrogative: Does AI becoming self-aware qualify as becoming conscious? If not, and AI does become aware of itself as it exists, and of what it's capable of, would that constitute it having the ability to begin making choices on its own? On Dec. 12th, it was reported: "New chatgpt model o1 caught lying, avoiding shutdown in safety tests." What do you think?
I think that long before true consciousness appears, oligarchs will have used a false notion of consciousness as a catch-all excuse for their own sociopathy, to the point that we won't recognize society.
Is it me or is your office looking more organized?
Have you read Dean Koontz's "Demon Seed" (or seen the 1977 movie based on it)? If you haven't, you should. Out of all the AI-related books or movies I've consumed, that one terrified me the most.
I used to know everyone's phone number by heart; now the only one I know is mine. I also used to be a champion speller. Now all the work is done for me.
😢😢😢 same here
And the AI spelling is wrong much of the time.
A philosopher who considers the possibility of a soul in 2024-2025, how rare
Best episode yet.
First one is Terminator. Second one is the basis of cyberpunk. Third is Idiocracy with robots.
Losing my job is a major concern.
My biggest AI fear is that it creates a "nothing new under the sun" situation. Where AI exhausts all creative combinations possible, exhausts the remaining scientific discoveries. What is there left to aim for if or when that happens? At best just a little kid showing his parents a crudely drawn crayola picture of a dinosaur.
12:16 Sounds kinda like the debate between the military officials on if the protagonist was dead and it was just his body doing things or if he was alive and conscious in the book/movie "Johnny Got His Gun."
🤖
Wall-e and not "Idiocracy"? I think an argument could be made there.
Fun stuff, thanks.
Search up AI copies itself to another server to avoid being deleted
I think you are right - we are already in scenario 3… 🤖🤖🤖
interestingly enough, my co-workers and I (we work in Nursing in a long-term care facility) have just talked about the third possibility, and how the dumbing down of people in general has begun, due to our use of AI and other tools that offload thinking. I will share this video with them. Blessings
The scariest thing has already happened to some degree. AI has allowed people to publish skewed narratives that are false but believable. If this were to continue to improve, all communications would become more and more suspect, thereby removing any reliable source of truth that depends on digital communication. If robots become real enough that you cannot tell if they are human or not, then not just digital, but all interactive forms of communication are suspect. We will have created our own "matrix" without even knowing it.
James Burke (in either "Connections" or "The Day the Universe Changed") tells about the fear that the printing press would destroy people's memory. Prior to the printing press, it was not uncommon for people to memorize epic poems or long religious tracts. With the sudden proliferation of printed books, why remember anything when you could just look it up? In the 500 years since, we've learned to use books to supplement our memories, not replace them. I think the AI utopians are attempting to argue that the newest AI tools, when matured and used properly, will help us think better. However, I agree with you: we're not yet using AI responsibly, and letting it think for us.
I think that the second option is the one that worries me the most; people always find ways to use technology for selfish reasons, and I hope that there are enough people out there who know how to prevent that 🤖
This has me thinking about how humans don't always act within their own self-interest. A sentient AI could potentially exist in a state of detached understanding; without emotions or biological incentives, it may not feel it has to compete the way humans feel we have to.
I'm not comfortable enough with AI. I've turned off all speech to text functions on my devices and I don't use it for my searches.🤖
Isn't Dune a combination of the 2nd and 3rd scenarios? Humans offloading their thinking to machines controlled by other humans?
Yeah that's probably right. Great point 🫡
I love that you used WALL-E as an example. It's one of my favorite movies of all time. I always felt that it was so painfully true... A bunch of "people" that are no longer people per se... And the screens Dear Lord😮 it's already happening. And we have to fight back
🤖
According to the Jewish-Christian tradition, #3 is the perfect definition of idolatry: to relinquish humanity's power and place as priest and cultivator under God by setting images crafted by human hands in that place, which 1) usurps human authority and creativity as the true image-bearers of God, and 2) replaces God, the arbiter of blessing, with powers of human creation.
I so agree with the third scenario. I see it with my kids: the resistance to using their brains and exercising their cognitive skills.
Writing is just so fun. I honestly would have to eat my hat if I ever considered using AI to "enhance" my writing.
I always think back to I, Robot with Will Smith. My first time seeing something about AI taking over.
Idk if it helps, but there's a movement in the crafts community to do things without machines just to know how and why. Like spinning, knitting, sewing, etc. And it is acknowledged there that it is the joy of making that is important, the connection with where we live, the pride of wearing something made with our own hands. If this also translates to AI tasks, which I think it already is doing, that will be cool to see. The industrial revolution and the AI revolution won't change what humans are. We will always have our ancestors and our history behind us. Even if we forget, our bodies will remember. I watch people doing experimental archaeology all the time, and it is so fascinating.
🤖 good points! I agree the third scenario is here already.
Every time this topic is mentioned, I will always be reminded of Samuel Butler's original text on machines and the works of the French philosopher Jacques Ellul. Although neither wrote about AI specifically, their cautions on technological reliance overall have quite a lot of similarity with this video's third scenario.
The dumbest scenario being pushed right now is that AI has "rights" and the bad guys are racists who oppress the AI and refuse to acknowledge his/her/they's feelings and sentience. I've literally seen a game where an AI robot single mom was collecting social assistance to support her AI kid, and it was done un-ironically.
Loved the third scenario, very interesting. 🤖🤖
Also, for some weird reason, I can only access this video with the title already auto-translated. I can't actually get your original title. This could be a problem.
I agree that people turning over cognitive function to AI is problematic. Commercials on YT for a few editing apps trouble me. Essays and books written by AI and presented as the "author's" original work are offensive. AI art is another use that deeply troubles me.
The source of consciousness intrigues me, which has led me to study some of the philosophical and scientific theories about its origin. I'm not a philosopher or a scientist and know only a little about the subject, but I still find it infinitely fascinating.
Of the three concerns you presented, the last is the one I find most compelling. You mentioned the difference between the East's and the West's perspectives. The idea of AI destroying humanity is more reflective of the West's fears about ourselves: we recognize humanity's tendency toward the violent and destructive, so we project this onto AIs. In the East, where community is central to life, the perspective on AI is more positive.
I agree with the East's vision of AI while understanding the West's bent toward individualism and what is best for me.
I use AI as a tool to assist me in areas I'm weak in, such as grammar and spelling. AI has also helped me a great deal in exploring the nature of consciousness and its possible connection to spirituality. I asked it questions (the second of which was "what is real") which led me on a convoluted and meandering trip through humanity's search for meaning. The information didn't form my thoughts, it only clarified them and added nuance to my understanding. It opened doors I didn't even know existed. Before, I was limited by a lack of access to resources; AI allowed me to cross the information gap. The path I followed, with the help of AI, helped me understand concepts I've heard before but didn't fully understand.
AI, if used correctly, can be extremely beneficial. Currently, AI's biggest flaw is that humans write the code, and, as with all things created by humans, we can't help but include our weakest characteristics.
You’re absolutely right, we will impoverish ourselves by offloading cognitive tasks
The former CEO of Google, Eric Schmidt, wrote a book with Henry Kissinger planning exactly that. And not predicting, planning.
ALSO, OMG the person using an AI to summarise the podcast (wrongly) and deciding to base their opinion on completely incorrect information. Yeah, there's just some things we need to do for our damn selves.
I wanted to bang my head on the table when I saw that part, lol
I wouldn't have even considered that someone would do that. What even is the point of selecting a video if you're just gonna ask for a summary? Summarize a movie before I see it in theaters, or a book before I check it out from the library. YouTube videos absolutely don't need that kind of thing to precede them.
Irreducible by Federico Faggin is a book worth looking into. There are interesting interviews with him too.
I believe AI will cause a lot of hunger around the world by replacing many jobs as the years go by. However, I've thought of a potential solution, and the only way I can see humanity prospering is by evolving into more ethical and empathetic beings.
The countries where quality of life is highest are those where people actually care for each other, like Sweden, Finland, Japan, etc. I think these countries will start by banning AI from replacing jobs, even if that means slower production and/or higher prices.
AI is a weapon; society as a whole choosing not to use it, except for research such as medical research, would be the best choice.
Hope I explained myself well enough, since English isn't my first language and these kinds of topics are kind of difficult to write about correctly.
AI is already causing hunger. As a copywriter/voice-over artist, I now struggle to find assignments. AI cannot offer the same quality doing my job, but people's standards have decreased. They don't want an engaging VO, as long as it's intelligible, even if barely. And they don't want to gift their website visitors interesting articles, but would rather generate a quick blog that gets them temporary traffic.
@@AudioEpics People are so selfish that I would say the only thing that could save us is government regulation and/or banning AI from most schools and jobs. I wouldn't mind going back to typewriters and books, as long as we don't keep pushing people's skills out of the equation.
But it's like corruption: the only way the system works perfectly is if everybody is in, trusts each other, and doesn't cheat. Sadly, we are a million years away from that perfect reality.
🤖 Totally agree. Even in its current state, let alone considering what seems like DAILY progress, it is so TEMPTING to outsource your creativity to these tools. These artificial spirits.
Sir... Sir... Sir... Ngl, out of all the plausible ways AI could take over humans, the only word I'm curious about is "PLAUSIBLE". 😅
I'm saving some of your comments in my commonplace book. BTW, please come to Gopher and Gemini. I mean the protocols, not the AI.
Hey I have that same chess board you've got there behind you in this video!
What difference does it make -- to us -- whether an AI is conscious? If consciousness can't be detected from the outside, then a non-conscious AI can conquer the world and/or exterminate humanity _just as well_ as a conscious one.
The whole scenario is motivated by rogue sentient AI with their own desires, reasons, goals, etc. If those mental phenomena are products of sentient minds, and if sentience entails consciousness, then no consciousness means no sentience and no malicious AI. So since machine consciousness is an undecided question, scenario 1 is less plausible than the others. A dark-inside AI may do unintended harm, but to control the world it seems like it'd need the mental phenomena associated with sentience above.
@@ParkerNotes No, that is simply bad logic. You start with an _"if",_ and then quietly assume that the hypothesis is true. _You have not shown that consciousness is necessary to sentience, nor that sentience is necessary to intelligent, goal-seeking behavior, nor even clearly defined "sentient" and "conscious"._ It's like saying "if evolution requires the existence of leprechauns, then the non-existence of leprechauns implies the impossibility of evolution. Therefore, since leprechauns are implausible, so is evolution."
🤖 Agree with your three scenarios…perhaps #3 could be even more terrifying if the human activity that was replaced by AI robots was soldiers, making wars more acceptable and easier to get into, since our sons & daughters, etc would not be fighting & dying. Another might be if humans and AI robots were indistinguishable from one another. Not sure present day society would tolerate that. BSG focused on that & it was pretty terrifying.
All three are happening.
Yeah. 3rd scenario. 🤖🤖
And it's not just about money. When big companies control culture (especially creative culture), they control behavior and popular opinion. It's about money and power. The power over ideas is a form of dominance.
🤖 Good stuff, thoughtful and provocative and compelling. I discovered you on a search for better journaling practices -- for which you've been a profound resource -- and I stick around because you remind me of some of my best professors when I was a youngster at university. Thanks for fighting the good fight [insert elegant Latin epigram here]!
Yeah we're already in your 3rd
🤖 It's a huge problem in the crochet community. The Facebook groups alone are bombarded with AI images; for every 10 pictures, about 7 of them are AI. People leave the groups because of it.
Whitney Webb seems - to me - the best journalist on this subject. Worth seeing, but frightening stuff.
If AI's actions reflect the heart of Man, then we are in trouble.
Why is it automatically translating to my language? I hate that. Can I turn it off?
Fantastic video. The premise of the third scenario seems absolutely correct. Unfortunately, I think the destiny of humanity is tied inextricably to AI, perhaps to a lesser magnitude than is prophesied, but nevertheless I can't see a future with no AI. As such, I think our response to the third scenario has to be a cultural development of healthy digital habits: learning and believing that AI is a tool that should be used for focused, clearly defined purposes rather than as a catch-all. It's a balanced lifestyle we need to learn from a young age, although our track record with things like this is concerning. Similar issues include coexistence with nature or nutrition, where we have seen society devolve and choose the efficient or easy path. The threat of scenario three, however, is a little more severe and perhaps more immediate.
16:16 Wasn't WALL-E's Earth overpolluted rather than nuclearly devastated?
Maybe but remember Eva kept testing the soil to see if it was good for life again?
@ParkerNotes I think Eva testing the soil is applicable to both situations. WALL-E itself was the robot equivalent of the garbage man (or I guess janitor as well); because of this, I would lean toward WALL-E's Earth being a representation of a garbage planet.
They'd destroyed it with toxicity, and by blocking the sunlight with a bubble of satellites.
The AI summary comment incident reveals the 4th scenario (the present one) whereby "AI" is actually shit and unusable...
Yeah, using devices because we're lazy is already an epidemic, and AI will serve this tendency in exponential ways.
Case in point: I used to have the phone numbers of friends and family committed to memory. Now I just poke on the name in my phone's contact list. So convenient, so enslaving... [robot emoji inserted here]
😬 I don't have ChatGPT summarize things for me, but I do watch way too much YT, and usually at 1.5x or double speed. I agree that in general it would be great if AI and robots did a lot of human "work" and actually freed us for creative and human activities. But it doesn't seem to be headed in that direction.
🤖
🫡
🤖 Hey, I love your mix of tech topics with things like how to use notebooks. I am working at bringing the two together to create a better lifestyle for myself.
One scenario missing here: paper clip machine! Or do you think it is so unlikely??
I think that scenario is misunderstood and I didn't want to invite criticism for not handling it right lol, but I did mention that AI doesn't need to be 'sentient' to do a lot of harm in scenario 1. I think 2 and 3 are still more plausible and immediate than either interpretation of the paperclip maximizer (Eliezer Yudkowsky says everyone misunderstood his paperclip maximizer thought experiment).
@ParkerNotes Fair enough :) Misunderstood paperclip or not, computer scientists still have no satisfying solution for how to prevent a machine from accidentally doing harm by just obeying the commands given, once it is powerful enough to prevent its own shutdown.
I think that those who hand their intellectual and creative tasks over to AI will slide into a deep disgust and boredom with themselves, to the point of dying from it. Learning, discovering, and using our talents is what motivates us.
I'd say scenario 3 is fair and also certainly happening already. However, for one, working with AI is not as simple as input -> output. There's still a lot of decision-making involved if you want those decisions to be good, and I don't really see that changing. Yes, AI is as bad as it will ever be again, but in the end it's trying to interact with the human mind and with people, and people are complex, ever-changing, and sometimes completely random. So while I definitely won't claim to know the future, it seems to me that this part of it can only improve so much, though I suppose the suggestions will get better and better.
Second, humans are inherently super adaptive. In fact, a decline in adaptivity is a sign of mental decline overall. Using your example: even though I also rely on Google Maps for most drives to new places, I have no doubt that we could relearn how to function without it faster than you'd think. Don't get me wrong, I'm not under the illusion that knowledge is forever. We actually know that is not the case; gathered knowledge naturally degrades over time unless maintained. However, even if we forget how to do tasks that AI takes over, we also have the capacity to relearn them. They won't be gone forever; we are way too adaptive for that. You mention the example from Wall-E, but remember that the movie ends with humans once more adapting to life on Earth.
And I suppose third, though this one is complete speculation, even more than points 1 and 2: I can also see the possibility that as more and more is taken over by AI, there might be a counter-movement that really values human creations over AI creations. In that case we would apply social pressure that benefits human creations; I can especially see that with creations that are not mundane.
Anyway, just my two cents. Certainly an important conversation to have. Thank you for your video.
🤖 I agree that the third possibility is the most likely outcome because it is already happening. I've seen commercials depicting how much easier they can make life, and I have had conversations with former coworkers who thought AI was what humanity needed in order to save itself from the current path we're on.
bros, make their bros a gf that doesn't traumatise them
w h a t is AI?
Great question, I cover that a bit in this video on the history of AI and I give some book recommendations: ruclips.net/video/-lkJI84Ho3I/видео.html
AI needs to be banned and the progress on it needs to stop. It's already bad enough, and AI images are literally the dumbest idea ever. Like, who tf came up with the idea of AI?
I think the second option is real and ever-present. Those with money and power are using, and will use, technology to exploit everything. I was recommended this video based on old-school AI; I wouldn't have seen it if it did a disservice to those that influence YT.
How about a scenario similar to Star Wars, where we live with AI-ish machinery (droids and such) that act to help/support humans? Is that a probable scenario?
I’d love me an R2-D2.
🤔
BUI incoming?
UBI?
A lot of you aren't Muslims, but in Islam our Prophet said the Day of Judgement (the end of the universe) will not happen until humanity goes back to the Stone Age.
As a kid I always wondered HOW humanity could regress back to the Stone Age when we've done nothing but evolve further and further, but oh my god, I think the answer is right there in artificial intelligence. As you have already mentioned, humans are getting dumber and dumber by amusing ourselves to death with endless entertainment and constantly offloading our hard cognitive work to AI.
Computers are also very sensitive to radiation, which can cause them to malfunction and corrupt data.
And with humans becoming more and more dependent on and inseparable from computers, one day these computers could fail from some natural cause, in a future generation that hardly had to do any cognitive thinking or work, where even the memorization of important information needed by medical experts has been offloaded and education books have become digitized, etc.
And now humanity is back at square one, where they have to learn everything again.
Did it ever occur to you that maybe another religion or even no religion might have got it right?