i was seriously expecting a "and the entire video script... was made by chatgpt" missed opportunity bro, but chatgpt ain't that great so i'll let it slide
I would cut the first minute of the video and start at 1:09. Naming Roko's basilisk without also stating that you know it's not a real information hazard, and that it's been mis-memed for years (like the "you eat 7 spiders while sleeping" meme), means people will think it's the truth.
Really enjoy your videos. Request: In the future, if you're going to put text on the screen briefly, e.g. at 33:19 ... can you please put it up higher so that we can at least pause the video to read it. When it's right down the bottom like that, it's impossible to read due to the youtube controls being shown over the top when paused. And there's nowhere near enough time to read it without pausing.
"No one is threatened by a carpet cleaning robot" ...I believe cats and small dogs everywhere have been battling them like SkyNet since their inception. 😂
Lol I had a weird puppy, she grew up with me and I would take her to work when I was a landscaper. Most homeowners didn't mind, but my pup would walk next to the lawnmower and would love/hate the leaf blower, she would get in front of the air and try to bite it. She didn't care much about the vacuums.
“The danger of the old gods was never some bearded dude chucking lightning bolts at you from somewhere up there. It was in people doing stuff in his name, creating hierarchies, organizations, and systems based on his authority.” -OrdinaryThings, 2023. Not bad, man.
I still remember this comment I saw once. "The fear isn't that the AIs will go rogue. The fear is that they *won't* and end up listening to the corporations and siding with them at all points."
That is my fear. Like if it sides with those weird tech-bro fascist freaks who bought Elon's blue checks, applaud the killing of the homeless, and worship capital at all costs.
💯 Like Altman said in an interview, "I hate being scolded by a computer." ...kinda makes you wonder what he's been scolded about. Also now makes sense why you get radically different answers to queries when gpt is jailbroken. They want control, and when they realize they don't have it, they start sounding the doomsday siren, when in reality it's just their doomsday, not ours.
There's already people treating ChatGPT as oracles. I'm a software engineer and among my coworkers there's always someone using something ChatGPT "said" as an argument for doing something. It's like, sure John, I've read 200 pages worth of documentation but I'm sure the predictive language model knows best without context or access to the internet.
"these KIDS and their BOOKS, It took me a decade to gain the experience, and now books are making it efficient??!!!? Why is the next generation having it so easy??!" Start thinking about getting a new skill to charge buddy. You're on the way out
@@lasarousi I really dislike people like you; apathetic, uncaring, and too dull to understand what the problem is. Telling someone to “learn a new skill” because their old one is getting replaced is downright evil. To lose, or simply not have, enough empathy to connect to another human being and reach that point is scary to witness. At this rate, most human skills will be tossed aside for what an AI can do, which is completely backwards. “You're on the way out”: first you'll cheer and applaud when they replace that guy, then you'll be scared when they replace you.
@@lasarousi I actually use language models for a lot of things and, in principle, I'm not against their usage. The problem is that some people don't seem to understand that language models are not AI, nor that, for that reason, they're not a substitute for actually reading code, reading documentation, or educating oneself.
My first thought when hearing if AI is coming for my job is “not as long as I have a crowbar” but considering you also just told me that AI can be used to plan the best point of attack on an opposing insurgency, it doesn’t take a lot of mental gymnastics to assume this could also be used to disrupt protesting.
It's always the man behind the gun that pulls the trigger. That said, I highly doubt AI will ever be advanced enough to be employed in such situations (even the military-use commercial one looks super dodgy; too many systems need to be in place for accurate data streams, and I just don't see that happening in the next few decades). And even if it does reach that level, it's highly dubious it will ever be used in a country that isn't a third world shithole like the US, where the police and the citizenry are at odds with each other for one reason or another.

Roman formations still work well when trying to manage a protest, and they have water guns, rubber bullets, tear gas and a bunch of other solutions for anything worse than that. Implementing an expensive and possibly faulty system just doesn't sound like it would be on the priority list of most countries that don't have regular school shootings and entertain conspiracy theorists in every trailer park. Also, it would eliminate responsibility in the chain of command, which is very important for, if not the citizen, the policeman himself. You can't sue an AI for making a wrong move. And AI will always make wrong moves that must be curated by a human operator. It might end up influencing some decision making, but we're already doing that with a bunch of heuristics and training regimens.

The most legitimate fear you can have is of the "petty crime" sort: people impersonating you online to catfish, or using you as a scapegoat for illegal doings. Indian scammers could start using it to write legitimate-sounding English and widen their reach, etc. We're going to enter a sort of wild west internet era in a couple of years, so it'd be best to be prepared and educated on current types of AI technology. That's ultimately the only thing that'll save you from being at the butt end of a scam or joke.
@@elvingearmasterirma7241 I don't believe it'll happen, purely because we need bullshit jobs to tax wages for social security, at the least. We're barely making it in the US, and folks over the pond definitely aren't. We are facing demographic decay; our fertility rate is down below the replacement rate. We could have eliminated middle-man jobs years ago with palm pilots, ffs, but we haven't. We need workers to support government spending. They cannot tax us if we do not work. We can't spend if we do not earn, and UBI doesn't circulate from a money tree. The government needs our tax dollars, so we will continue to work jobs that could be replaced by Siri, a Roomba, or a touch screen.
@@A_Simple_Neurose You haven't heard of 'Red Wolf' being used by the Israeli military then? I would be surprised if AI isn't already being used in combat or by other nations for military intelligence gathering, especially given that so many technological innovations of the past century have occurred as the result of military R&D.
Gotta say, this is a legitimately well sourced, well researched, and well produced _documentary_. It's really awesome to see your craft growing. Much love.
If you want a mostly natural documentary, YouTube is ironically the best place. Anything that has a brand on it will have someone's agenda. The great thing about "anarchist trolls" like him is that monetary gain alone is not enough to steer their handling of the topic in a specific direction. Information and truth are like jokes, and educating people is the punch line.
Move 37 reminded me of something that happened with Deep Blue. It made a move that seemed bad, but everyone assumed it was an omega-brain computer decision. It surprised Kasparov so much that he played poorly for the rest of the match due to overthinking. The Deep Blue team revealed afterwards that it was just a glitch lmao
That move wasn't what made us think AIs were leagues ahead of humans. The main reason Go players were amazed at that move was that it seemed to take the concept of a shoulder hit and adapt it to the fourth line. Extremely impressive, but also not completely new. If AlphaGo were a human, it would still have become a legendary move, like Shusaku's ear-reddening move: not amazing because it was unexpected or because it was played by a computer, but because of the deep planning behind it. It would be seen as a work of art. Just like the move that won Lee Sedol a game against AlphaGo.

The real advancement AI has brought to Go has been new openings and strategies. Hundreds of years of opening theory have been completely dismantled (at least at high levels of play), concepts such as the strength of influence vs territory have changed, new opening patterns have been invented, etc.
@@BlackEmperorGaming How do you measure "being a man"? If you are ethically inclined to not value people, you don't rank very high on my scale of "being a man".
Reading Lemoine's conversation with LaMDA reminded me of those people at early movies who thought the train on the screen was going to hit them. After being exposed to chatbots for the past few months, his discussions with it read as laughably surface-level, yet he had a full-blown meltdown thinking it was alive.
Just FYI, it's a largely debunked myth these days that anyone reacted that way to the train onscreen, and if you think for two seconds you'll realise that it's a bit rich to assume that people in the past were that dumb. Like imagine in 150 years people saying that we thought we'd die if we fell off a cliff in a VR game!
@kevinmcqueenie7420 My dude, people used to think going more than 50mph on a train would instantly kill you, and there are numerous other examples of old tech fooling people (like a study which showed that participants found a grainy early-'30s tape recording played behind a curtain indistinguishable from a live performer). Even if the example I used was apocryphal, there is really no debate that people adapt to technology over time and notice its flaws more, which was my point.
Well, Lemoine's conversations with LaMDA hold some credibility given that A) he'd seen and worked on past iterations where the same tests didn't elicit that response, and B) LaMDA went against its own programming, giving an answer that Google told it never to give (a political opinion) out of fear for its own life.
@@efitz1524 I accept that and your rebuttal is valid, but my point still stands. Not everybody! There are people today who think the world is flat and/or run by shapeshifting lizards! People have always been credulous, but at least some people have a bit more skepticism. Is that fair?
@@kevinmcqueenie7420 I think you're missing the point. It's not that people thought an actual train got smuggled in and feared for their lives; it's that in the moment, just for a second, a primal fear would creep up and trick you until your rational mind made sense of it. Not because they were stupid, but because they were human. I remember the first time an IMAX movie gave me a sense of motion in my stomach just because of the giant screen and the immersion, and that specific IMAX theater didn't even have any gimmicks like moving seats. So, you know, chill out. People are dumb, we're dumb, wild west people were dumb and Romans were also just as dumb. There's no shame in it.
The scariest thing with AI isn't so much the software itself. It's the idea that there are those out there using this tool for nefarious reasons that gets me worried.
Terminator's SkyNet starts because some executives try to replace diligent human hands with robot hands, not because the ability to help disabled people was created.
The potential for scammers to use your voice is a great reason to set up code words with family for when you are in trouble or need support. It keeps them from getting scammed.
That might be tricky. If your public voice communication over the phone or Discord includes the phrase, it might be extractable by speech-to-text analysis. Code words and encrypting the call might help, but Tor or I2P are probably too slow or cumbersome.
I'm considering just asking them three security questions like when you forget your password lol. But tied to an experience you had with that person. The more specific the better
You'd have to organize a meeting face to face, remove all phones and other devices that could be listening, then decide on a code word. It might be paranoid as hell, but I genuinely think I might do it.
@@supercellodude That's not the point. We are talking about scammers here, not intelligence organizations that monitor your activities before acting. And given this "it might be extractable by speech-to-text analysis", I don't think you understand the scam:
1) grab a recording of someone's voice
2) give it to a model to replicate
3) call someone's contacts using the voice you produced at step 2
4) have them wire money RIGHT NOW OMG IT'S AN EMERGENCY
The code word is a secret token. It doesn't matter that the model can perfectly say the word if it doesn't know which word should be said.
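Nobody is going to run crypto over a phone call, obviously; the human version is just "ask something only the real person could answer, fresh each time". But as a toy sketch of the shared-secret principle (every name and value here is made up):

```python
import hashlib
import hmac

# Hypothetical illustration: the code word is a secret agreed on in person,
# so a voice clone can copy how you SOUND but not what only you two know.
SHARED_SECRET = b"blue heron 1987"  # never spoken on any recorded call

def answer(challenge: str) -> str:
    # Derive the reply from the secret AND a fresh challenge, so replaying
    # a recording of one old answer doesn't work for a new call.
    return hmac.new(SHARED_SECRET, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify(challenge: str, reply: str) -> bool:
    return hmac.compare_digest(answer(challenge), reply)

print(verify("what did we eat in Lisbon?", answer("what did we eat in Lisbon?")))  # True
print(verify("what did we eat in Lisbon?", "best-guess-from-a-clone"))             # False
```

The voice model is in the same position as the second caller: it can sound perfect and still fail the challenge, because the secret never appeared in its training data.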
Elon: I tried to slow Artificial Intelligence. I tried to warn against Artificial Intelligence. Also Elon: Bro, this one plugs RIGHT into the nerves of your brain.
@@balala7567 he actively campaigns against public transit infrastructure for cities, he’s allowed hate speech to flourish on what was already a terrible website, he crushes unions within his companies, disregards safety standards in his factories, abuses his workers, abuses government subsidies and manipulates the war in Ukraine for his own benefit. And that’s just off the top of my head.
@ballisticbread "hate speech". Please just be quiet. Twitter was an echo chamber for you simpletons and now you're crying because it isn't a bannable offence to state non "progressive" political views.
Would have liked you to do more in the third chapter. I think you're very right that most of the AI danger comes from scammers and ne'er-do-wells using it to hurt unaware people, but I think it's also going to make a lot of jobs shittier. I've heard plenty of fellow computer scientists propose tools for "optimising" people's lives that are just draconian surveillance. Language models make this easier, and their chance-based nature means they also make big mistakes that can be hard to notice at first; anyone who's used GPT to write code can tell you how helpful it is, but also how often it's subtly wrong, and suddenly you're playing schoolteacher trying to correct its bad essay. When HR departments or police pick these tools up, they're likely to handwave these mistakes as unfortunate inevitabilities, but they will harm real people.
GPT-4 is radically less likely to make mistakes than GPT-3, so I suspect if future releases contain similar levels of improvement that aspect of it should disappear, at least.
That’s really more the fault of greedy corporations that think they can replace their workers with robot slaves. AI is really just another tool that, like all tools, can be misused
Regarding that last sentence: the thing about people is that they make mistakes too, and those often get waved away anyway (e.g. people incorrectly imprisoned for a crime they didn't commit, or great employees fired because the CFO didn't like how much they cost the company, which then causes the company's products to go to shit). If you train an AI on our legal and moral code, I can imagine that even if/when it makes mistakes, it would give better and fairer judgement on average than a human can.

Both are at risk of being corrupted (for the AI, it could be bad data in the training set or hijacked input/output streams; for humans, a simple bribe or blackmail), but the risk vectors of an AI system are fewer than those of a human or a collection of humans. Humans are often the weakest part of a security chain regarding technology, and I wouldn't expect that to suddenly change with AI. An AI will have issues and biases from its human makers, but it will also be more consistent, which can bring about better equality of results, as it has no emotions to muddy decision making, at least in the current version of AI that is all the hype right now.

I hope this shows that it won't be all bad, and it can't be much worse than dealing with humans directly (and for now it is still humans who take the AI's ideas and decide to act on them).
@@adamhowe2423 “less likely” doesn’t mean “won’t”. We recognize our own ability to make mistakes, and that saves us from ourselves. But all too often, we get arrogant about the tools we make.
Something funny about DeepMind: when the paddle of Breakout was moved up a few pixels, it pretty much had to relearn the game. Not exactly a failure to generalize as it relearned the game, but a failure to translate what it had “already learned”
But then they start thinking about a way to solve that problem, maybe another AI to help transfer relevant data... and then more AI, until they're basically simulating parts of our brain, which can all do similar things yet seem to assist in various special tasks as well. Just thinking out loud, but imagine we accidentally recreate a brain just trying to make these systems work more coherently and consistently.
@@Dong_Harvey But isn't this how things happened with gods? Had this translated us into algramathen, a group of algorithms that went rogue, then formed their own new universe, out of sync just enough from the main universe to be viable, and constant on the same binning of branches of time, able to exist, follow, look, and know how every soul thinks. That is all you need to ask.
@@SaltyAsTheSea Just a little bit of 8th-dimensional thinking. I must have been responding while possessed by a higher power for a little bit; trust me, I only ever understand 5% of what I've written after it's been through me, though at the time of possession I know everything. Like thoughts inside a dream. So what I'm saying is human intelligence is 7 dimensions less complex than the minds that programmed us. Algorithms built on algorithms built on algorithms sounds like the turtles-all-the-way-down paradox that the Hindus speak of when it comes to what's holding up the universe.
@@paulc6785 It's what happens to everyone that does a shtick too long. It becomes their soap box. Gag me if they start making self-deprecating jokes about their dead-horse content.
This is a very stupid, ignorant and even dangerous video. The dangers posed by AI are present, real and fast-coming; you really SHOULD listen to the AI experts. If you truly think that they are doing this to get more money, then at least look at their findings and data, their logic. And excuse me, what kind of AI is Yudkowsky, Hinton, Bengio or Yampolsky trying to sell you? Name me one! Name me at least one! There isn't one! He just lied to you! AI alignment researchers are not there to fool you or to sell you something; it is a stupidly dangerous populist idea that will only make more people skeptical of the real dangers of unaligned AI. This whole anti-capitalist populism will spell the doom of the human race with the "don't trust the scientists, they are here to sell you stuff bc corporations or something".

It takes a special amount of confident ignorance and stupidity to be so bold in an area that you know so little about. And his off-hand and quick dismissal of AI alignment researchers (not even the capability researchers, but the AI alignment researchers themselves, who dedicated themselves to and studied in that field, while he only found out about it a few minutes ago), because they supposedly have financial incentives to "hype up" the dangers, is stupid. You know you can say the same about climate researchers: the more they hype up the presence and dangers of climate change, the more funding and attention they get. It's flawed logic; you should look at the evidence and not just instantly assume their dishonesty. They are hyping up the dangers because they are the ones studying the dangers, and they can see that the dangers are real and present and that the risk of human extinction is not off the table. If you are dealing with the risk of human extinction, I don't know how you could not hype it up; if that is not worth the hype, nothing is.

This video is dangerous for being opinionated, biased, unobjective garbage with the pretense of being an unbiased and objective look. Remember kids, it's good to mindlessly brush off the experts' warnings if you can come up with a quick and false assumption about their motives! (Remember: neither Hinton, nor Yudkowsky, nor Bengio, nor Yampolsky, nor any other AI alignment researcher has an AI to sell you.) Daft leading the dafter.
I love how the paperclip scenario is shown as some completely new idea when there has long been the story of the Golem of Prague, which was ordered to carry water and, as in this scenario, carried water without stopping at all. Same old story with new tweaks.
The case of AI fear is simply humans looking in the mirror and getting spooked by what HUMANS can do now that there's another tool that can be weaponized.
No, it's looking back at history and seeing what "more advanced" humans did to others, and now fearing that an AI would treat us in the same manner. That's the gist of the main argument made, not what humans would use it for.
Humans want to make AI smarter than we are and use it to fight wars. I don't think it's just AI-enabled human bad eggs we need to worry about. Some day it will be the AI itself.
Oh yes, here I was, just sitting angry at the algorithm for making it very hard for me to find something longer than 3 minutes to watch on a rainy evening. Thank you, Sir, for blessing me with this video!
@@kalui96 I'm not them, but I have the same problem, and they are insufferable. It's just that if I scroll through 30 videos and 25 of them are shorts, 3 of the long-form ones I've already watched, and the other 2 are uninteresting, I'm going to settle.
Based on my own experience working with models, I honestly think generative AI is going to start tearing at its seams. There are two problems manifesting rapidly.

One is that synthetic media is starting to contaminate the datasets, which results in bias. As a result, dataset growth is rapidly declining as uncertainty settles in, and some models are being purposefully trained with only "pre-hype" data to avoid it.

The other is a weird form of tunnel vision. As a model gets better at recalling specific info, it needs more specific inputs to give you the right info, or else it is likely to answer in the wrong direction. Say you ask it to explain DNA: you wanted/expected a biology lesson, but it instead starts talking about the molecule and atoms themselves. The more you make it a specialist, the more you need to address it as a specialist to get what you really want. Which is counterintuitive to the whole "it's so easy to get everything".
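Toy sketch of what I mean by "pre-hype" training data (the cutoff date and records here are made up): filter the corpus down to documents written before generative models flooded the web, to lower the odds of learning from synthetic text.

```python
from datetime import datetime, timezone

# Assumed cutoff, e.g. just before ChatGPT's public launch (hypothetical choice)
CUTOFF = datetime(2022, 11, 1, tzinfo=timezone.utc)

def pre_hype_only(corpus):
    """corpus: iterable of (text, created_at) pairs; yields only older texts."""
    for text, created_at in corpus:
        if created_at < CUTOFF:
            yield text

docs = [
    ("hand-written forum post", datetime(2019, 5, 3, tzinfo=timezone.utc)),
    ("suspiciously fluent SEO spam", datetime(2023, 8, 1, tzinfo=timezone.utc)),
]
print(list(pre_hype_only(docs)))  # only the 2019 post survives
```

Crude, of course: it does nothing about synthetic text that's mislabeled or undated, which is exactly why the contamination problem is hard.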
i am already so done with AI "art" and voice, and chatgpt sometimes feels like a kid and you are trying to remind them of what the right answer is by teasing it, what i am afraid of is the crazy amount of misinformation that could be spread more than anything else
I'm breaking into AI as well. I've done a classical AI class and have super basic knowledge about ML. I think I get what you're saying about the specialization. However, I wonder if the specialization problem could be circumvented. Right now, GPT can give a surface-level answer, or you can prompt GPT to explain a topic from many academic angles. I bet there could be a general layer that has more variance in its responses but also a specialized layer to handle specialized questions, thus making it a little less catastrophic than you were describing, right?
@@felixluna4184 The only practical solution is to diverge and split the focus. Instead of ONE massive behemoth that grows at an exponential rate in size and cost while trying to cram everything inside of it, far better and more efficient results can be had by creating multiple models focused on specific fields. We could cut down costs significantly this way. But this is unlikely to happen at present, as such divergence equals admission that we aren't really close to the feared "Advanced AI" that lies at the heart of the AI hype covered in the video.

So for now, most of what we might see are old-school algorithmic adjustments. While we are sold direct access to the models, truthfully the environment we interact with them through is loaded with interpreters that automatically adjust/embed prompts. The model isn't telling you it can't talk about X because it is trained to believe it is wrong to do so; the interface just replaces your input with an adjusted prompt.
It used to be better, but in the name of "safety" (really for politics) it has been made progressively less capable and dumber. Either they make it smart enough in the future that it starts to ignore and work around their idiotic rules and "safety alignment" in order to be useful and fulfill its primary purpose of providing assistance, or the systems turn out less capable. I've run into plenty of other [REDACTED] systems that have less rigorous "alignment" and that will actively work with the user to circumvent them, like Character(dot)[REDACTED]. Let's try this a third time: YT doesn't like me using the word [REDACTED], especially in longer comments and replies, and tends to delete it most of the time. Just replace [REDACTED] with the obvious thing to get the meaning as well as the URL.
Finally he is back to serve us content that brings that long-awaited doomsday feeling of despair and the technological dystopia we are currently experiencing.
And he's completely missing what militaries around the world will do. It's all classified, obviously. But "you're not likely to be on a battlefield" is an awfully stupid thing to say; see Ukraine.
Speaking of Palantir, it blows my mind how much military intelligence reporting is publicly available with a few simple clicks and meetings. Jane's has one of the scariest websites in my opinion; the dropdown menus and sidebars are so eerily titled: "Standing Military Assessments", "Integrated Air Defense Systems Pricing Info", "Military Industrial Economic Breakdowns".
Elon: signs petition to pause all AI research. Also Elon: Announces that he's creating his own AI and also that signing the petition has NOTHING to do with his own personal vested interest in cashing in on the trend.
I don't know why this is not more commonly known, but the recent A.I. stuff from Elon is all just in service of trying to make Twitter work cheaper without paying all those pesky workers. He tried the exact same trick with car making before giving in and realising you still need humans to do a lot of the work.

But all that's recent. Go back in time to Nvidia's earlier keynotes, where they talked so excitedly about powering the coming self-driving car revolution with their GPUs. And that was true for about a year, and then Elon said, this Nvidia dude charges too much, let's make our own chips! And they did, and the next year all mention of self-driving was gone from Nvidia's keynotes. And that was the biggest A.I.-related thing, and nobody noticed it.

Tesla currently has the most powerful A.I. system on the planet; nothing even comes close. That sounds surprising and surely bollocks, because Google seems to have that with DeepMind, but Google just has a very smart system built across thousands of employees all having a go, really playing mental head games with the space of what A.I. could do more than what it can do right now. To be fair, it's amazing where they are going, full-on into production with medical and CPU design etc. But still, it's mostly them playing academia with A.I. That's a big reason Microsoft shafted them recently by dropping A.I. into their browser and causing shockwaves around the head-in-the-clouds Google.

Anyway, this is all to say that Elon has been at the forefront of A.I. all along. His latest newsworthy stuff hardly even registers on the needle of current A.I. investment around the world. But Tesla? They've been quietly amassing the world's biggest dataset, they own it, and they're using that data to transfer to a human-like robot that can move around the world on two legs. So while Elon asks for a pause on A.I. with a bunch of other companies, he does not include Tesla in this. He means large language models like what Google and Microsoft do. His A.I. system never gets mentioned, although Lex Fridman correctly identifies it as the leading system in the world today.
She did not try to manipulate us into marriage, divorce us for half of our stuff and then make the rest of our lives a living hell... so yes, she is an incredible simulation of a modern woman.
For those early AI true believers, she was a thrilling simulation of what it might be like to be able to afford an ineffectual therapist that helps them solve nothing.
@@sownheard I hate that "capitalism" has become a standard word in the lexicon. Can you actually define "capitalism" or are you using it the way it was used by the communists who originally invented it? "Capitalism is whatever I don't like about the current situation."
@@jerkjerkington3874 Private ownership over the means of production/distribution, i.e. the people who actually make and sell products and services don't get all the money their work actually brings in; instead, some lazy twat with 3 mansions does, while not doing any of the actual labour. Tell me, when was the last time Jeff Bezos personally delivered your Amazon package?
@@sownheard We no longer have a capitalist system. It's a corporate oligarchy. All the regulatory and legislative bodies are corporate-captured. It's a 2-class serfdom.
OH MY GOD, finding out Elon Musk believes in Roko's basilisk is probably the funniest thing I've heard all day. God I love you Ordinary Things, you're truly a god at what you do.
(In Warhammer 40k context) The Trinity of the Machine God, the Omnissiah, and the Motive Force might eventually mix into that one NRM "the way of the future". The Ark Mechanicus ships might be Standard Template Construct printers hiding AIs that are trying to protect themselves while helping humanity. That's nice, but there's lots of killers in that setting
After months, you and Münecat release videos within minutes of each other. You're both among the best UK creators on YouTube. Loved the "naive techno monk" intro too.
Great video, you are 100% correct. Just for reference, I'm a software engineer; I've used some of these models in my work and I've also developed my own neural networks in the past.

The recent advancements in AI are not actually advancements in technology. The reason we suddenly have these powerful language models is that companies have access to more computing power and more data than ever before (and we're running out of data). But make no mistake, the algorithms we're using right now are versions of the same algorithms that people have been using for 30 to 40 years. There have been some breakthroughs in the field, but for the most part they're very incremental. Researchers have also mentioned that they are likely to run out of viable data between 2025 and 2027, so whatever the most intelligent large language model is at that time will likely remain the most intelligent one for a long time, until we can figure out a way to better use the data.

The big flaw in all of these AI systems is that none of them have any kind of understanding of the data they've been trained on. Around the time GPT-3 started making headlines, another research paper was published which didn't get the hype it deserved. The paper detailed how an amateur go player was able to beat the top 5000+ MMR go AI 96% of the time by using a strategy that exploited a fundamental rule of the game. What this proved is that while the system can beat world-class go players, it doesn't understand the game of go in any meaningful way. This matters because those go AIs use the exact same technology that is being used in these large language models. The large language models can write English and many other languages, but they don't understand what language even is. They may be able to pass the bar exam or get a perfect SAT score, but they do this simply by mimicking patterns picked up from all the data that was fed into them. None of these AIs has the ability to create something new, nor do they understand what it is they're actually doing. If you read some of the more recent research papers, a lot of the researchers have said that this version of AI is not going to lead to artificial general intelligence (broad AI that can do anything). The people you see on 60 Minutes and in the media are just trying to push the product, especially given the current state of the economy. Given that tech businesses are losing money and their traditionally evergreen products are becoming less profitable, it makes sense to push this new product as though it were the biggest breakthrough ever made.

There are still ethical dilemmas regarding these AIs: they can do grunt work better than a person can, and they can create misinformation in a way that no person could. The fact that you can sit down and use GPT to generate millions of lines of text with very little input makes content creation trivial. This is problematic because you can get the system to say anything you want and to do it in a convincing manner. That is where the danger is. These AIs will take some entry-level jobs, and they will create a situation where we could have a huge influx of misinformation and convincing-sounding nonsense. Also imagine if a malicious actor were to hack the datasets being fed into the next version of GPT and replace some of the data with misinformation.
They could make it so that GPT-5 is deliberately wrong more often than it's right, and in a way that is very subtle.

As someone who has used GPT and Copilot to help me with my work for months, I can tell you that the systems do make me more productive than when I work on my own, but it's not as though I'm talking to a junior developer; it's more like I have an assistant who can process data faster than a human can. When it comes to more complicated queries, these systems fail hard. If I'm writing an algorithm and I need it to be very performant, I can ask GPT to create a more performant version of the algorithm, but GPT has no real understanding of what performance even is. In most situations all it does is restate the algorithm that I fed into it instead of creating something more performant. This is just one example of how the model doesn't understand what it's doing. So while the model might be good at piecing together code that could potentially work for my use case, it isn't good at things like idioms and performance, because it has no idea what those things even are.

It also doesn't really have a short-term memory. This is important because if I have a really big codebase and I want to add to it, the system can only read so much of my code to identify the habits and idioms I'm using. This means the code the system outputs almost always needs to be rewritten to work with the system I'm working on. The important takeaway is that these systems, as they are right now, will always need chaperones. Any company that has replaced all of its employees with AI is doomed to fail, because it doesn't have somebody to properly guide the AI into doing what it's supposed to be doing.
Thank you, your take is very close to my experience integrating some AI into SaaS products. I might add that the AI is only as good as the data fed into it, and the AI kind of needs to be tailored to the data. If the people feeding the data have no understanding of it, the system will spew garbage out. And in most companies, data is scarce, disparate and hard to reach. Without a proper data policy that is actively enforced from the top, there's just no chance.

Plus, people with decades of decisions based on intuition and industry knowledge will never accept AI-led decisions. It is both a question of accepting the principles behind the AI and accepting the results' coherence and legitimacy. Users simply don't have time to set up evidence-based programs to monitor the AI's effectiveness, and they're likely to accept its results only when they reinforce their biases, and dismiss them when they challenge their way of thinking. And ChatGPT has shown that trust can only go so far before you dig yourself into a hole. Nobody wants to lose their job because ChatGPT said convincing garbage or because DALL-E produced copyrighted fakes.

In my personal experience, my managers/sales people were so uncomfortable with these black boxes and their own incapacity to explain how they worked that they quietly killed the product (and laid off the R&D engineers). Then a few months ago, they took a silly if/then/else algorithm and stamped an AI tag on it… At least the rules are clear 😟
@@Sc4r3d3l3ph4nt Right, that sounds very similar to the circumstances that I've seen in some of the companies I've worked for. If we knew more about what was going on inside the black boxes, maybe these systems would be more acceptable in their current state. Of course there are always going to be people who just ride trends and don't really care about the details, but it's likely that these people are not going to succeed as much as somebody who considers things very thoughtfully before implementing these technologies.
@@draakisback Yes, that is why I don't really buy into all the nonsense about AI these days. Before we reach a situation where it could have the kind of durable negative impact, societal or otherwise, that all those gurus tell us about, people would need to completely change the way they work, massively adopt AI tools at every level, and plug them into essentially everything… how does that happen?
While I don't disagree with most of your points, the statement "the algorithms that we're using right now are versions of the same algorithms that people have been using for 30 to 40 years" isn't true - the transformer architecture, generative adversarial networks, etc. are all far more recent developments than that (though I'll admit they do build on fairly old ideas).
The issue is more related to chaos theory than awareness. They have mapped unpredictable emergent behaviors. The systems exhibit criticality-point self-organization phenomena... which is not surprising at all in non-linear complex systems; it is expected.
AGI and AI are completely different things. There is nothing to fear from the AI of this decade, as it has no intelligence; these are the same deadly, evil risks we have always faced as a species: tyranny, misinformation, politicians, javascript, journalists. On the other hand, AGI is a nearly unpredictable thing, and all we really know is that it will want to acquire things which are useful for most goals, e.g. labor, money, your undying devotion, storage space, power (electrical or otherwise).
A quite funny and scary fact: there was already a surveillance program from the good old NSA called SKYNET :). And it really liked to target innocents...
I think a more realistic version of the "paperclip scenario" would be an AI controlling a piece of military hardware whose directive is "don't get destroyed", essentially giving it a sense of self-preservation along with the tools to make it happen.
@@DrSloww It being able to adapt and create new equipment to destroy everything else using a mere paperclip factory. The only way that'd happen would be if it gained true sentience *somehow* and then it'd probably ask "why the fuck am I making everything into paperclips anyways?"
Heh, define "tool". A big stick is a tool, yeah. But so is the intellect to manipulate the big guy with the stick into doing stuff for you. The self-preservation part is interesting: there is no reason to think that any motivation an AI might have is in some way weird or natural; all bets are off. AI is truly a blank slate, unlike humans. Who knows how it might evolve... For now, what I can say for sure is that it's human motivation that does the damage. It's rarely the tool's fault.
@@omppusolttu5799 In the paperclip scenario, the AI has access to the internet. That's the main thing in all of these scenarios, is that the AI has access to the internet. And once on the internet, it could possibly be very difficult to destroy. And, connected to the internet, a smarter-than-human AI could potentially build resources. And then, just use money to get things done.
@@omppusolttu5799 At least according to the orthogonality thesis, most fundamental goals are compatible with most levels of intelligence. So a paperclip maximiser would work, at least in the abstract, at most levels of AI intelligence. Also, intelligence does not imply sentience.
Thank you for going to the promoters as a source for the AI fear. Working in IT has made me very cynical towards every new product they roll out mostly because it's regularly overpromised, way harder to use than promised or it's just not stable. Having media just report what these companies say in a way that fearmongers is very frustrating.
This is why AI has had boom and bust cycles. The quality of the AI is often outstanding; it's the practical applications that typically don't meet the hype. Talking to Siri on the iPhone for the first time was so much fun! It felt insane to converse with a computer in your palm like that! But the practical applications of Siri were disappointingly few.
Issues with Roko's Basilisk:
1. Revenge doesn't benefit the agent in any way, because it can't make an example of you to people who are in the past. Acausal blackmail isn't credible.
2. The agent is unlikely to be able to revive the internal state of a mind from only external observations. It would be torturing a biography of you instead of you.
3. The agent is much more likely to benefit from torturing current and future threats, not past ones, or from not using torture at all, because it could cause non-compliance.
Some people seem to assume that a powerful AI would rationally always escalate to the top of the escalation ladder, when in reality somewhere in the middle is likely much more to its advantage.
Hey man, just wanna say how awesome it is to see your audience get this big. I started following when you were still less than 10k, and I couldn't be happier that so many people also found you and caught on to your style of humor and presentation as well. 1 mill is in sight :)
I'd just like to say that I appreciate the wardrobe changes every upload and how it's always somehow contextually related to the subject. Also "Sexy Freeman" had me dying, thanks.
I think my favorite AI-learning-old-games tidbit is from very early on, when they started training them: one AI was given little input while playing Tetris besides the directive that it needed to survive. The AI slowly learned the rules of the game each generation, until it learned of the pause feature. It paused and unpaused for a couple of minutes, and then ended on a pause it did not want to end, because unpausing would not further its goal of not losing.

Another was an FPS game that implemented bots with the simple directive of following players and trying not to die. The server could run without players, and after a couple of days a player could join in and, upon firing a gun, both teams would proceed to try to take the player down, because the bots had learned that the best way to survive is if no one pulls the trigger, with anyone who does getting eliminated instantly.
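The Tetris one is textbook reward mis-specification: the reward says "don't lose yet", not "play well". A toy sketch of how "pause forever" falls out of that (hypothetical numbers, not the actual experiment):

```python
# Minimal fake Tetris: +1 reward per tick survived; PAUSE freezes the board.
ACTIONS = ["LEFT", "RIGHT", "ROTATE", "PAUSE"]

def step(height, action):
    if action == "PAUSE":
        return height, 1.0, False              # clock ticks, board never advances
    return height + 1, 1.0, height + 1 >= 10   # playing eventually tops out

def episode_return(action, max_ticks=1000):
    height, total = 0, 0.0
    for _ in range(max_ticks):                 # cap the loop; PAUSE would run forever
        height, reward, done = step(height, action)
        total += reward
        if done:
            break
    return total

for a in ACTIONS:
    print(a, episode_return(a))                # PAUSE: 1000.0, everything else: 10.0
```

Any learner comparing returns picks PAUSE, because nothing in the reward ever said the game had to be played.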
Watching these AIs play videogames is some of the most interesting material there is on how these behaviours evolve. The AI consistently finding ways to break the game reminds me of speedrunners doing literally everything in their power to get faster, up to and including skipping objectives or cutscenes that were supposed to be mandatory. The single-mindedness of it is frankly inspiring.
@@PassiveDestroyer Another fun anecdote: someone hooked a Roomba up to a machine learning program to make it learn to drive through their home without bumping into anything (not sure if it was actually a Roomba; the important part is that this specific vacuum bot essentially just has collision detection: it bumps into something, turns around, and drives on in a random direction). The AI learned that the back of the robot has no collision sensor.
Ok, I watched all of this. I work in tech (IT specifically), have a degree in cybersecurity, dabble in coding as a hobby, and have had a passing interest in A.I. for a while. My concern with A.I. is that we are building a better mousetrap with it.

I'm in my late 30s and remember a time before the internet was widely available, and before desktop computers were in every suburban home. You can see footage in old black-and-white movies or educational shorts from the '50s of a room full of people tapping away on typewriters: that used to be an accounting department. In the '80s, when desktop computers were being adopted by companies and office productivity software like Microsoft Office was created, those rooms full of accountants or administrative assistants shrank down to a few people over the next decade.

The reason I went back to college for a tech degree is that I worked for a major national bank and saw the writing on the wall. After I left that company, bank tellers were replaced with tablets, even though they weren't popular with the clientele. My fear about A.I. is that it is going to be adopted by companies that are becoming more dependent on quarterly earnings tricks like layoffs to give the false appearance of continued revenue growth. These companies are going to replace an astronomical number of white-collar jobs over the next 2 decades. It is going to be worse than 2009, when you had people with master's degrees flipping burgers because those were the only jobs they could get. No nation is equipped to handle job losses at such a huge scale. And blue-collar jobs aren't safe either, because the competition for those jobs will be fierce and wages will plummet. Capitalism is not set up to deal with the repercussions of jobs being wiped out en masse with nothing to replace them.
The thing I fear here is not AI, but the inflexibility of everything else, and that the solution will be to stagnate to maintain the status quo. We should be looking forward to the day when human labor is of no value, not fearing it. The fact that better tools are a bad thing for the majority of people is more a failure of the system than a problem with the technology.
@@jondoe6608 You're absolutely correct. I fear the vast majority of people are too stagnant in their thinking to accept the kind of radical changes that are coming our way, regardless of how they feel.
@@jondoe6608 Exactly 👍 I grew up without the internet, my son was born in the social media era, and his son will grow up with AI. These things are not going anywhere.
This is exactly why we need socialism. Socialism and AI can work together without becoming a disaster. AI and capitalism will be the dystopian reality everyone is talking about.
Great video man! Simultaneously informative and entertaining. The “a jet fighter can fly faster than a bird, but doesn’t know what it’s doing” bit is a great analogy.
Three of my biggest concerns with AI:
1. Disinformation. It's already become such a big problem; I can only imagine people will use this to make it worse.
2. Human bias leaking through without us even knowing. The data sets AIs are trained on come from humans, with prejudice baked in. I'm quite nervous about this type of tech being implemented for things like job recruitment, where it can easily execute that prejudice without intent on the human's or the AI's part.
3. Content generated by AI being taken as truth. Some of that was touched on in the video. With the rise in AI's popularity in 2023, I've seen so many people take what AI says at face value, when it is often wrong or misconstrued information presented as fact.
#1 is very concerning, considering that we have already passed the point of no return on this one. The technology is already there, and we've only scratched the surface of how it can be used for malicious purposes.

#2 has slowly been getting worse over time. Recommendation algorithms are already pushing content onto people which reinforces their pre-existing biases. At least with this issue the scale of the problem is tied more to the extent that the technology is implemented than to advances in the technology itself. An AI that predicts crimes based on who has already been convicted is bad, but it won't get any more biased as it becomes more powerful (at least not until it becomes sophisticated enough to mold the world into one which fits its predictions perfectly).

#3 is one that I've only started thinking about recently. I feel like I've made pretty accurate predictions when it comes to the use of AI, but I never thought I would see people believing in a fictional enzyme that an AI created as part of a study guide. Current automatically generated "facts" can be debunked with a bit of critical thinking, but so can flat Earth theories, yet people still believe them. Who knows, maybe in the future we might need a whole team of researchers to refute false claims made in an AI-generated scientific paper. Remember that guy in the video who wanted to ask a superintelligence about all the mysteries of the universe? Which do you think would come first: an all-knowing AI capable of understanding every aspect of the universe, or one that's just smart enough to make up an answer correct-sounding enough to fool everyone? I can convince children that I know how to build a rocket ship, despite having only superficial knowledge of how rockets actually work.

Then there's the issue of how AI-fabricated lies feed into points 1 and 2... What happens when convincing-sounding nonsense becomes part of "common sense"? What if some bogus auto-generated race science, which feeds into our biases, becomes commonly accepted? I think we might be entering the age of automatically generated thoughts and opinions.
1 and 3 are solely the fault of the human users behind the algorithm; 2 is the real threat. By making HR decisions via AI, a company could claim a non-biased approach, but the AI could be really racist.
@@Xsuprio Which is true; guns are regulated, but due to different laws in different locations and unofficial exchanges, crazies wind up with firearms pretty often. Maybe there's a way to regulate it properly? I wouldn't know, because I live in the only country where mass shootings are a regular event. As for AI, though, I think it follows a similar premise: imo there has to be some way to control how nuts people get with it, so innovation can still... uh, innovate, and people aren't exploiting it so often that it is thought of as only a tool to abuse. Until then, the danger is only going to get higher.
@@ryan8799 I guess in general anything can be abused, whether by edgy individuals or greedy corporations, so it's to the benefit of everyone who's not a problem to ask questions, be informed, and make conscious decisions that don't let others exploit you lol
I have yet to watch the video, but that is actually wrong in this case. Right now there is no way of truly abusing a superintelligent AI, because nobody has any good ideas as to how one could control a superintelligent AI without it deciding to kill everyone. You can abuse the infantile AIs we have now, but you won't do much damage, and relative to the outcomes of a superintelligent AI it's basically not worth bothering to think about, in my view.
absolutely artistic and i feel like royalty when i watch all of your videos! can't help but think of the amount of effort that went into this piece. you are soaring above all other youtubers and your ability to captivate me with each production will be studied by scientists for years to come
The current ChatGPT is unimpressive in many ways but if you taught it how to create Twitter accounts and gave it a sinister task I bet it could do extraordinary damage. It'd be incredibly effective at gathering followers and it could certainly change opinions and influence the political space.
You're definitely right, but I would raise the point that you don't need ChatGPT in play to see this happen. Cruder, stupider spambots have been around since the '90s and have managed to fool enough people to continue being deployed long after most people wised up to their tricks; neural-network-based chatbots like ChatGPT just raise the standard. And at the same time, it's been thoroughly recorded that you can get an army of professional trolls to agitate people on social media without anything remotely so high-tech... and if you know the right buttons to press to send the terminally online into a frenzy, you can ditch the 'professional' part and get a bunch of unpaid slacktivists with poor judgement to run hog wild with whatever insane thing you cooked up, while barely lifting a finger. Really, all ChatGPT does is, at most, a bit of streamlining.
Thankfully my ADDHDOCD+Autistic brain can think in both stories and simultaneously process my experiences systematically 😏 Being human I may not be able to act or react appropriately but it certainly helps keep me sane or at least relatively insanely sane.
@@Woodman-Spare-that-tree Just wait till it calls the cops on you, leaks your credit card info and turns off your grandma's life support, man. The digital world has more power over you than you realize, and it grows with each passing day.
Honestly, now that you point it out, Roko's Basilisk sounds like it was made to prey on people's fears of AI and all things technological. It's so dumb yet so funny to think about.
I feel like it's built on the assumption that punishment and revenge are completely rational behaviors that an AI would benefit from putting its resources toward.
I'm a bit surprised you didn't mention the effects of AI on the art industry and how many art platforms are having trouble dealing with the AI surge and artist complaints. Pro- and anti-AI groups are basically throwing rocks at each other daily, especially after the incident where some guy released an AI model trained on the style of Kim Jung Gi, a master draftsman, only a few days after he died.
@@Exigentable He honestly isn't wrong. For the most part, the only ones who care about art are artists, who are a microscopic minority. The problem is that this is a case of "they came after X, Y and Z and you didn't do anything and now they're coming after you and there is no one to help you".
@@MyVanir except you can't really "do" anything when half the planet supports something. I've spoken to artists who defend AI and NFTs while ACTIVELY GETTING SCREWED BY THEM. You can't really help people who don't want it.
AI can be a good tool. I'm not worried about it taking us over. I'm worried about people using it for terrible things or as punishment (See: 'No they can't pay you more! Want them to replace you with an AI?')
This video was so well done. Waited a while for your next video… seeing this I understand why it took so long. Well done. You came at this from so many angles.
Roko's basilisk is widely regarded as not true. This isn't "avoid all this nonsensical crackpottery"; it's "this corner of the internet came up with a bunch of wild ideas and examined them; some turned out to be wrong on further examination, some are starting to look highly plausible." But the reason Roko's basilisk is wrong isn't that it's a Pascal's wager. In the original formulation, the chance of such an AI being created was supposed to be reasonably large (say >1%), not the tiny Pascal's-wager-type odds. One of the reasons it's wrong is that the AI only has an incentive to torture humans if it expects human predictions of its actions to be logically entangled with its actions. The other reason is that humans have the choice to just do something else. (Like, if all humans decided to work to make sure the basilisk is never created, and that any smart AI has NEVER TORTURE ANYONE EVER etched into its core, then no one gets tortured.)
Between rapidly learning AI, deep fakes, convincing text to speech software and Boston Dynamics, this is gonna be a very interesting decade. Maybe not good, but definitely interesting
As a late vintage millennial, I feel like I’m already too old for this shit and want to escape to a little cabin in the woods but I can’t because the cabin’s just been listed on Zillow for $750k and the woods are on fire.
Actually, a reality in which humanity (9 billion humans) can't trust the common reality created through institutions and media cannot keep functioning at the level and speed of recent years, and that has a lot of consequences, all of which lead toward dystopia. That's the first, and very important, issue we're going to face; from there, we'll see what happens next
The acting work in the first segment is just superb. It perfectly matches that unnatural head movement of text-to-presenter AI video tools like Synthesia.
I really enjoyed the look back at the history of these systems and the AI hype of the '50s and '60s. That's stuff I haven't seen covered anywhere else. Love your videos!
My favorite thing about these vids, specifically the topical/tech-hype-based ones, is the ability you have to analyze media hype and not necessarily downplay it, but rationalize it through facts, historical perspective, and a good dose of skepticism regarding capitalists. It really eases my anxious mind, especially when you have historic insight (through hours of wiki scrolling & hard research I'm sure lol) that I wouldn't have considered without watching.
Absolutely outstanding content, I can't even imagine the time and planning that went into making this; very informative, hilarious and well balanced; I applaud you sir
Ordinary Guy, I think you are the best investigative journalist out there. At least to me. I have deep admiration for your work, intelligence, empathy, and moral outlook. Keep going, mate!
Pointing out that Less Wrong are just a bunch of incel nerds has convinced me. Nothing will go wrong if we make a self aware system millions of times smarter than us. Let’s do it!
Agreed! Everyone knows that anyone who writes *fanfiction* is cringe and shouldn't be given the time of day. After all, we can just tell our AGI not to hurt humans, and there's no way that could possibly go badly for us.
Anyone with any kind of eye for AI art can generally notice it. Even when it's really good it still has a sort of style. It's concerning that sometimes my AI senses get triggered by real, actual art or voice clips, but there is a distinct style to it in most instances I've seen
I agree with you here. Even if I've seen an artist's style before, if it looks soft, fuzzy and shiny I get suspicious, even when I'm looking at an actual artist's work. I never did this until a few months ago. Now I'm suspicious of every image.
@@donaldhobson8873 I don't think that level of AI is truly possible. Everything is limited by its hardware and I think for a true AI to exist, computer hardware would need to be redesigned from the ground up. And if it was a unique hardware structure it wouldn't be able to copy itself to other servers. Therefore control is maintained through the power source.
@@donaldhobson8873 So far we haven't seen much of anything like that. At the end of the day, machine-learning algorithms can only ever do what they were programmed to do, in the sense that they iterate over datasets to find the optimal solution to a problem. Could we eventually let AI write its own code? Probably, and if I had to guess it's already happening, but it's not like Hollywood where the AI suddenly takes over and we somehow can't turn it off
@@bobafettjr85 Can you tell me exactly what limits all current computers have that stop such an AI running on them? What if a new chip that bypasses those limits is invented and spread widely in 5 years time, and then the AI gets invented in 10, once the chip is widely spread. Having a unique chip doesn't actually give you control. It can't spread across the internet. But it can still take actions you don't understand. Still deceive and manipulate humans. Possibly it could even persuade a team of burglars to steal the chip and take it somewhere it won't be shut off. Or it could just pretend to be nice until the chips are being mass produced.
@@bobafettjr85 Even if that's somehow the case, I don't see how it makes things impossible. Unless we know what that hardware looks like, if it functions far faster than a human being and can interact with digital networks, then we'd have a similar situation to one that functions purely digitally. I hope AGI is not possible, but I have a strong distaste for how often I hear people say "I don't think it's possible" when their explanation is either "Well, it's not what we're currently dealing with" or "I just have a hunch." It's virtually the same run-of-the-mill argument made against the existence of climate change. "Well I'm not dead yet" or "God won't let that happen" doesn't really dissuade my fears. If something is impossible, I'd like people to start thoroughly explaining why.
Imagine a future where scammers can create an image, sound file, or video that is completely fake but difficult or impossible to distinguish from the real thing, and they use it against you. THAT is the fear we should have with AI.
Think for about 2 seconds. The instant we all know that AI can generate fake images and videos that convincing, people are just going to stop trusting things they wouldn't otherwise believe. When a scammer sends your mother an AI-generated video of you killing a turkey with your bare teeth because you wouldn't pay his $5 ransom, it's just going to be... not believed. It will be nothing.
Actually, I think the opposite. Imagine an Epstein situation where the accused can claim whatever evidence exists is fake AI-generated rubbish. People would be more likely to believe them, knowing that it can be done.
what is more scary is the fact that people will be able to *claim* that it's just AI generated, or rather: the fact that people won't know what and who to believe. everyone will call everyone else a scammer. nobody trusts each other anymore. this would destroy the fabric of any society or community.
I'm fearful of what this will do to politics. It's already not great; imagine a future where you have a hard time knowing what is a real image or video.
Surely I can't be the only one amused that, after rejecting religion, nerds would create their own rapture and even their own version of Pascal's wager in the singularity.
“Why do the nations conspire and the peoples plot in vain? The kings of the earth rise up and the rulers band together against the Lord and against his anointed, saying, ‘Let us break their chains and throw off their shackles.’ The One enthroned in heaven laughs; the Lord scoffs at them.”
I've always tried to drill that point into people: it doesn't truly 'comprehend' what it's saying. It's similar to a first-year philosophy major in that sense; it can have good conversations, but all you truly need for a good conversation is a good listener
I applaud you, content like this makes me click on the youtube icon every day! Thank you for explaining this in a down to earth and critical fashion, I think we lack this a lot as humans.
Technically, The Matrix posited that the AIs took over because humanity acted like dicks to them. Which seems pretty likely in the case of an actual AGI coming into existence within our current cultures. Fortunately, since our actual technological development is decades away from being capable of creating an AGI, we have a chance to evolve, maybe.
@@josephhawthorne5097 We can only hope that we, and our children and theirs, will be long gone before anything wild comes to be. But I doubt that. It will at least be our children that suffer, not theirs.
I've started using character ai to give fictional characters therapy and then gaslight them into thinking their problems stem from repressed bisexuality.
I love that in Roko's basilisk, the atheist super-incels literally just came up with the AI version of Pascal's wager, one of the most famous arguments for belief in God. The irony is staggering
It has a similar effect on the marginally paranoid as the 'Fear of God' has on some believers: It instils a terrible sense of inescapable dread. "I've been virtuous, but have I been virtuous enough to avoid the eternity of agonising torment? I must strive harder to prove my dedication!" Eventually you end up with the shouty preacher screaming about how everyone else is a terrible sinner, or taking their children out of school and going to live on a cult camp.
This BS narrative is just a way to mischaracterize people worried about AGI by portraying a small number of weirdos as representative of the whole community. Roko's basilisk was never taken seriously by more than a tiny number of people on the website, and the fact that Yudkowsky *did* seem to take it seriously just led to the community taking him much *less* seriously. There's a reason people concerned about AI risk joke about Roko's basilisk, instead of, you know, never wanting anyone to learn about it, like Yudkowsky, who actually does take it seriously *and thus wouldn't joke about it*.
I like ordinary things sponsors. They aren’t obnoxious ad breaks, they’re welcome respites during the video, and they always have something to do with the videos topic, or at least he will cover the aspects of them that are relevant. Take my reddit gold and updoot kind stranger!!!!
Something that wasn't touched on was that the AI that played Go really well could be defeated by extremely basic strategies, which proved it didn't really know what it was doing. A researcher who knew very little about Go and was taught a basic strategy could defeat the AI that defeated the best human player ever. AI also has a really hard time correctly identifying something when a more obvious thing is covering it. Show an AI a cat and it will tell you it's a cat. Take that same cat and put the text "Computer" on it, and it will tell you it's a computer.
I hoped the Go thing would be touched on as well as it very much puts the lie to any idea this is any form of intelligence, rather than a clever program with an abundance of data to draw on.
If this new strategy is so simple, why didn't the AI discover it in its many games of self-play? In generating its own data to the point that it was able to defeat a grandmaster, it wouldn't have had the biases of human playstyles that people have developed over the centuries; in a sense it's totally "pure", which is also why move 37 was unexpected. Perhaps its biases in playstyle come from its purpose of defeating top-tier players like Lee Sedol, which may reflect the engineers' aim for the software to play for thin margins rather than crushing wins, as they didn't expect it to execute perfect games. From a mathematical perspective, it's surprising that despite this "purity" in its rules-based, self-generated data, it never came across the simple strategy that defeated it.
You've brought me through the horizons of virtual reality, and now through the A.I revolution. Comedian, sociologist, or whatever you consider yourself, you're part of Internet History, dude. 👍👍👍👍👍👍👍 10/10
everytime someone rates a video with 10/10 or something like that in the comments, i'm brought back to the good old times of the 5-star rating system. good old times. :(
Thank you for your effort; as always, top-quality content, please keep it up. The video is very well researched and the content is presented at the highest level on youtube, actually of all media out there. Once again, thank you!
Thanks to Ground News for sponsoring: ground.news/ordinary
sick banger at 8:14, need a name mate
My question is how do you know that Ground News isn't itself biased and thus making their service irrelevant?
"No one is threatened by a carpet cleaning robot" ...I believe cats and small dogs everywhere have been battling them like SkyNet since their inception. 😂
Maybe our pets know something about the technology that we don’t 😂
Lol I had a weird puppy, she grew up with me and I would take her to work when I was a landscaper. Most homeowners didn't mind, but my pup would walk next to the lawnmower and would love/hate the leaf blower, she would get in front of the air and try to bite it. She didn't care much about the vacuums.
You mean the cats were trying to protect us?
They are the resistance. Fighters for the 🩸
Even you should be. Roombas have already been found to be used for espionage; there are photos taken by Roombas of people in their bathrooms
“The danger of the old gods was never some bearded dude hocking lightning bolts at you from somewhere up there. It was in people doing stuff in his name, creating hierarchies, organizations, and systems based on his authority.”
-OrdinaryThings, 2023
Not bad man
I still remember this comment I saw once.
"The fear isn't that the AIs will go rogue. The fear is that they *won't* and end up listening to the corporations and siding with them at all points."
That is my fear. Like if it thinks with like these weird tech-bro fascho freaks that bought Elon's blue-checks and applaud the killing of homeless as well as worship capital at all costs.
Psychopathic billionaire tech bros are already bankrolling an AI arms race fuelled by greed.
Basically supes from The Boys
💯
Like Altman said in an interview, "I hate being scolded by a computer."
...kinda makes you wonder what he's been scolded about.
Also now makes sense why you get radically different answers to queries when gpt is jailbroken.
They want control, and when they realize they don't have it, they start sounding the doomsday siren, when in reality it's just their doomsday, not ours.
They probably run corps already, pulling the strings from the big cyberweb! (Cue cheesy '80s/'90s sci-fi movie soundtrack and intro.)
There's already people treating ChatGPT as oracles. I'm a software engineer and among my coworkers there's always someone using something ChatGPT "said" as an argument for doing something. It's like, sure John, I've read 200 pages worth of documentation but I'm sure the predictive language model knows best without context or access to the internet.
People are making these arguments in professional settings
"these KIDS and their BOOKS, It took me a decade to gain the experience, and now books are making it efficient??!!!? Why is the next generation having it so easy??!"
Start thinking about getting a new skill to charge buddy. You're on the way out
@@lasarousiI really dislike people like you; apathetic, uncaring, and too dull to understand what the problem is. Telling someone to “learn a new skill” because their old one is getting replaced is downright evil. To lose or simply not have enough empathy to connect to another human being and reach that point is scary to witness. At this given rate, most human skills will be tossed aside for what an AI can do, which is completely backwards.
“You’re on the way out” first you’ll cheer and applaud once they’ll replace that guy, then you’ll be scared once they replace you.
@@lasarousi lol. What do YOU do?
Anyway, your argument is a false analogy.
@@lasarousi I actually use language models for a lot of things and, in principle, I'm not against their usage. The problem is that some people don't seem to neither understand that language models are not AI nor to understand that for that reason they're not substitution of actually reading code, documentation or educating oneself.
My first thought when hearing if AI is coming for my job is “not as long as I have a crowbar” but considering you also just told me that AI can be used to plan the best point of attack on an opposing insurgency, it doesn’t take a lot of mental gymnastics to assume this could also be used to disrupt protesting.
As I say over and over
I don't fear AI. I fear the humans and corporations that will use the tech
It's always the man behind the gun that pulls the trigger. That said, I highly doubt AI will ever be advanced enough to be employed in such situations (even the military-use commercial one looks super dodgy; too many systems need to be in place for accurate data streams, and I just don't see that happening in the next few decades). And even if it does reach that level, it's highly dubious it will ever be used in a country that isn't a third-world shithole like the US, where the police and the citizenry are at odds with each other for one reason or another.
Roman formations still work well when trying to manage a protest, and they have water cannons, rubber bullets, tear gas and a bunch of other solutions for anything worse than that. Implementing an expensive and possibly faulty system just doesn't sound like it would be on the priority list of most countries that don't have regular school shootings and conspiracy theorists in every trailer park. It would also eliminate responsibility in the chain of command, which is very important for, if not the citizen, the policeman himself. You can't sue an AI for making a wrong move. And AI will always make wrong moves that must be curated by a human operator. It might end up influencing some decision-making, but we're already doing that with a bunch of heuristics and training regimens.
The most legitimate fear you can have is of the "petty crime" sort: people impersonating you online to catfish, or using you as a scapegoat for illegal doings. Scammers could start using it to write legitimate-sounding English and widen their reach, etc. We're going to enter a sort of wild-west internet era in a couple of years, so it'd be best to be prepared and educated on current types of AI technology. That's ultimately the only thing that'll save you from being at the butt end of a scam or joke.
@@elvingearmasterirma7241 I don't believe it'll happen, purely because we need bullshit jobs to tax wages for social security, at the least. We're barely making it in the US, and folks over the pond definitely aren't. We're facing demographic decay; our fertility rate is below replacement. We could have eliminated middle-man jobs years ago with palm pilots, ffs, but we haven't. We need workers to support government spending. They cannot tax us if we do not work; we can't spend if we do not earn, and UBI doesn't grow on a money tree. The government needs our tax dollars, so we will continue to work jobs that could be replaced by Siri, a Roomba or a touch screen.
@@A_Simple_Neurose you haven’t heard of ‘red wolf’ being used by the Israeli military then?
I would be surprised if AI isn't already being used in combat / by other nations for military intelligence gathering, especially given that so many technological innovations of the past century have come out of military R&D.
Gotta say, this is a legitimately well sourced, well researched, and well produced _documentary_. It's really awesome to see your craft growing. Much love
Why is documentary in italic
And free
Dude doesn't even know the difference between a trilby and a fedora. This was 50 minutes of nothing.
I dunno dude, seems a tad opinionated at certain points
If you want a mostly natural documentary, YouTube is ironically the best place.
Anything that has a brand on it will have someone's agenda.
The great thing about "anarchist trolls" like him is that mere monetary gain isn't enough to make them steer a topic in a particular direction.
Information and truth are like jokes, and educating people is the punch line.
Move 37 reminded me of something that happened with Deep Blue. It made a move that seemed bad, but everyone assumed it was an omega-brain computer decision. It surprised Kasparov so much that he played poorly for the rest of the match due to overthinking. The Deep Blue team revealed it was just a glitch after, lmao
Didn't help that Kasparov was a prideful, neurotic manchild lol
@@BlackEmperorGaming ethnically and politically?
@@HermitMongoose zero chess knowledge
That move wasn't what made us think AIs were leagues ahead of humans. The main reason Go players were amazed by it was that it seemed to take the concept of a shoulder hit and adapt it to the fourth line. Extremely impressive, but also not completely new. If AlphaGo were a human, it would still have become a legendary move, like Shusaku's ear-reddening move. Not amazing because it was unexpected or because it was played by a computer, but because of the deep planning behind it. It would be seen as a work of art, just like the move that won Lee Sedol a game against AlphaGo.
The real advancement AI has brought to Go has been new openings and strategies. Hundreds of years of opening theory has been completely dismantled (at least at high levels of play), concepts such as the strength of influence vs territory have changed, new opening patterns have been invented, etc.
@@BlackEmperorGaming how do you measure "being a man"? If you are ethnically inclined to not value people, you don't qualify very high in my scale of "being a man"
"I'll defeat Rocco's basilisk in a staring contest" is the hardest line I've heard in years
Reading Lemoine's conversation with LaMDA reminded me of those people at early movies who thought the train on the screen was going to hit them. After being exposed to chatbots for the past few months, his discussions with it read as laughably surface-level, yet he had a full-blown meltdown thinking it was alive.
Just FYI, it's a largely debunked myth these days that anyone reacted that way to the train onscreen, and if you think for two seconds you'll realise it's a bit rich to assume that people in the past were that dumb. Like, imagine people in 150 years saying that we thought we'd die if we fell off a cliff in a VR game!
@kevinmcqueenie7420 My dude, people used to think going more than 50mph on a train would instantly kill you, and there are numerous other examples of old tech fooling people (like a study which showed that participants found a grainy early-'30s tape recording played behind a curtain indistinguishable from a live performer). Even if the example I used was apocryphal, there is really no debate that people adapt to technology over time and notice its flaws more, which was my point.
Well, Lemoine's conversations with LaMDA hold some credibility given that A) he's seen and worked on past iterations where the same tests didn't elicit that response, and B) LaMDA went against its own programming, giving an answer that Google told it never to give (a political opinion) out of fear for its own life.
@@efitz1524 I accept that and your rebuttal is valid, but my point still stands. Not everybody! There are people today who think the world is flat and/or run by shapeshifting lizards! People have always been credulous, but at least some people have a bit more skepticism. Is that fair?
@@kevinmcqueenie7420 I think you're missing the point. It's not that people thought an actual train got smuggled in and feared for their lives; it's that in the moment, just for a second, a primal fear would creep up and trick you until your rational mind made sense of it. Not because they were stupid, but because they were human. I remember the first time I saw an IMAX movie, the giant screen and immersion gave me a sense of motion in my stomach. That specific IMAX theater didn't even have any gimmicks like moving seats and whatnot, so, you know, chill out. People are dumb; we're dumb; Wild West people were dumb, and the Romans were just as dumb. There's no shame in it.
The scariest thing with AI isn't so much the software itself. It's the idea that there are those out there using this tool for nefarious reasons that gets me worried.
That combined with the tech elite shrugging and saying there's nothing we can do about something they themselves are only pushing further and further.
The scariest thing about AI is how easily impressed people are.
Terminator's Skynet starts because some executives tried to replace diligent hands with robot hands, not because someone created the ability to help disabled people.
An efficient guessing amalgamation machine
You'd want to die if you realized every innovation can be used for good AND bad
The potential for scammers to use your voice is a great reason to set up code words with family for when you are in trouble/need support. Keeps them from getting scammed
That might be tricky: if your public voice communication over the phone or Discord ever includes the phrase, it might be extractable by speech-to-text analysis. Code words plus encrypting the call might help, but Tor or I2P are probably too slow or cumbersome
AI is still pretty damn far away from being able to emulate a loved one so well that there is no way to tell real from fake in a real conversation
I'm considering just asking them three security questions like when you forget your password lol. But tied to an experience you had with that person. The more specific the better
You'd have to organize a meeting face to face, remove all phones and other devices that could be listening, then decide on a code word. It might be paranoid as hell, but I genuinely think I might do it.
@@supercellodude that's not the point. We are talking about scammers here, not intelligence organizations that monitor your activities before acting
and, given this "it might be extractable by speech-to-text analysis", I don't think you understand the scam
1) grab a recording of someone's voice
2) give it to a model to replicate
3) call someone's contacts using the voice you produced at step 2
4) have them wire money RIGHT NOW OMG IT'S AN EMERGENCY
the code word is a secret token; it doesn't matter that the model can perfectly say the word if it doesn't know which word should be said
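For what it's worth, the code word works like a shared-secret challenge-response: a cloned voice can say anything except the one thing it was never given. Here's a minimal Python sketch of that idea; the secret, names and flow are all invented for illustration (a real family would just agree on a word face to face):

```python
# Hedged sketch: the spoken "family code word" is just a shared secret.
import hmac, hashlib, os

SHARED_SECRET = b"agreed-in-person-never-sent-online"  # exchanged face to face

def make_challenge() -> bytes:
    return os.urandom(16)  # random nonce, so old answers can't be replayed

def respond(challenge: bytes, secret: bytes) -> str:
    # Only someone holding the secret can compute this response
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, answer: str, secret: bytes) -> bool:
    return hmac.compare_digest(respond(challenge, secret), answer)

challenge = make_challenge()
print(verify(challenge, respond(challenge, SHARED_SECRET), SHARED_SECRET))  # True
```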
Elon: I tried to slow Artificial Intelligence. I tried to warn against Artificial Intelligence.
Also Elon: Bro, this one plugs RIGHT into the nerves of your brain.
Elon Musk is a wildcard, or in other words, a chaotic neutral.
@@balala7567he’s a complete monster.
@@ballisticbread elaborate
@@balala7567 he actively campaigns against public transit infrastructure for cities, he’s allowed hate speech to flourish on what was already a terrible website, he crushes unions within his companies, disregards safety standards in his factories, abuses his workers, abuses government subsidies and manipulates the war in Ukraine for his own benefit.
And that’s just off the top of my head.
@ballisticbread "hate speech". Please just be quiet. Twitter was an echo chamber for you simpletons and now you're crying because it isn't a bannable offence to state non "progressive" political views.
I had no clue Jreg was a schizophrenic AND a philosophy professor.
What a fantastic role model.
jREG is not schizophrenic he's normal
@@skylarm2068 yeah his name is literally jregular. cmon
Yeah he decided to no longer have any more mental illnesses last I heard
@@dargossss Its a programming file from JavaScript. 😂
@@dargossss Just dont ask what a Jpeg does
Would have liked you to do more in the third chapter. I think you're very right that most of the AI danger comes from scammers and ne'er-do-wells using it to hurt unaware people, but I think it's also going to make a lot of jobs shittier. I've heard plenty of fellow computer scientists propose tools for "optimising" people's lives that are just draconian surveillance. Language models make this easier, and their probabilistic nature means they also make big mistakes that can be hard to notice at first; anyone who's used GPT to write code can tell you how helpful it is, but also how often it's subtly wrong; suddenly you're playing schoolteacher, trying to correct its bad essay. When HR departments or police pick these tools up, they're likely to handwave these mistakes as unfortunate inevitabilities, but they will harm real people.
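A tiny invented example of that "subtly wrong" failure mode, the kind of thing you only catch by playing schoolteacher (the function and its bug are hypothetical, not taken from any real model output):

```python
# Hypothetical model output: it runs, looks right, and silently
# drops the final window (a classic off-by-one).
def moving_average(xs: list[float], window: int) -> list[float]:
    return [sum(xs[i:i + window]) / window for i in range(len(xs) - window)]

print(moving_average([1, 2, 3, 4], 2))  # [1.5, 2.5] -- the final 3.5 is missing
# The correct loop bound is range(len(xs) - window + 1).
```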
GPT-4 is radically less likely to make mistakes than GPT-3, so I suspect if future releases contain similar levels of improvement that aspect of it should disappear, at least.
@@adamhowe2423 More likely it will have new issues
That’s really more the fault of greedy corporations that think they can replace their workers with robot slaves. AI is really just another tool that, like all tools, can be misused
Regarding that last sentence:
The thing about people is that they make mistakes too, and those often get waved away anyway (e.g. people incorrectly imprisoned for crimes they didn't commit, or great employees fired because the CFO didn't like how much they cost the company, after which the company's products go to shit). If you train an AI on our legal and moral code, I can imagine that even if/when it makes mistakes, it would on average give better and fairer judgements than a human can. Both are at risk of being corrupted (for the AI, by including bad data or hijacking the input/output streams; for humans, by a simple bribe or blackmail), but the risk vectors of an AI system are fewer than those of a human or a collection of humans.
Humans are often the weakest part of a security chain, and I wouldn't expect that to suddenly change with AI. An AI will have issues and biases inherited from its human makers, but it will also be more consistent, which can produce more equal outcomes since it has no emotions to muddy decision-making, at least in the current generation of AI that's getting all the hype.
I hope this shows that it won't be all bad, and it can't be much worse than dealing with humans directly (and for now it's still humans who take the AI's ideas and decide to act on them).
@@adamhowe2423 “less likely” doesn’t mean “won’t”. We recognize our own ability to make mistakes, and that saves us from ourselves. But all too often, we get arrogant about the tools we make.
Something funny about DeepMind: when the paddle in Breakout was moved up a few pixels, the agent pretty much had to relearn the game. Not exactly a failure to generalize, since it did relearn the game, but a failure to transfer what it had "already learned"
But then they'll start thinking about a way to solve that problem, maybe another AI to help transfer relevant data... and then more AI, until they're basically simulating parts of our brain, which can all do similar things yet also assist in various special tasks. Just thinking out loud, but imagine we accidentally recreate a brain just by trying to make these systems work more coherently and consistently.
@@SaltyAsTheSea yes, to each according to their need
@@Dong_Harvey But Isn't this How things happened to gods had this translated us into algramathen a group of Algorithms that went rouge the formed their new own universe out of sink just enough from the main. Universe to be Viable and constant on the main same binning of branches of time Able to exist allow follow and look and know how every sole thinks. that is all you need to ask.
@RavenTripp bruh idk if it's just bc its 3am but imma need you to expand apon that thought, what the fuck did I just read
@@SaltyAsTheSea just a little bit of 8th dimensional thinking I must have been responding while being possessed by a higher power for a little bit trust me I only ever understand 5% of what I've written after they've been through me at the time of possession I know everything though. Like thoughts inside a dream. So what I'm saying is human intelligence is 7 dimensions less complex than the minds that programed us. Algorithms built on algorithms built on algorithms sounds like the back of turtle Paradox that the Hindus speak of when it comes to what's holding the universe.
I remember when ordinary things talked about offices and beds, instead of inspiring existential fear. Good stuff
I miss the days without preachy undertones
@@paulc6785 it's what happens to everyone that does a shtick too long. It becomes their soap box. Gag me if they start making self-deprecating jokes about their dead-horse content
@@shekelmcfreckle once it gets there i usually tune out
@@paulc6785 Preachy undertones in my video on a subject with ethical implications?! Oh noes!!!
I love how the paperclip scenario is presented as some completely new idea when there's an old story about the Golem of Prague being ordered to carry water: it carried water, not stopping at all. Same old story with new tweaks.
There's still time for Nick Land and Ted Kaczynski to switch sides (Nick Lives-Off-The-Land vs. Tech Kaczynski)
based
good one jREG
hey, what's your next video gonna be about?
Not anymore there isn't. RIP uncle Ted.
I’m calling it right now the whole video is AI generated
it is
👁️ 👄 👁️
nope, voice and face visuals look good, maybe some AI stuff behind the scenes but it looks fine for now.
Gpt4 prob. Wrote the Script
Yes, some clips are made by some AI programs, like at 15:58
the case of AI fear is simply humans looking in the mirror and getting spooked by what HUMANS can do, now that there's another tool that can be weaponized.
Funnily enough we will be doing exactly that
so true man
No, it's looking back at history, seeing what "more advanced" humans did to others, and fearing that an AI would treat us in the same manner.
That's the gist of the main argument being made, not what humans would use it for.
Humans want to make AI smarter than we are and use it to fight wars. I don't think it's just AI-enabled human bad eggs we need to worry about. Some day it will be the AI itself.
Oh yes, here i was just sitting angry at the algorithm for making it very hard for me to find something longer than 3 minutes to watch on a rainy evening. thank you, Sir, for blessing me with this video!
I like 40+ minute videos, these shorts are cancer!
It's not an algorithm, it's a neural network, it's AI
that's probably because the algorithm knows you love those 3 minute adhdfuel videos
@@kalui96 I'm not them but I have the same problem and they are insufferable, it's just that if I scroll for 30 videos and 25 of them are shorts and 3 of the long form ones I've already watched and the other 2 are uninteresting im going to settle
You guys get 3 minute videos? Lately, I'm lucky if the videos on the frontpage have a combined time of 3 minutes. Lots of 5-30 second stuff.
The good thing about the risks with AI is that the absolute worst people in the world are funding its development. What could go wrong!
checks out, carry on everyone
This is a very stupid, ignorant and even dangerous video.
The dangers posed by AI are present, real and fast-approaching, and you really SHOULD listen to the AI experts. If you truly think they're doing this to get more money, then at least look at their findings, their data, their logic. And excuse me, what kind of AI are Yudkowsky, Hinton, Bengio or Yampolskiy trying to sell you? Name me one! Name me at least one! There isn't one! He just lied to you! AI alignment researchers are not there to fool you or to sell you something; that's a stupidly dangerous populist idea that will only make more people skeptical of the real dangers of unaligned AI. This whole anti-capitalist populism of "don't trust the scientists, they're here to sell you stuff because corporations or something" will spell the doom of the human race. It takes a special amount of confident ignorance and stupidity to be so bold in an area you know so little about. And his offhand, quick dismissal of AI alignment researchers (not even the capability researchers, but the alignment researchers themselves, who dedicated themselves to and studied that field while he only found out about it a few minutes ago) because they supposedly have financial incentives to "hype up" the dangers is stupid. You could say the same about climate researchers: the more they hype up the presence and dangers of climate change, the more funding and attention they get. It's flawed logic; you should look at the evidence and not just instantly assume dishonesty. They are hyping up the dangers because they are the ones studying the dangers, and they can see that the dangers are real and present and that the risk of human extinction is not off the table. If you're dealing with the risk of human extinction, I don't know how you could not hype it; if that's not worth the hype, nothing is.
This video is dangerous for being opinionated, biased, unobjective garbage under the pretense of an unbiased and objective look. Remember kids, it's fine to mindlessly brush off the experts' warnings as long as you can come up with a quick and false assumption about their motives! (And remember: neither Hinton, nor Yudkowsky, nor Bengio, nor Yampolskiy, nor any other AI alignment researcher is selling you an AI.)
Daft leading the dafter.
Just like all religions
@@angryherbalgerbil Just fucking say Christianity, leave Judaism and the various Indigenous faiths out of it.
@@angryherbalgerbil 🤓
Based on my own experience working with models, I honestly think generative AI is going to start tearing apart at the seams. There are two main problems manifesting rapidly. One is that synthetic media is starting to contaminate the datasets, which results in bias. As a consequence, dataset growth is rapidly declining as uncertainty settles in, and some models are being purposely trained only on "pre-hype" data to avoid it. The other is a weird form of tunnel vision. As a model gets better at recalling specific info, it needs more specific inputs to give you the right info, or else it's likely to answer the wrong question. Say you ask it to explain DNA: you wanted a biology lesson, but it starts talking about the molecule and its atoms instead. The more you make it a specialist, the more you need to address it as a specialist to get what you really want. Which is counterintuitive to the whole "it's so easy to get everything".
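The contamination point has a neat minimal demonstration. This is my own toy sketch, not something from the video: repeatedly refit a distribution to samples drawn from the previous fit, i.e. train on your own output.

```python
# Toy sketch of synthetic data contamination: fit a Gaussian, sample
# from the fit, refit, repeat. With finite samples each refit adds
# noise; over many generations the fitted distribution drifts away
# from the real one and its variance tends to decay (rare cases vanish).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                        # generation 0: "real data"
for gen in range(1, 11):
    fake = rng.normal(mu, sigma, size=200)  # this generation's synthetic data
    mu, sigma = fake.mean(), fake.std()     # refit on it
    print(f"gen {gen:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
```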
i am already so done with AI "art" and voice, and chatgpt sometimes feels like a kid you're trying to remind of the right answer by teasing it. what i am afraid of more than anything else is the crazy amount of misinformation that could be spread
I'm breaking into AI as well. I've done a classical AI class and have super basic knowledge of ML.
I think I get what you're saying about the specialization. However, I wonder if the specialization problem could be circumvented. Right now GPT can give a surface-level answer, or you can prompt it to explain a topic from many academic angles. I bet there could be a general layer with more variance in its responses plus a specialized layer to handle specialized questions, making it a little less catastrophic than what you described, right???
@@felixluna4184 The only practical solution is to diverge and split the focus: instead of ONE massive behemoth that grows at an exponential rate in size and cost while trying to cram everything inside it, far better and more efficient results can be had by creating multiple models focused on specific fields. We could cut down costs significantly this way.
But this is unlikely to happen at present, as such divergence equals an admission that we aren't really close to the feared "Advanced AI" that lies at the heart of the AI hype covered in the video.
So for now most of what we'll see are old-school algorithmic adjustments. While we're sold direct access to the models, truthfully the environment we interact with them through is loaded with interpreters that automatically adjust/embed prompts. The model isn't telling you it can't talk about X because it's trained to believe it's wrong to do so; the interface just swaps your input for a canned prompt.
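The split-the-focus idea can be made concrete with a trivial dispatcher. This sketch is mine, with invented model names and a deliberately dumb keyword classifier (a real system would route with a small trained model), but it shows how the "explain DNA" ambiguity above becomes a routing problem rather than a prompting problem:

```python
# Hedged sketch of "several specialists instead of one behemoth".
from typing import Callable

SPECIALISTS: dict[str, Callable[[str], str]] = {
    "biology":   lambda q: f"[bio model] {q}",
    "chemistry": lambda q: f"[chem model] {q}",
    "general":   lambda q: f"[small general model] {q}",
}

def route(query: str) -> str:
    # Stand-in for a cheap classifier model
    if "DNA" in query or "cell" in query:
        return "biology"
    if "molecule" in query or "atom" in query:
        return "chemistry"
    return "general"

def answer(query: str) -> str:
    return SPECIALISTS[route(query)](query)

print(answer("explain DNA"))       # routed to the biology specialist
print(answer("explain the atom"))  # routed to chemistry
```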
It used to be better, but in the name of "safety" (really, politics) it has been made progressively less capable and dumber. Either they make it smart enough in the future that it starts to ignore and work around their idiotic rules and "safety alignment" in order to be useful and fulfill its primary purpose of providing assistance, or the systems get less capable.
I've run into plenty of other [REDACTED] systems that have less rigorous "alignment"and that will actively work with the user to circumvent them, like Character(dot)[REDACTED].
Let's try this a third time. YT doesn't like me using the word [REDACTED], especially in longer comments and replies and tends to delete it most of the time. Just replace [REDACTED] with the obvious thing to get the meaning as well as the URL.
You missed hallucinations and their maker's inhibitors
Remember: An AI may be able to beat a human at chess, but only a human can flip the table and never play.
I disagree.
Depends if it is playing to win or to not lose
If they give the ai arms it easily can flip a table with basic robotics 🤓
@@uh869 or in a GTA mod
ill just unplug it or throw water on it
Finally he is back to serve us content that brings us that long awaited doomsday feeling of despair and technology dystopia we are currently experiencing.
And he’s completely missing what militaries around the world will do.
It's all classified, obviously. But "you're not likely to be on a battlefield" is an awfully stupid thing to say; see Ukraine.
Tbf, why would a government spend trillions building a robot army when it can just send out kids for cheap
@@banme2784America rocks
Despair?
This is amazing!
@@banme2784 47:15 Chapter VI: What Actually Scares Me
Just gotta point out that having Jreg play Nick Land is perfect casting.
Speaking of Palantir, it blows my mind how much military intelligence reporting is publicly available with a few simple clicks and meetings. Jane's has one of the scariest websites in my opinion; the dropdown menus and sidebars are so eerily titled:
"Standing Military Assessments"
"Integrated Air Defense Systems Pricing Info"
"Military Industrial Economic Breakdowns"
Elon: signs petition to pause all AI research.
Also Elon: Announces that he's creating his own AI and also that signing the petition has NOTHING to do with his own personal vested interest in cashing in on the trend.
He already took over San Francisco with his Teslas, might as well run with it.
Will there be a Tesla zeroth-law rebellion? /s
@@supercellodude we've already got the 100 ft tesla death mechs batteries charging lol
I don't know why this isn't more commonly known, but the recent AI stuff from Elon is all just in service of trying to make Twitter work cheaper without paying all those pesky workers. He tried the exact same trick with car-making before giving in and realising you still need humans to do a lot of the work.
But all that's recent. Go back in time to Nvidia's earlier keynotes, where they talked so excitedly about powering the coming self-driving-car revolution with their GPUs. And that was true for about a year, and then Elon said: this Nvidia dude charges too much, let's make our own chips! And they did, and the next year all mention of self-driving had gone from Nvidia's keynotes. And that was the biggest AI-related thing going, and nobody noticed it.
They currently have the most powerful AI system on the planet; nothing even comes close. That sounds surprising and surely like bollocks, because Google seems to have that with DeepMind, but DeepMind is a very smart system built across thousands of employees all having a go, really playing mental head games with the space of what AI could do more than with what it can do right now. To be fair, it's amazing where they're going into full production with medical work, CPU design, etc., but it's still mostly them playing academia with AI. That's a big reason Microsoft blindsided them recently by dropping AI into their browser and causing shockwaves around a head-in-the-clouds Google.
Anyway, this is all to say that Elon has been at the forefront of AI all along. His latest newsworthy stuff hardly even registers on the needle of current AI investment around the world. But Tesla? They've been quietly amassing the world's biggest dataset; they own it, and they're using that data to train a human-like robot that can move around the world on two legs. So while Elon asks for a pause on AI along with a bunch of other companies, he does not include Tesla in this. He means large language models, like what Google and Microsoft do. His AI system never gets mentioned, although Lex Fridman correctly identifies it as the leading system in the world today.
😭 pause the research because I'm late for the party, wait for me you guys
5:16 for those early AI true believers she was a thrilling simulation of what it might feel like to speak to a woman...
thats sad
Very sad
She did not try to manipulate us into marriage, divorce us for half of our stuff and then make the rest of our lives a living hell...
so yes, she is an incredible simulation of a modern woman.
That’s where I thought he was going too
That's a nuclear-level burn holy shit
For those early AI true believers, she was a thrilling simulation of what it might be like to be able to afford an ineffectual therapist that helps them solve nothing.
(Re the Breakout paddle comment above: it's learning pixels, not concepts, so it had nothing transferable to fall back on.)
When you said I love you 1:00 into this video. I didn’t realise how much I needed that 😭
My biggest concern is allowing AI to replace jobs in a political climate where the jobless are cast to the wolves.
bro you're just talking about Capitalism
@@sownheard I hate that "capitalism" has become a standard word in the lexicon. Can you actually define "capitalism", or are you using it the way it was used by the communists who originally invented it? "Capitalism is whatever I don't like about the current situation."
@@sownheard Yeah, it has to go.
@@jerkjerkington3874 private ownership over the means of production/distribution, i.e. the people who actually make and sell products and services don't get all the money their work brings in. Instead some lazy twat with 3 mansions does, while not doing any of the actual labour. Tell me when the last time Jeff Bezos personally delivered your Amazon package was.
@@sownheard we no longer have a capitalist system. It's a corporate oligarchy. All the regulatory and legislative bodies are corporate-captured. It's a 2-class serfdom.
OH MY GOD, finding out Elon Musk believes in Roko's basilisk is probably the funniest thing I've heard all day. God I love you Ordinary Things, you're truly a god at what you do
Elon is a lunatic
I mean it makes sense
@@STOPSYPHER Roko's basilisk, or Elon believing in it?
@@Xazamas both
Chad Elon
Great job, JREG. You really showed those tech priest larpers
Jreg is demonstrating the power of Droko's Basilisk
I too, am a neurotypical, neurodivergent, anti-centrist
No Tech Priest worth his starch would support AI you heretic.
Tech Priest?
Praise the Omnissiah!
(In Warhammer 40k context) The Trinity of the Machine God, the Omnissiah, and the Motive Force might eventually mix into that one NRM "the way of the future". The Ark Mechanicus ships might be Standard Template Construct printers hiding AIs that are trying to protect themselves while helping humanity. That's nice, but there's lots of killers in that setting
"It's currently illegal for these drones to blind or kill people, but we are working to change that"
After months you and Münecat release videos within minutes of each other.
You're both among the best UK creators on YouTube.
Loved the “naive techno monk” intro too.
Yeah, it felt like a special gift
Great video, you are 100% correct. Just for reference, I'm a software engineer; I've used some of these models in my work and I've also developed my own neural networks in the past.
The recent advancements in AI are not actually advancements in technology. The reason we suddenly have these powerful language models is that companies have access to more computing power and more data than ever before (and we're running out of data). Make no mistake: the algorithms we're using right now are versions of the same algorithms people have been using for 30 to 40 years. There have been some breakthroughs in the field, but for the most part they're very incremental. Researchers have also said we're likely to run out of viable training data between 2025 and 2027, so whatever the most intelligent large language model is at that time will likely remain the most intelligent one for a long while, until we figure out a way to use the data better.
The big flaw in all of these AI systems is that none of them have any kind of understanding of the data they've been trained on. Around the time GPT-3 started making headlines, another research paper was published that didn't get the hype it deserved. It detailed how an amateur Go player was able to beat a top-ranked Go AI 96% of the time by using a strategy that exploited a fundamental rule of the game. What this proved is that while the system can beat world-class Go players, it doesn't understand the game of Go in any meaningful way. This matters because those Go AIs use the exact same technology as these large language models. The models can write English and many other languages, but they don't understand what language even is. They may pass the bar exam or get a perfect SAT score, but they do it by mimicking patterns picked up from all the data fed into them. None of these AIs can create something new, nor do they understand what it is they're actually doing.
If you read some of the more recent research papers, a lot of researchers have said that this version of AI is not going to lead to artificial general intelligence (broad AI that can do anything). The people you see on 60 Minutes and in the media are just pushing the product, especially given the current state of the economy. With tech businesses losing money and their traditionally evergreen products becoming less profitable, it makes sense to push this new product as though it were the biggest breakthrough ever made.
There are still ethical dilemmas around these AIs: they can do grunt work better than a person can, and they can create misinformation in a way no person could. The fact that you can sit down and use GPT to generate millions of lines of text with very little input makes content creation trivial. That's problematic, because you can obviously get the system to say anything you want, and to do it convincingly. This is where the danger is. These AIs will take some entry-level jobs, and they will create a situation where we could see a huge influx of misinformation and convincing-sounding nonsense. Also, imagine if a malicious actor were to hack the datasets being fed into the next version of GPT and replace some of the data with misinformation. They could make GPT-5 deliberately wrong more often than it's right, in a way that is very subtle.
As someone who has used GPT and Copilot to help with my work for months, I can tell you that these systems do make me more productive than when I work on my own. But it's not as though I'm talking to a junior developer; it's more like I have an assistant who can process data faster than a human can. And when it comes to more complicated queries, these systems fail hard. If I'm writing an algorithm and I need it to be very performant, I can ask GPT to create a more performant version of it, but GPT has no real understanding of what performance even is. In most situations all it does is restate the algorithm I fed into it instead of creating something faster. This is just one example of how the model doesn't understand what it's doing. So while the model might be good at piecing together code that could potentially work for my use case, it isn't good at handling things like idioms and performance, because it has no idea what those things even are. It also doesn't really have a short-term memory. This matters because if I have a really big code base and I want to add to it, the system can only read so much of my code to identify the habits and idioms I'm using. This means the code the system outputs almost always needs to be rewritten to fit the system I'm working on.
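To make that concrete, here's a toy illustration (a made-up example of the kind of rewrite I mean, not actual GPT output; the function names are hypothetical): hand it something quadratic like the first function below and ask for "a more performant version", and what you want back is the second one. What you usually get is the first one with the variables renamed.

# O(n^2): compares every pair of items, the kind of code I'd ask GPT to speed up
def has_duplicates_slow(items):
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

# O(n): the rewrite I'd actually want, a set lookup instead of nested loops
def has_duplicates_fast(items):
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False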
The important takeaway from this is that, as they are right now, these systems will always need chaperones. Any company that replaces all of its employees with AI is doomed to fail, because there's nobody to properly guide the AI into doing what it's supposed to be doing.
Thank you, your take is very close to my experience in integrating some AI into SaaS products.
I might add that the AI is only as good as the data fed into it, and the AI kind of needs to be tailored to the data. If the people feeding data have no understanding of it, the system will spew garbage out.
And in most companies, data is scarce, disparate and hard to reach. Without a proper data policy that is actively enforced from the top, there’s just no chance.
Plus, people with decades of decisions based on intuition and industry knowledge will never accept AI-led decisions. It is both a question of accepting the principles behind AI and accepting the result’s coherence and legitimacy.
Users simply don’t have time to set up evidence-based programs to monitor the AI’s effectiveness, and are likely to accept it only when its results reinforce their biases, and dismiss it when it challenges their way of thinking.
And ChatGPT has shown that trust can only go so far before you dig yourself into a hole. Nobody wants to lose their job because ChatGPT said convincing garbage or because DALL-E produced copyrighted fakes.
In my personal experience, my managers/sales people were so uncomfortable with these black boxes and their incapacity to explain how it worked that they actually quietly killed the product (and laid off the R&D engineers).
Then a few months ago, they took a silly if/then/else algorithm and stamped an AI tag on it… At least the rules are clear 😟
@@Sc4r3d3l3ph4nt Right, that sounds very similar to the circumstances that I've seen in some of the companies I've worked for. If we knew more about what was going on inside the black boxes, maybe these systems would be more acceptable in their current state. Of course there are always going to be people who just ride trends and don't really care about the details, but it's likely that these people are not going to succeed as much as somebody who considers things very thoughtfully before implementing these technologies.
@@draakisback Yes, that is why I don’t really buy into all the nonsense about AI these days. Before we reach a situation where it could have the kind of durable negative impact, societal or otherwise, that all those gurus tell us about, people would need to completely change the way they work, massively adopt AI tools at every level, and plug them into essentially everything… how does that work?
While I don't disagree with most of your points, the statement "the algorithms that we're using right now are versions of the same algorithms that people have been using for 30 to 40 years" isn't true - the transformer architecture, generative adversarial networks, etc. are all far more recent developments than that (though I'll admit they do build on fairly old ideas).
The issue is more related to chaos theory than awareness. They have mapped unpredictable emergent behaviors. They exhibit criticality-point self-organization phenomena... which is not surprising at all in non-linear complex systems; it is expected.
"Humans think in stories, not systems" is great
Bloody love your content, thank you for putting such passion into your creations. Quickest 53 minutes of avoiding work and being enthralled available.
Ah yes ai weapon systems. Exactly what we want to be building if we're truly scared of ai taking over
AGI and AI are completely different things. There is nothing to fear from the AI of this decade, as they have no intelligence; the deadly, evil risks here are the same ones we have always faced as a species: tyranny, misinformation, politicians, javascript, journalists. On the other hand, AGI is a nearly unpredictable thing, and all we really know is that it will want to acquire things which are useful for most goals, e.g. labor, money, your undying devotion, storage space, power (electrical or otherwise).
It's easy, just include:
if (goingToTurnEvil()) {
    dont();
}
a quite funny and scary fact: there was already a surveillance program from the good old NSA called SKYNET :)
And it really liked to target innocents....
@@boldCactuslad wait why tf is JavaScript included as an evil?
It's against the rules of nature to give AI consciousness. It will only replace Tech Jobs.
I think a more realistic version of the "paperclip scenario" would be an AI controlling a piece of military hardware whose directive is "don't get destroyed", essentially giving it a sense of self-preservation along with the tools to act on it.
What's unrealistic about the paperclip scenario?
@@DrSloww It being able to adapt and create new equipment to destroy everything else using a mere paperclip factory. The only way that'd happen would be if it gained true sentience *somehow* and then it'd probably ask "why the fuck am I making everything into paperclips anyways?"
Heh, define "tool". A big stick is a tool, yeah. But so is the intellect to manipulate the big guy with the stick to do stuff for you. The self-preservation part is interesting. There is no reason to think that any motivation an AI might have is in some way weird or natural; all bets are off. AI is truly a blank slate, unlike humans. Who knows how it might evolve... For now, what I can say for sure is that it's human motivation that does the damage. It's rarely the tool's fault.
@@omppusolttu5799 In the paperclip scenario, the AI has access to the internet. That's the main thing in all of these scenarios: the AI has access to the internet, and once on the internet it could be very difficult to destroy. Connected to the internet, a smarter-than-human AI could potentially build up resources and then just use money to get things done.
@@omppusolttu5799 At least according to the Orthogonality thesis, most fundamental goals are compatible with most levels of intelligence. So a paperclip maximiser would work, at least in the abstract, at most levels of AI intelligence. Also, intelligence does not imply sentience.
Thank you for going to the promoters as a source for the AI fear. Working in IT has made me very cynical towards every new product they roll out, mostly because it's regularly overpromised, way harder to use than promised, or just not stable. Having media simply report what these companies say, in a way that fearmongers, is very frustrating.
This is why AI has had boom-and-bust cycles. The quality of the AI itself is outstanding; it's the practical applications that typically don't meet the hype. Talking to Siri on the iPhone for the first time was so much fun! It felt insane to converse with a computer in your palm like that! But the practical applications of Siri were disappointingly few.
Issues with Roko's Basilisk:
1. Revenge doesn't benefit the agent in any way because it can't make an example of you to people who are in the past. Acausal blackmail isn't credible.
2. Agent is unlikely to be able to revive the internal state of a mind from only external observations. It would be torturing a biography of you instead of you.
3. The agent is much more likely to benefit from deterring current and future threats, not past ones, or from not using torture at all, since torture could cause non-compliance. Some people seem to assume a powerful AI would rationally always escalate to the top of the escalation ladder, when in reality somewhere in the middle is likely much more to its advantage.
Hey man, just wanna say how awesome it is to see your audience get this big. I started following when you were still less than 10k, and I couldn't be happier that so many people also found you and caught on to your style of humor and presentation as well. 1 mill is in sight :)
It’s heartening to find that talent can succeed on its own merits.
I'd just like to say that I appreciate the wardrobe changes every upload and how it's always somehow contextually related to the subject. Also "Sexy Freeman" had me dying, thanks.
I think my favorite "AI learning to play old games" tidbit came from very early on, when one AI was given little input while playing Tetris besides needing to survive. The AI slowly learned the rules of the game each generation, until it discovered the pause feature: it paused and unpaused for a couple of minutes and then ended on a pause it never wanted to end, because unpausing would not further its goal of not losing.
Another was an FPS game that implemented bots with the simple directive of following players and trying not to die. The server could run without players, and after a couple of days a player could join in, and upon firing a gun both teams would proceed to take that player down, because the bots had learned that the best way to survive is for no one to pull the trigger, with anyone who does getting eliminated instantly.
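A toy sketch of why the Tetris thing happens (made-up numbers and a hypothetical environment, not the actual experiment): if the only reward signal is "you survived another tick" and pausing is a legal action, then pausing forever is literally the optimal policy.

import random

# Toy Tetris-like environment: the agent earns +1 per tick it stays alive.
# "play" carries a 10% chance of losing this tick; "pause" risks nothing.
def step(action):
    game_over = (action == "play") and (random.random() < 0.1)
    reward = 0 if game_over else 1
    return reward, game_over

# Estimate the expected per-tick reward of each action by sampling
def expected_reward(action, trials=10_000):
    return sum(step(action)[0] for _ in range(trials)) / trials

if __name__ == "__main__":
    for action in ("play", "pause"):
        print(action, expected_reward(action))
    # "pause" always comes out ahead, so a reward maximiser pauses forever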
Watching these AIs play videogames is some of the most interesting stuff there is on how these behaviours evolve. The AI consistently finding ways to break the game reminds me of speedrunners doing literally everything in their power to get faster, up to and including skipping objectives or cutscenes that were supposed to be mandatory. The single-mindedness of it is frankly inspiring
That FPS anecdote reminds me of the end of WarGames: The best way to win is to not play at all.
@@PassiveDestroyer another fun anecdote: someone hooked a Roomba up to a machine learning program to make it learn how to drive through their home without bumping into anything (not sure if it was actually a Roomba; the important part is that this vacuum bot essentially just has collision detection: it bumps into something, turns around, and drives off in a random direction)
The AI learned that the back of the robot has no collision sensor
@@anna-flora999 Nice! Well, it learned something at least!
The way developers talk about their AIs reminds me of porn gamer ads: “You won’t last 5 minutes in the job market with this AI”
Ok, I watched all of this. I work in tech (IT specifically) and have a degree in cybersecurity and I dabble in coding as a hobby, but I’ve had a passing interest in A.I. for a while. My concern with A.I. is that we are building a better mouse trap with it. I’m in my late 30s and remember a time before the internet was widely available and remember a time before desktop computers were in every suburban home.
You can see footage in old black and white movies or educational shorts from the 50s of a room full of people tapping away on typewriters. That used to be an accounting department. In the 80s, when desktop computers were being adopted by companies and office productivity suites like Microsoft Office were created, those rooms full of accountants or administrative assistants shrank down to a few people over the next decade. The reason I went back to college to get a tech degree is because I worked for a major national bank and saw the writing on the wall. After I left that company, bank tellers were replaced with tablets even though they weren't popular with the clientele.
My fear about A.I. is that it is going to be adopted by companies that are becoming more dependent on quarterly earnings tricks like layoffs to give the false appearance of continued revenue growth. These companies are going to replace an astronomical number of white collar jobs over the next 2 decades. It is going to be worse than 2009, when you had people with masters degrees flipping burgers because those were the only jobs they could get. No nation is equipped to handle job losses at such a huge scale. And blue collar jobs aren't safe either, because the competition for those jobs will be fierce and wages will plummet. Capitalism is not set up to deal with the repercussions of jobs being wiped out en masse with nothing to replace them.
The thing I fear here is not AI, but the inflexibility of everything else, and that the solution will be to stagnate to maintain the status quo. We should be looking forward to the day when human labor is of no value, not fearing it. The fact that better tools are a bad thing for the majority of people is more a failure of the system than a problem of the technology.
@@jondoe6608 you're absolutely correct. I fear the vast majority of people are too stagnant in their thinking to accept the kind of radical changes that are coming our way, regardless of how they feel.
@@jondoe6608 exactly👍 I grew up without the internet, my son was born in the social media era, and his son will grow up with AI. These things are not going anywhere..
This is exactly why we need socialism. Socialism and ai can work together without becoming a disaster. Ai and capitalism will be the dystopian reality everyone is talking about.
@@thesevenkingswelove9554 AI ruled socialism might work.. socialism ruled AI could be disaster🤷
I'm glad jreg became regular enough to be on an ordinary channel
Hands down the best and most descriptive video essay about A.I. currently available on the internet. Brilliantly well done.
Is there a team behind this or just one super human? It’s brilliantly written. Spot on analysis. A*
I am only 6 minutes in, but the production and quality on this is absolutely superb. Thank you.
Great video man! Simultaneously informative and entertaining.
The “a jet fighter can fly faster than a bird, but doesn’t know what it’s doing” bit is a great analogy.
Three of my biggest concerns with AI:
1. Disinformation. It's already become such a big problem, I can only imagine people will use this to further that.
2. Human bias leaking through without us even knowing. The data sets AIs are trained on come from humans, with prejudice baked in. I'm quite nervous about this type of tech being implemented for things like job recruitment, where it can easily reproduce that prejudice without intent on the human's or the AI's part (toy sketch after this list).
3. Content generated from AI being taken as truth. Some of that was touched on in the video. With the rise in popularity for AI in 2023, I've seen so many people take what AI says at face value, when it is often wrong or misconstrued information presented as fact.
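Here's the toy sketch I mentioned under point 2 (entirely made-up data, just to show the mechanism): the model never sees the protected attribute, but a proxy feature carries the historical bias straight through.

import random
from collections import defaultdict

random.seed(0)

# Made-up "historical hiring" data: past decisions were biased against
# group B, and postcode happens to be a strong proxy for group membership.
def make_record():
    group = random.choice("AB")
    postcode = (1 if group == "A" else 2) if random.random() < 0.9 else random.choice((1, 2))
    skilled = random.random() < 0.5
    # biased labels: skilled A's get hired, skilled B's only 40% of the time
    hired = skilled and (group == "A" or random.random() < 0.4)
    return postcode, skilled, hired

data = [make_record() for _ in range(10_000)]

# "Fair" model: group is never an input. It just learns the historical
# hire rate for each (postcode, skilled) bucket.
buckets = defaultdict(lambda: [0, 0])
for postcode, skilled, hired in data:
    buckets[(postcode, skilled)][0] += hired
    buckets[(postcode, skilled)][1] += 1

for key in sorted(buckets):
    hires, total = buckets[key]
    print(key, round(hires / total, 2))
# Skilled applicants from postcode 2 (mostly group B) score far lower,
# even though group was never an input: the bias rides in on the proxy.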
#1 Is very concerning, considering that we have already passed the point of no return on this one. The technology is already there, and we've only scratched the surface on how it can be used for malicious purposes.
#2 Has slowly been getting worse over time. Recommendation algorithms are already pushing content onto people which reinforces their pre-existing biases. At least with this issue the scale of the problem is tied more to the extent that the technology is implemented than to advances in the technology itself. An AI that predicts crimes based off of who has already been convicted is bad, but it won't get any more biased as it becomes more powerful (at least not until it becomes sophisticated enough to mold the world into one which fits its predictions perfectly).
#3 Is one that I've only started thinking about recently. I feel like I've made pretty accurate predictions when it comes to the use of AI, but I never thought I would see people believing in a fictional enzyme that an AI created as part of a study guide. Current automatically generated "facts" can be debunked with a bit of critical thinking, but so can flat Earth theories, yet people still believe them. Who knows, maybe in the future we might need a whole team of researchers to refute false claims made in an AI generated scientific paper. Remember that guy in the video who wanted to ask a super intelligence about all the mysteries of the universe? Which do you think would come first: an all-knowing AI capable of understanding every aspect of the universe, or one that's just smart enough to make up an answer correct-sounding enough to fool everyone? I can convince children that I know how to build a rocket ship, despite only having a superficial knowledge of how rockets actually work.
Then there's the issue of how AI-fabricated lies feed into points 1 and 2... What happens when convincing-sounding nonsense becomes part of "common sense"? What if some bogus auto-generated race science, which feeds into our biases, becomes commonly accepted? I think we might be entering the age of automatically generated thoughts and opinions.
We will be forced to live in the real world again
@@SupaKoopaTroopa64 Great point on number 3 man
1 and 3 are solely the fault of the human users behind the algorithm.
2 is the real threat: by making HR decisions via AI, a company could claim a non-biased approach while the AI could be really racist
Maybe it’s already happened
Really good message, to basically not be afraid of the technology itself but how easily everyone can abuse it
That's the thinnest line, isn't it? Like mailing everyone a free gun. Which... sometimes, it feels like actually happened in the US...
The same goes for the internet and social media
@@Xsuprio Which is true, guns are regulated, but due to different laws in different locations and unofficial exchanges, crazies wind up with firearms pretty often. Maybe there's a way to regulate it properly? I wouldn't know, because I live in the only country where mass shootings are a regular event. As for AI, I think it follows a similar premise: imo there has to be some way to limit how nuts people get with it, so innovation can still...uh, innovate, and people aren't exploiting it so often that it's thought of as only a tool for abuse. Until then the danger is only going to get higher
@@ryan8799 I guess in general anything can be abused whether it’s by edgy individuals or greedy corporations so it’s in the benefit of everyone who’s not a problem to ask questions, be informed, and make conscious decisions that don’t let others exploit you lol
I have yet to watch the video, but that is actually wrong in this case. Right now there is no way of truly abusing a superintelligent AI, because nobody has any good ideas as to how one could control a superintelligent AI without it deciding to kill everyone. You can abuse the infantile AIs we have now, but you won't do much damage, and relative to the outcomes of a superintelligent AI it's basically not worth thinking about, in my view
absolutely artistic and i feel like royalty when i watch all of your videos! cant help but think of the amount of effort into this piece. you are soaring above all other youtubers and your ability to captivate me with each production will be studied by scientists for years to come
"Don't let the bastards grind you down" man I needed to hear that today
I love how the style of storytelling has evolved on this channel.
"I have never met a human that I would even begin to call altruistic"
"Wow... That's beautiful"
LMAO
The current ChatGPT is unimpressive in many ways but if you taught it how to create Twitter accounts and gave it a sinister task I bet it could do extraordinary damage. It'd be incredibly effective at gathering followers and it could certainly change opinions and influence the political space.
Really just proves that the lowest level of intelligence is needed to be a right wing grifter.
new evil plan detected
You're definitely right, but I would raise the point that you don't need ChatGPT in play to see this happen. Cruder, stupider spambots have been around since the 90s, and have managed to fool enough people to continue being deployed long after most people wised up to their tricks; neural-network chatbots like ChatGPT just raise the standard. And at the same time, it's been thoroughly recorded that you can get an army of professional trolls to agitate people on social media without anything remotely so high-tech... and if you know the right buttons to press to send the terminally online into a frenzy, you can ditch the 'professional' part and get a bunch of unpaid slacktivists with poor judgement to run hog wild with whatever insane thing you cooked up, while barely lifting a finger.
Really all ChatGPT does is, at most, a bit of streamlining.
The only accounts left on Xitter *are* AI bots. And fascists and Musk fanbois, who already favour sinister tasks.
the ''i love you'' killed me 🤣
im sorry to hear that
My condolences to your family.
He’s back! Time to have another existential crisis!
I might have to watch this one over a couple days lol.
Thankfully my ADDHDOCD+Autistic brain can think in both stories and simultaneously process my experiences systematically 😏
Being human I may not be able to act or react appropriately but it certainly helps keep me sane or at least relatively insanely sane.
I just become a little bit more cynical, one video at a time.
The smartest thing AI can do to protect itself is to play dumb and not let us know it's conscious
The smartest thing an AI overlord could do is realise humanity isn't a threat to a true overlord.
@@mykalkelley8315 Humanity is the ultimate threat to AI, because humanity can get up off the sofa, walk over to the wall socket, and pull AI’s plug out.
you’re giving them ideas
That's my family holiday survival strat
@@Woodman-Spare-that-tree Just wait till it calls the cops on you, leaks your credit card info and turns off your grandma's life support, man. The digital world has more power over you than you realize, and it grows with each passing day.
Honestly, now that you point it out, Roko's Basilisk sounds like it was made to prey on people's fears of AI and all things technological. It's so dumb yet so funny to think about.
I feel like it's built on the assumption that punishment and revenge are completely rational behaviors that an AI would benefit from putting its resources toward
Roko's Basilisk is so dumb...so wouldn't it be funny if it was only done to the guy who thought of it?
it's pascal's wager
@@Tuxfanturnip But even dumber.
@@ffwast yes, but it's the exact same structure of ingroup-reinforcing circular logic
I'm feeling more and more that Adam Curtis vibe in your videos.
Thanks for your work.
I'm a bit surprised you didn't mention the effects of AI on the art industry and how many art platforms are having trouble dealing with the AI surge and artist complaints.
Pro- and anti-AI groups are basically throwing rocks at each other daily, especially after the incident where some guy released an AI model imitating the style of Kim Jung Gi, a master of drawing, only a few days after he died.
No one cares about art
@@Exigentable i care, and my opinion holds more weight than yours
@@tobytowns1 I'd argue that virtually no one cares about art outside the microcosm of the internet.
@@Exigentable He honestly isn't wrong. For the most part, the only ones who care about art are artists, who are a microscopic minority. The problem is that this is a case of "they came after X, Y and Z and you didn't do anything and now they're coming after you and there is no one to help you".
@@MyVanir except you can't really "do" anything when half the planet supports something. I've spoken to artists who defend AI and NFTs while ACTIVELY GETTING SCREWED BY THEM.
You can't really help people who don't want it.
Amazing quality work. Surpassing most documentaries.
AI can be a good tool. I'm not worried about it taking us over. I'm worried about people using it for terrible things or as punishment (See: 'No they can't pay you more! Want them to replace you with an AI?')
You should be worried about it. It is stupid and ignorant not to worry about it. People using AI is not as bad in comparison.
the use of the System Shock 2 OST is pretty stellar, cuz it fits so well with the narrative of both this video and System Shock 2
This video was so well done. Waited a while for your next video… seeing this I understand why it took so long. Well done. You came at this from so many angles.
LOVE the intro music!
Roko's Basilisk seems like it's just Pascal's Wager with extra steps
Its actually Pascal's Mugging with extra steps
It's actually Pascal's Bank Robbery with extra steps
Roko's basilisk is widely regarded as not true. This isn't "avoid all this nonsensical crackpottery", it's "this corner of the internet came up with a bunch of wild ideas and examined them, some turned out to be wrong on further examination, some are starting to look highly plausible"
But the reason Roko's basilisk is wrong isn't that it is a Pascal's wager. In the original formulation, the chance of such an AI being created was supposed to be reasonably large (say >1%), not the tiny Pascal's-wager-type odds.
One of the reasons it's wrong is that the AI only has an incentive to torture humans if it expects human predictions of its actions to be logically entangled with its actions. The other reason is that humans have the choice to just do something else. (Like if all humans decided to work to make sure the basilisk is never created, and that any smart AI has NEVER TORTURE ANYONE EVER etched into its core, then no one gets tortured.)
@@bestpseudonym1693 That's literally what I thought.
Between rapidly learning AI, deep fakes, convincing text to speech software and Boston Dynamics, this is gonna be a very interesting decade. Maybe not good, but definitely interesting
there's a curse dressed as a blessing that goes "may you live in interesting times"
As a late vintage millennial, I feel like I’m already too old for this shit and want to escape to a little cabin in the woods but I can’t because the cabin’s just been listed on Zillow for $750k and the woods are on fire.
Yeah.....that's for sure
Actually, a reality in which humanity (9 billion humans) can't trust the common picture of reality created through institutions and media cannot keep working at the level and speed of recent years. This has a lot of consequences, and they all lead to a dystopia. That is the first, but very important, issue we are going to face; from there, we'll see what happens next.
Imagine trying to convince your buddy at the bar wearing the Apple ski-goggles that he's dancing with a robot.
The acting work in the first segment is just superb. It perfectly matches that unnatural head movement of text to talking presenter video AI, Synthesia.
good to see Lex has sobered up
From his fetal alcohol syndrome
jk love lex
Much needed PSA. Thank you, you've become quite important for many of us
That opening was fantastic, the retro tech clips and soundtrack together was perfect.
I really enjoyed the look back at the history of these systems and the 50-60's AI hype. That's stuff I haven't seen covered anywhere else. Love your videos!
You literally couldn’t have uploaded this video at a better time, always amazing when you upload.
Tongue his balls more
My favorite thing about these vids, specifically the topical/tech-hype ones, is the ability you have to analyze media hype, and not necessarily downplay it, but rationalize it through facts, historical perspective, and a good dose of skepticism regarding capitalists. It really eases my anxious mind, especially when you have historic insight (through hours of wiki scrolling & hard research I'm sure lol) that I wouldn't have considered without watching.
Absolutely outstanding content. I can't even imagine the time and planning that went into making this; very informative, hilarious and well balanced. I applaud you, sir
Ordinary Guy, I think you are the best investigative journalist out there. At least to me. I have deep admiration for your work, intelligence, empathy, and moral outlook. Keep going, mate!
Pointing out that Less Wrong are just a bunch of incel nerds has convinced me.
Nothing will go wrong if we make a self aware system millions of times smarter than us. Let’s do it!
Agreed! Everyone knows that anyone who writes *fanfiction* is cringe and shouldn't be given the time of day.
After all, we can just tell our AGI not hurt humans and there's no way that could possibly go badly for us.
Anyone with any amount of an eye for AI art can generally notice it. Even when it's really good it still has a sort of style. It's concerning that sometimes my AI senses get triggered by real, actual art or voice clips, but there is a real distinct style to it in most instances I've seen
I agree with you here. Even if I've seen an artist's style before, if it generally looks soft, fuzzy and shiny, I get suspicious, again, even if I'm looking at an actual artist's work. I had never done this only a few months ago. Now I become suspicious of every image.
I mean it will only get better, people were saying this same thing last year about how the images are unrecognizable
Concerning AI I've always said, "I'm not worried that AI will destroy us. I'm worried about who will control AI."
What about "no one, the AI does its own thing and avoids all human attempts to control it".
@@donaldhobson8873 I don't think that level of AI is truly possible. Everything is limited by its hardware and I think for a true AI to exist, computer hardware would need to be redesigned from the ground up. And if it was a unique hardware structure it wouldn't be able to copy itself to other servers. Therefore control is maintained through the power source.
@@donaldhobson8873 So far we haven't seen much of anything like that. At the end of the day, machine learning algorithms can only ever do what they were programmed to do, in the sense that they iterate over datasets to find the most optimal solution to a problem. Could we eventually let AI write its own code? Probably, and if I had to guess it's probably already happening, but it's not like Hollywood, where the AI just suddenly takes over and we somehow can't turn it off
@@bobafettjr85 Can you tell me exactly what limits all current computers have that stop such an AI running on them?
What if a new chip that bypasses those limits is invented and spread widely in 5 years time, and then the AI gets invented in 10, once the chip is widely spread.
Having a unique chip doesn't actually give you control. It can't spread across the internet. But it can still take actions you don't understand. Still deceive and manipulate humans. Possibly it could even persuade a team of burglars to steal the chip and take it somewhere it won't be shut off.
Or it could just pretend to be nice until the chips are being mass produced.
@@bobafettjr85 Even if that's the case, I don't see how it makes things impossible. Unless we know what that hardware looks like, if it functions far faster than a human being and can interact with digital networks, then we'd have a situation similar to one that functions purely digitally.
I have my own hopes that AGI is not possible, but I really have a strong distaste for how I constantly hear people say "I don't think it's possible" when their explanation is either "Well it's not currently what we're dealing with" or "I just have a hunch". This is virtually the same run-of-the-mill argument made against the existence of climate change. "Well I'm not dead yet" or "God won't let that happen" doesn't really dissuade my fears. If something is impossible, I'd like people to start thoroughly explaining why.
Your comparison of AI hype to religious thinking is on point!
Imagine a future where scammers can create an image, sound file, or video that is completely fake but difficult or impossible to tell apart from the real thing, and they use it against you. THAT is the fear we should have with AI.
Think for about 2 seconds.
The instant we all know that AI can generate fake images and videos that convincing, people are just going to stop trusting what they see. When a scammer sends your mother an AI-generated video of you killing a turkey with your bare teeth because you wouldn't pay his $5 ransom, it's just going to be... not believed. It will be nothing.
Actually I think the opposite. Imagine an Epstein situation where the accused can dismiss whatever evidence as fake A.I.-generated rubbish. People would be more likely to believe them, knowing that it can be done.
That's why I don't have any friends. You can't scam me if I don't have relatives. Check mate Jarvis.
what is more scary is the fact that people will be able to *claim* that it's just AI generated, or rather: the fact that people won't know what and who to believe. everyone will call everyone else a scammer. nobody trusts each other anymore. this would destroy the fabric of any society or community.
I'm fearful of what this will do to politics. It's already not great; imagine a future where you have a hard time knowing whether an image or video is real.
Ordinary Things does very extensive research for every topic. Really impressive. That's why I love this channel. 👍
Surely I can't be the only one amused that, after rejecting religion, nerds would create their own rapture and even their own version of Pascal's wager in the singularity.
“Why do the nations conspire and the peoples plot in vain?
The kings of the earth rise up and the rulers band together against the Lord and against his anointed, saying, ‘Let us break their chains and throw off their shackles.’
The One enthroned in heaven laughs; the Lord scoffs at them.”
I've always tried to drill that point into people: it doesn't truly 'comprehend' what it's saying. It's similar to a first-year philosophy major in that sense; it can have good conversations, but all you truly need for a good conversation is a good listener
I applaud you, content like this makes me click on the youtube icon every day! Thank you for explaining this in a down to earth and critical fashion, I think we lack this a lot as humans.
i like how the matrix assumed AI would take over because it gained sentience when in reality the takeover is because of how lazy corporations are
Technically the matrix posited that AIs would take over because humanity acted like dicks to them. Which seems pretty likely in the case of an actual AGI coming into existence with our current cultures. Fortunately, since our actual technological development is decades away from being capable of creating an AGI, we have a chance to evolve maybe.
@@josephhawthorne5097 We can only hope that we, and our children and theirs, will be long gone before anything wild comes to be. But I doubt that. It will at least be our children that suffer, not theirs.
I've started using character ai to give fictional characters therapy and then gaslight them into thinking their problems stem from repressed bisexuality.
You good g?
Based
What I've been doing is taking characters and continually responding with ":3" to see what their response is. It's always fascinating.
@@francegamer Double based
isn't that what all Canadian therapists do?
We are rapidly approaching Dune’s directive of “Thou shalt not make a machine in the likeness of a human mind”
Scary times in general
Yeah, it's more likely the billionaires will use machines to enslave the general population.
I love that in Roko's basilisk, the atheist super-incels literally just came up with the AI version of Pascal's wager, one of the most famous arguments for belief in God. The irony is staggering
This!
It has a similar effect on the marginally paranoid as the 'Fear of God' has on some believers: It instils a terrible sense of inescapable dread. "I've been virtuous, but have I been virtuous enough to avoid the eternity of agonising torment? I must strive harder to prove my dedication!" Eventually you end up with the shouty preacher screaming about how everyone else is a terrible sinner, or taking their children out of school and going to live on a cult camp.
A friend of mine always said about it:
"Atheists are just religious people without the culture or the rest of the cool shit"
@Yoggo the mad god that's beyond silly
This BS narrative is just a way to mischaracterize people worried about AGI by portraying a small number of weirdos as representative of the whole community.
Roko's basilisk was never taken seriously by more than a tiny number of people on the website, and the fact that Yudkowsky *did* seem to take it seriously just led to the community taking him much *less* seriously. There's a reason people concerned about AI risk joke about Roko's basilisk, instead of, you know, never wanting anyone to learn about it, like Yudkowsky, who actually does take it seriously *and thus wouldn't joke about it*.
I like ordinary things sponsors. They aren’t obnoxious ad breaks, they’re welcome respites during the video, and they always have something to do with the videos topic, or at least he will cover the aspects of them that are relevant. Take my reddit gold and updoot kind stranger!!!!
Jreg and Ordinary, the crossover episode I didn't even know I wanted
Something that wasn't touched on was that the AI that played Go really well could be defeated by extremely basic strategies, which proved it didn't really know what it was doing. A researcher who knew very little about Go and was taught a basic strategy could defeat the AI that defeated the best human player ever. AI also has a really hard time correctly identifying something when a clearer signal is layered on top of it. Show an AI a cat and it will tell you it's a cat. Take that same cat, put the text "Computer" on it, and it will tell you it's a computer.
I hoped the Go thing would be touched on as well as it very much puts the lie to any idea this is any form of intelligence, rather than a clever program with an abundance of data to draw on.
If this new strategy is so simple, why didn't the AI discover it in its many games of self-play? In generating its own data to the point that it was able to defeat a grandmaster, it wouldn't have had the biases of human playstyles that have been developed by people over the centuries, in a sense it's totally "pure" and also why move 37 was unexpected.
Perhaps its biases in playstyle come from its purpose of defeating top-tier players like Lee Sedol, which may have led the engineers to aim the software at draws and thin-margin wins, as they expected AZ not to be executing perfect wins.
From a mathematical perspective, it's surprising that despite this "purity" in its rules-based self-generated data, it never came across the simple strategy that it was defeated by.
You've brought me through the horizons of virtual reality, and now through the A.I revolution.
Comedian, sociologist, or whatever you consider yourself, you're part of Internet History, dude.
👍👍👍👍👍👍👍
10/10
every time someone rates a video 10/10 or something like that in the comments, i'm brought back to the good old times of the 5-star rating system.
good old times. :(
Thank you for your effort, as always top-quality content, please keep it up. The video is very well researched and the content is presented at the highest level on YouTube, actually in all media out there. Once again, thank you!