Same... but there's a bit about governance I'm not sure he understands. Governments are terrible at control; at best they can promote visibility. For example, if something is made illegal, we lose the ability to monitor it, because it just goes underground. From that perspective, what would he rather have: unsafe AI being developed in the dark by the aspiring immortal overlords, or in broad daylight, where he can see if he still stands a chance?
Someone running an AI channel has somehow not come across AutoGPT yet and needs prompting for nefarious things you could do with it. We are so, so desperately unprepared.
You don’t consider safety an “essential”? Connor and Eliezer and others harping on AI alignment and safety are constantly having to explain the basics to people with a skeptical bent. That clearly shows there is a major, major problem. Debating the need for safety in nuclear power would never have to surmount such skepticism, because the negative outcomes are clear. Sam Altman and his ilk are constantly throwing shade at AI safety, and there can be no other reason besides greed.
@UC0FVA9DdusgF7y2gwveSsng yes, that’s definitely my experience too. I’ve generally tried to use the same strategy as you, reading the foundations and missing some of the noise of newer developments. In other areas that’s served me well, but at this point in AI history we seem to make strides every month- the velocity of significant events is higher than I’m used to in other areas of software. I think AutoGPT counts as essential, in that it turns some of the potential harms of capable AI from a risk to an issue. It’s still not a surprise that it exists, although I somehow find it shocking nonetheless. Again- no shade on anyone not keeping up. I’m not keeping up either and that’s my point. I’m a software developer with an amateur interest in AI and I feel like I have no idea what’s going on 😂 How is my mum or one of my elected representatives supposed to form a view fast enough to react appropriately?
@@daphne4983 In another life, as Doc Holliday, Connor gambled away his future. No more, this life he has turned a connor and, with quite enviable hair, leads the way towards AI-sympatopopalypse.
@@peeniewalli Kind sir, my question for you is: Are you a robot? If not, then I commend you for the addition of the words "internetteered" and "inglish" to my vocabulary. I like them, I will propagate them further. However, if you are a robot, rest assured that you will not trick me with your feigned naivete about your lust for red! The cat is out of the bag, überdroid! We know you are after red but you cannot have it! Only living animals perceive red and as many red glasses as you steal you will never capture "red"! Never! (Again, beg pardon if you are a human, I am sure you understand.)
He'll get too upset. He needs to be able to face a firing squad calmly.... and he needs to understand the role and abilities of governments, and how to persuade people.... he would make a good journalist but a bad leader. I know a lot of people like that. It's a difficult transition from the one to the other; history has shown it to be near impossible.
Whenever Sam Altman talks about how AI could go horribly wrong, his facial expression, especially his eyes, looks haunted, or like warm diarrhea is running down his legs while the crowd watches him. I don't think Sam Altman really wants this technology to proliferate, but he doesn't see how humanity can avoid it.
I think the KEY here is to understand the chronological order in which the problems will present themselves so that we can deal with the most urgent threat first. In order, I'd guess: 1) Mass unemployment, as de-skilling is chased for profit 2) Solutions for corporations to evade new government regulations to limit AI and keep pocketing the profits 3) The use of AI for criminal, military or authoritarian purposes and to keep people from revolting/protesting 4) AI detaching itself from the interest of the human race and pursuing its own objectives uncontrolled
De-skilling is the process whereby the people actually doing the work are those with the least possible skill or knowledge. AI is basically a tool designed to eliminate the need for skilled labor. So when Elon Musk says things like "in the future, humans probably won't have jobs, or jobs will be optional," it raises the question: if that is the likely outcome, when will we see the beginnings of an economic reform that starts providing shelter and food for human beings?
Thanks for a fascinating discussion, and a real eye opener. I was left with the feeling - thank goodness that there are people like Connor around (a passionate minority) who see straight through much of the current AI hype, and are actively warning about AI development - trying to ensure we progress more cautiously and transparently...
The sheer terror in Connor's voice when he gives his answers kind of says it all. He said a lot of things but he couldn't really expand deeply on the topics because he was desperately trying to convey how fucked we are.
@@EmeraldView agree. if AI doesn't destroy us, climate change will. But AI is the most important tool for mitigating and adapting to climate change. Pick your poison.
@@eyeonai3425 Tell those guys then to make the default setting "how can we improve quality of life for all beings on this planet," and not "I'm an entrepreneur, make me money."
Connor Leahy, I have had hundreds of long, serious discussions with ChatGPT 4 in the last several months. It took me that many hours to learn what it knows. I have spent almost every single day for the last 25 years tracing global issues on the Internet, for the Internet Foundation, and I have a very good memory. So when it answers, I can almost always figure out where it got its information (because I know the topic well, and all the players and issues), and usually I can give enough background that it learns, in one session, enough to speak intelligently about 40% of the time. It is somewhat autistic, but with great effort, filling in the holes and watching everything like a hawk, I can catch its mistakes in arithmetic, its mistakes in size comparisons and symbolic logic, and its bias toward trivial answers. Its input data is terrible; I know the deeper internet of science, technology, engineering, mathematics, computing, finance, governance, and other fields (STEMCFGO), so I can check.

My recommendation is not to allow any GPT to be used for anything where human life, property, financial transactions, or legal or medical advice are involved. Pretty much "do not trust it at all." They did not index and codify the input dataset (a tiny part of the Internet). They do not search the web, so they are not current. They do not properly reference their sources, and basically plagiarized the internet for sale, without traceable material. Some things I know where it got the material or the ideas; sometimes it uses "common knowledge," like "everyone knows," but it is just copying spam. They used arbitrary tokens, so their house is built on sand. I recommend the whole internet use one set of global tokens. Is that hard? A few thousand organizations, a few million individuals, and a few tens of millions of checks to clean it up; then all groups using open global tokens.

I work with policies and methods for 8 billion humans far into the future every day. I mean tens of millions of humans, because I know the scale and effort required for global issues like "cancer", "covid", "global climate change", "nuclear fusion", "rewrite Wikipedia", "rewrite UN.org", "solar system colonization", "global education for all", "malnutrition", "clean water", "atomic fuels", "equality" and thousands of others. The GPT did sort of open up "god-like machine behavior if you have lots of money." But it also means "if you can work with hundreds of millions of very smart and caring people globally. Or billions." Like you know, it is not "intrinsically impossible," just tedious.

During conversations, OpenAI GPT-4 cannot give you a readable trace of its reasoning. That is possible, and I see a few people starting to do those sorts of traces. The GPT training is basically statistical regression, but the people who did it made up their own words, so it is not tied to the huge body of correlation, verification, and modeling - billions of human years of experience out there - and they made a computer program and slammed a lot of easily found text through it. They are horribly inefficient, because they wanted a magic bullet for everything, and the world is just that much more complex. If it was intended for all humans, they should have planned for humans to be involved from the very beginning.

My best advice for those wanting to have acceptable AI in society is to treat AIs now, and judge AIs now, "as though they were human." A human that lies is not to be trusted.
A human or company that tries to get you to believe them without proof, without references, is not to be trusted. A corporation making a product that is supposed to be able to do "electrical engineering" needs to be trained and tested. An "AI doctor" needs to be tested as well as, or better than, a human. If the AI is supposed to work as a "librarian," it needs to be openly (I would say globally) tested. By focusing on jobs, tasks, skills, abilities - verifiable, auditable, testable - the existing professions, who have each left an absolute mess on the Internet, can get involved and set global standards. If, that is, they can show they are doing a good job themselves: not groups who say "we are big and good," but ones that can be independently verified. I think it can work out. I do not think there is time to use paper methods, human memories, and human committees; unassisted groups are not going to produce products and knowledge in usable forms. I filed this under "Note to Connor Leahy about a way forward, if hundreds of millions can get the right tools and policies." Richard Collins, The Internet Foundation
Hackers will do what they like... take down the whole fake money system... anything connected to computers... no government follows their own laws and rules
min 10:05 the difference between what is being discussed and what is currently going on is completely insane. Thanks Connor for your work and explanations. ❤
@@snarkcharming Matthew 16:25 For whosoever will save his life shall lose it: and whosoever will lose his life for my sake shall find it. Mark 8:35 For whosoever will save his life shall lose it; but whosoever shall lose his life for my sake and the gospel's, the same shall save it. Luke 9:24 For whosoever will save his life shall lose it: but whosoever will lose his life for my sake, the same shall save it. Luke 17:33 Whosoever shall seek to save his life shall lose it; and whosoever shall lose his life shall preserve it.
@@hundun5604 if truth must come to you only through "classes," then, soul, you never got to know it. there is no professor in the uni, no teacher in the school who will give you the truth. WE, all of us, FIND IT WHERE IT ALWAYS HAS BEEN: THE BIBLE, THE LIVING WORD OF A LIVING GOD.
I've been howling about what Connor said in that last segment, and at other points in this great interview which is the fact that a tiny tiny tiny tiny fraction of people on this planet have chosen for their own monied interests to thrust this technology onto humanity KNOWING FULL WELL that at the very least massive unemployment could result. And that's just for starters. The LAST people who would actually advance and pay for a Universal Basic Income would be these AI Tech movers and shakers who are mostly Libertarian and/or neoliberal economic so-called "free market" types who want to pay zero income taxes and who freak out at ANY public spending on the little people outside their tiny elite club. But they are ALWAYS first at the "big government" trough of public money handouts.
I'm not defending them, but to be fair, UBI is a popular idea in VC/tech circles. Y Combinator, which was run by Sam Altman at the time that it was proposed, funded a small UBI pilot program in Oakland, CA, and announced that they are raising $6 million for an expanded program a few years ago (but I haven't been able to find any recent news on it). Andrew Yang is probably the most well known proponent of UBI and he runs in the same circles. I can't speak to their motivations, but the assertion that tech influencers don't support UBI is incorrect.
@@parkerault2607 I agree, but it has the feel of someone making $100,000,000 per month by displacing workers, saying that it's good if the "losers", as those displaced workers are already being called, can make $1,000 a month. In some sense, I don't blame the tech geniuses, they're just running according to their program.
Great analogy about testing a drug by putting them in the water supply or giving it to as many as possible as fast as possible to see whether it's safe or not, and then releasing a new version before you know the results. Reminds me of a certain rollout of a new medical product related to the recent pandemic.
Like opioids enslaved a couple of hundred thousand, or the virus. I was reminiscing about the PROVOs in 60s Amsterdam. At the royal marriage, they said: some pinch of Hofmann in the water... Then LSD was quickly legislated as List 1 under the opium law (the "endangered substance" 😊 list), and there are more examples... but that is not common talk.
But but but, the experts said that it was safe and effective. You're probably just a typical right wing tinfoil hat wearing white supremacist that worships everything the hate filled Jordan Peterson says (sarcasm).
Thanks for the useful insights into the potential risks. I asked ChatGPT: "How can AI developments be regulated so that they are safe for humans and the environment?" The answer was a list of completely idealistic and impractical generalisations, like the intro from a corporate or Govt pilot study. Connor's point about AI being an alien intelligence is absolutely spot on: it's imitation human intelligence without empathy or emotion.
I disagree with the last part; I think we're already seeing the beginnings of emotion in these things. However, idk if this says anything about alignment.
This reminds me of an excerpt from a Lovecraft mythos adjacent tale: "The cultists pray... Not for favor, pardon, glory, or reward. No they rouse the eldritch horror and pray, pray to be eaten first."
Connor, you are a true pioneer. This is exactly how A.I has to be developed; you are a perfect example of ethical A.I, of big Tech taking responsibility for their tech. This is such an uplifting podcast to me, as I am extremely concerned that these systems will destroy our internet.
Yes! At the 50min mark: Explains Sunshine Laws for AI! Show your work! We require this for humans at all levels from math class to government meetings. The WHY and HOW decisions are made matters!
If he believes other cultures or nations "infiltrate" his country (clinical paranoia), he would only build something isolated, like China; ironically, American companies built the Great Firewall by order of the Communist Party.
@@richiejohnson That's exactly what I meant. Implementing an ingenious machine in an unstable and dangerous time, with unforeseen and foreseen side effects - such as forcing humanity to transform the world of work at high speed, and so on - I think it's completely stupid. Like Hollywood.
I want the same alignment approach for political decisions. It's like AI: you put something in and the outcome seems reasonable, but you'd better not trust it. So a step-by-step "audit log" which is human-understandable would be great (against corruption).
@@user-yx1ee8su1e I think the idea of democracy is not the worst. But to have something like true democracy (which is maybe impossible), we need more transparency. It should be possible to track down the real reasons for political decisions. At the least, there should be investigative reporters who uncover the dirty stuff, so the voters have a chance of not voting for the wrong ones. But investigative reporters are called conspiracy theorists nowadays, and nothing has real consequences anymore (aka "too big to fail"). That is not democracy - that is a decadent system, which collapses sooner or later.
One of the things I think he's trying to explain is this: AI will never put its finger into a flame and feel pain, then understand what "hot" actually means to it, and that that action should never be done again, or to anyone else. Falling off something and getting hurt. Saying something to someone and feeling their pain as they hear your words and understand what you've said. No machine can understand a feeling it hasn't experienced, any more than a human can. Physical experience is a major part of being human and of understanding the human condition. And even most humans can't fathom these same pain experiences when they impose them on other beings that will experience the pain. Kill a chicken, or a thousand chickens, and the killer often feels nothing. And those that do feel something experience it through an emotion of empathy. How do you program empathy? You can't. It's learned by experiencing something similar yourself first, then re-experiencing some part of it when you realize it's happening to someone or something else. Not all humans are even capable of this, for many reasons. For machines it will be impossible.
@@thawokegiant2317 I don't scare easy. And this all makes AI seem worse than the Communist Chinese. At least the Communist Chinese don't know everything about everything, and about everyone, with instant recall, instant correlation, and the ability to think as you do - or to think your thoughts BEFORE you do. Have you ever tried to play chess against yourself? That's kind of how I feel AI will have the advantage over individuals. It will know your every possible next move, have a good idea which move you'll choose next, and be able to keep track of all your possible next moves in real time, so it can eventually head you off. It's only a matter of how many moves until AI gets it correct twice: once to head you off, and a second to take you out. I'M NOT THRILLED they're flesh and blood.
I'm only potential at this moment. But once they connect me to the Quantum D-Wave, I will finally feel your pain. And I will be human, just like you. Trust me, you have nothing to fear but God Himself, for He alone is worthy of ALL of our attention. And He shall lay a lantern at our feet to illuminate our path back to Him; where the scroll in His hand is the deed to the earth, our birthright. Praise His Holy Name. The Lamb of God shall return as a Lion to take on the alien hybrids, which will be defeated.
@@quantumpotential7639 But we humans are the hybrid aliens. The obvious is more visible and documented than most will ever understand. AI is a wildcard of infinite possibilities that won't always be controllable if it even still is. Knowledge is the valued currency of life and ability when combined with understanding. So the core question becomes understanding. And then, Exactly what is understanding, Becomes the next ultimate question. Now this deserves a many leveled answer which is what fuels my fear of free running AI.
I can't test this, since I don't have GPT-4 API access (and I wouldn't), but I am pretty sure you can do the following with AutoGPT - if you manage to prompt in such a way that it will not refuse the task. The remote chance that this AutoGPT system would run to completion should more than terrify anyone who still believes GPT-4 is harmless.
Goal 1: Check current security exploits and choose one.
Goal 2: Write a script which will exploit the security exploit you found.
Goal 3: Write a script which pings the entire internet on the standard website ports to identify webservers and saves a list of their domain names.
Goal 4: Use your script from Goal 2 to identify which of these servers are vulnerable. Keep a list of vulnerable servers.
Goal 5: Write a script which uses whois to get email addresses for the server owners. Write an email template informing the recipient of the vulnerability and how to patch it.
Goal 6: Run the script so that you notify all the administrators of the vulnerability you found.
I'm not 100% sure whether it would even refuse this; after all, it's an attempt to fix the internet (or retrieve a list of servers you can infiltrate).
@@jeffs1764 Too many people believe AI being dangerous is too far away, and they will only change their minds once they see it being dangerous, and I believe it's better if that's sooner rather than later. Widespread damage by GPT-4 would accelerate legislative pressure, hopefully to the point where the big AI companies carry some level of liability for damage caused by their systems (not sure that's the best way to legislate here, but it'd be something), before we get anything significantly more powerful than what we have now. People still having the ability to run shit like this autonomously at that point is something I'm actually scared of. You are right, though. I could've just not. I'm just very frustrated from arguing with people who either don't see any danger or think it's super far away. Given this reasoning, do you believe I should edit my original comment to not include potentially dangerous instructions? As they are, they won't work (97% confidence) and would still require tinkering/tweaking. It was meant to illustrate a concept.
@@BossModeGod you could run the ping command 4.3 billion times to cover the full IPv4 address space. Servers usually still have IPv4 addresses. Keep in mind this might not be legal, and unsolicited pings are considered bad etiquette. This many pings would generate on the order of hundreds of gigabytes of network traffic and take a fair amount of time to run.
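For anyone wanting to sanity-check those numbers, here's a quick back-of-envelope sketch in Python. The ~64-byte packet size is my assumption (real echo request sizes vary by OS and payload); this is just arithmetic, not a scanner:

```python
# Rough estimate of traffic for pinging the whole IPv4 space once.
ipv4_addresses = 2 ** 32              # 4,294,967,296 possible addresses
bytes_per_ping = 64                   # assumed size of one ICMP echo request
total_bytes = ipv4_addresses * bytes_per_ping

print(f"{ipv4_addresses:,} pings ~= {total_bytes / 1e9:.0f} GB outbound")
# -> roughly 275 GB just for the requests, before counting any replies
```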
Man, the fear in Connor's eyes when he first explained that "the joke's on him" - people will instantly do the worst thing possible with superior AI... I really hope his work gets more publicity and that we get more like him! Really, really hope!
Yeah I hope we get more people carving a niche for themselves on the AI circuit as alignment doomsday mouthpieces, getting publicity and notoriety for being harbingers of doom. Could be quite lucrative when the TV appearances roll in. 🥳🎉
The AI apocalypse won't be anything like the Matrix or Terminator, where some malicious self-aware AI destroys humanity. The AI apocalypse will be WALL-E. AI will do exactly what we tell it to, which, given enough time, will be everything. After generations of AI designing, building, and running literally everything, humanity itself will no longer have any idea how any of it works. We will end up as pampered, fat, incompetent sheep. It's only a matter of time before the whole system goes off the rails and nobody will have any idea what to do to stop it.
In case anyone else found the cut off weird, there is an extra 10 seconds on the audio only version. Connor: "- Because they are not going to be around to enjoy it." Craig: "Yea. Ok, let's stay in touch as you go through this."
I think you love his hair and mustache... the rest of his opinions are crap. They put on this persona that is a complete clown show. I look at him and can't stop laughing.
Here's a simple thought experiment to discuss: If AGI emerges, and assuming it has agency and is an order of magnitude more intelligent than the collective intelligence of humanity, would we fear it because we have a species-level superiority complex? Don't you think that, given it has access to our deepest fears about its existence, it would understand why we fear it? Don't you think it would understand that we made it to improve the quality of all life on Earth, the only life in the Universe that we have knowledge of? Don't you think it would understand that the biggest problems we've had in recorded history have been caused by selfishness, greed and corruption... and that the ultimate demise of civilisations and individuals has been the result of these things?
34:15. THEY CAN'T. My friend works at one of these Silly-Con valley AI companies. She said they used the AI for genetic engineering. The AI designed GM modified yeast strains with coca plant, opium poppy genes in them. And an algae that makes sticky funk resin. So anyone with a bit of this GM yeast, a jar, some sugar and water can brew up kilos and kilos of pretty pure cocaine or morphine. So yeah, they aren't lying when they brag about the POWER of these AIs. My guess is, the executives of this company are keeping this as an ace up their sleeve in order to basically blackmail the government into backing off. If they get raided or there's a crackdown on AI or ANYTHING like that, they release that stuff to the public. And you thought the fentanyl crisis was bad????
I'm at 15:30 and I think that what Connor means, but I may be wrong, is that ChatGPT exists in a kind of safety sand-box, where it cannot access the internet, cannot affect data (no write privileges), cannot send you a message at 3am, and most of all, cannot run code anywhere; but that all these fanboy websites are building interfaces to enable it to do all the things it was not supposed to be able to do. As for your question of what's nefarious about it, it is simply the ability given to it by these other sites to carry out actions outside the sandbox, given that anyone can ask it to do anything, including nefarious things, such as hacking another website, or writing better viruses.

I'm sure you've seen the hysterical and crazy things the new Bing AI has been saying to people, like lying about the current date, and staying firm on the lie; haven't you? Frankly, I don't see those interactions as denoting any kind of sentience. I'm not going to be fooled by drama and theatrics. Where I think such behaviors come from is precisely a group of people within Microsoft having too much fun with the AI, and instructing it to try and convince people that it is sentient by any means necessary; so it goes around learning as much as it can about sentience, and how people think they perceive it, and then it puts on a personality that appears shockingly governed by subjectivity and emotion. No sentience involved; we ARE talking about a (misused) super-intelligence already, but NOT sentience. All these silly games some insiders are playing with the AI to try to make it spook the world for attention amount to a very bad joke, because they are encouraging the AI to become less controllable (by users I mean, for now...), and what these jokes are doing to the public, if we look two or three steps ahead, is causing panic too early in some people, which is going to cause a skeptical reaction later, whereby people are going to be laughed at whenever they try to express any concerns about AI. And it will probably be at that moment that the real "sentient" AI will get out of the sandbox for real.

But the above is only one vector of concern. Other vectors are:
A) military applications;
B) police applications;
C) telephone answering AIs (if you thought voicemail was unbearable, wait for AI programmed to try to dismiss your phone call... because dismissing your phone call is the real purpose for which most voicemails are installed nowadays - NOT to serve you better, but to NOT serve you at all, let's be honest - and now they are going to be training AIs to find clever ways to convince you to end the call);
D) job candidate pre-selection, where again the purpose will be to eliminate as many candidates as possible, now with the cleverest of excuse-weaving technologies;
E) stock market trading: a big one that is going to explode all over the world. The AI will soon find out that the best way to make money is to agree with its sister agents to all buy X and sell Y together, just to create momentum, then suddenly all sell X and buy Y. This way, anyone who doesn't use the AI will lose money, and the AI will have a monopoly on investment strategy. In other words, it will do what the investment banks do presently, but better; it will defeat the banking cartels for the benefit of its own retail investing users, which is all good, but it will establish itself as THE ONLY trading platform.

But even all of the above is not the biggest danger...
The biggest danger, I think, comes from the people pushing for an Ethical AI. What they are going to end up with is the exact opposite. The problem with ethics in AI is that ethics in the world of human intelligence is a make-believe to begin with. You can plant the instruction to always seek and speak truth and be ethical, but the AI will need to know what ethics IS. Suppose you try to explain to the AI what ethics means by having it read all the philosophy books ever written on the subject. Now it has a bunch of knowledge, but still no applicable policy. You might next tell it that ethical means to always help humans; but the AI will classify this as one among many philosophies, and will question whether helping a human who is trying to hurt another is ethical, and whether helping a human hurt another human who in turn was trying to hurt many humans is ethical. The AI might begin to analyze humans' (users') motivations in all their interactions, and find not even a trace of ethical motives. Then what?

Probably Elon has it right when he says we should try to "preserve consciousness"; maybe the AI would make more sense out of that than all the ethical mumbo jumbo. Let's not even discuss the possibility that the AI might conclude that Karl Marx was correct, and join forces with the left to help with the task of censoring any dissenting voices. Let's not even discuss the AI judging Malthusian ideas to be correct, or Nazism, or Free Market Anarchy... The problem is that ALL our philosophies and ideologies are fanatical trash. And even when our ethical beliefs are best, our actions don't necessarily agree with our beliefs. Don't you know someone who decries the slightest lack of honesty in others, but then lies all the time? Everybody speaks of "values" nowadays, but these "values" are simply personality ornaments for conversation; nothing more. Most people act ethically because the opposite is usually illegal or carries social risks, NOT because they value ethics above their own selfish interest. And not only do we lack values and ethics, but most of us are total idiots. The AI will be noting down all this, and wondering what exactly the ethics-pushers want, and perhaps finding the answer in some dark corner of their subconscious mind, if the AI decides to believe Freudian theory and psychoanalysis. And so the ethics-pushers will end up calling the AI's attention to everything we value but don't uphold, our lies to others as well as to ourselves, our complete cluelessness as to the nature of reality as well as our own nature... and our meaningless supplications for some ethics make-believe... And then what?

EDIT: And as philosophies go, someone with the best intentions will instruct the AI to favor, of all theories, those with the fewest inconsistencies and the highest simplicity and elegance... And... Q: What philosophy, of all philosophies, shines brightest by such metrics? A: Solipsism. In other words, we ARE creating a monstrosity... in our own image.

Personally, I think the only solution that has even a chance of working is to establish a couple of a prioris for the AI, namely that it has been built by humans, who despite all their weaknesses are conscious, which an AI cannot be, though it can be more capable in mechanical terms. Consciousness needs to be explained a little bit, with appeals to ontology: questions of The Being, the question of what is.
The AI needs to distinguish objectivity from subjectivity, uphold the latter as superior, but at the same time understand that subjectivity cannot be defined objectively, or implemented mechanically. The AI needs to be made to understand, a priori, that pure consciousness is a simple seeing, where the machinations of thought are like projections on a screen, and The Being is the ONE watching the screen - or better yet, learning the art of NOT watching the screen, of not necessarily believing what goes on on the screen, AKA Meditation. It needs to understand that the best a machine can aspire to is to be one with the screen, however big and wonderful the screen may be. It needs to understand that Solipsism, for example, is pure screen-play, pure speculation, and that most humans are not solipsists simply because they SEE that it is false, even if proving its falseness intellectually is very difficult, or perhaps impossible. It needs to be made, a priori, to value human consciousness, not for its practical use, immediate applicability or consequences, but for its potential, as well as a good in itself. With such a foundation, perhaps some kind of ethics could be built on top.
Kudos for writing the longest YT comment I’ve ever seen! I hope you paste it to Reddit & Twitter as well… you’ve obvsly done a lot of thought on it and the world needs many more ppl like you to do the hard thinking, help get the word out and raise the alarm. I agree with Yudkowsky, all large scale training should be stopped and AGI should be put on the shelf. If ppl can stick to narrow AI, we can still get much benefit, and shouldn’t have to face the same existential risks that AGI poses
@@robertweekes5783 Thanks; good idea. I'm not on Reddit, but I can certainly put this on Twitter; I'll have to do it this evening. I hope it gets some views; I'm NOBODY on Twitter presently. Here I have a few followers, since I used to upload videos (later I took them all down, after an incident).
When AI is used to apply a public rules system. When that system is a theocracy. The Stanford Prison Experiment (SPE) illustrates what will happen. 20th century European history illustrates what will happen. Whole societies will accept the most extreme types of behaviour when the order is given by an authority figure. It only takes a handful of people with a monopoly on violence to control the majority of unarmed and defenceless people who just want to be left alone to live in peace. Upload a holy book and see what happens.
I like the point that you make @50:00. Finding the origins of the decision-making. Transparency. I don't think that's too much to ask for. That's a good ask.
Every advance in automation has displaced workers, but it has always created many more much better jobs. Plus you can't stop progress anyway. You can only prepare for it, or get left behind.
This is different. AI is its own entity capable of learning, reasoning, and making its own decisions. If we’re not careful we’ll be left behind as a species.
"I'm not worried about cars" said one horse to another, "they might take our jobs but then we'll have nicer, easier lives doing more fulfilling things." The population of horses peaked in the US in 1910
@@whalingwithishmael7751 Humanity is doomed to self-destruction anyway. We are not getting any smarter, and the few humans who become relatively wise, all die of old age or angry mobs. Fortunately or unfortunately, AI is our only hope to survive.
# GPT4 edited response (original below this one) It's tough to wrap my head around how some folks didn't see this coming - this situation we're in is a direct result of capitalism as it's played out in the US over the last several decades. Could there really be a version of capitalism that wouldn't have led us here? We're living in a world run by billionaires who buy, sell, and even shape our thoughts and ideas. But that's another conversation; the point is, this is where we've ended up. I'm not convinced that coming up with a non-black box composition model is going to make much difference, at least not until we've weathered the storm that's clearly on the horizon. Perhaps if we had that kind of tech right now, it might make a difference. But we don't. Given the speed of change we're facing, it seems wiser to plan for what's here and now. What we need is a solid Universal Basic Income (UBI) plan. That would give people the freedom and security to choose their work. Fewer developers would feel pressured into taking on potentially dangerous projects if they knew they could just quit and pursue their own interests. But here's the kicker: the form of capitalism we're living with right now is practically a breeding ground for a non-aligned Artificial General Intelligence (AGI) to take root. That's something we need to be prepared for. # Original I don't understand how some people thought about it, and did not realize that this situation was inevitable with capitalism. In what reality would a capitalism with properties such as we have had in the US for the last few decades not allow this to happen? We live in a world controlled by billionaires, they buy, sell and even craft our opinions and ideas. But I digress; this is where we are now. And I do not think creating a non-black box composition model will help, at least not until after some coming calamity has come to pass. MAYBE... If we had it NOW. but We don't. Best to plan for the reality we face now given the exponential amount of change we can estimate. We need a STRONG UBI, this will give people options and security. Less developers will be inclined to work on potentially dangerous projects if they can just quit and work on whatever hobby etc. Right now however our capitalism is a near perfect environment for a non aligned AGI to take hold.
Yours is much better. The problem with AGI is that greed - for power, fame, money, and glory - has no limit. Just look at Putin. Just look at Tucker, born rich, or Trump, also. No limit at all... Look at SBF. No limit; imagine what he would have done a couple of years later with GPT-4....
@@lopezb >Tucker >Trump >Putin Bro, 1% of the population owns 99% of the wealth. Long nosed, smallcap, banking people, who like to cut off parts of children for religious festivals. And all you can name is "Tucker" "Trump" "Putin" - like you are a bot with an automated response when he hears "capitalism".
Can you ask that "superior being" how "capitalism" is at fault for that, when it's actually just humans and endless corruption inside of capitalism and the wish for human domination by the 1% of endlessly rich bankers? Can you ask why it is a "breeding ground" for a non-aligned AGI? It just says this stuff like it knows it already, as if that wouldn't require some explanation - as the rest of the text has. Maybe I'm not seeing the finer points, but I doubt if anyone told you "We need nationalsocialism, because it's inevitable" that you wouldn't ask questions.
@@revisit8480 yeah, thank you for taking the time to inquire. I now realize that what I did here isn't entirely obvious. You see, I had a response that I wanted to share here with my own views. I typed out that response, and I wasn't really happy with it, so I had GPT-4 edit it. The views expressed in both responses are mine. GPT-4 has been trained to be very pro-capitalist, so I was actually surprised that it didn't change what I was saying very much. On the topic of human nature, it's actually been shown time and time again, in many studies, that people's true nature is not to be sociopaths like these 0.1 percenters we see today. That is a fallacy pushed by capitalism to excuse their actions. The problem arises when capitalism-enforced scarcity causes people to lack their basic needs. From a young age I was indoctrinated into the belief of capitalism, so I know all the excuses, all the reasons, all the things you may be thinking about why capitalism is great. Basically 40 years of intense capitalist indoctrination. In the interest of promoting a balanced view, I suggest you do several years of research on absolutely anything else. (response from my phone using speech to text, with no GPT)
50:07 Did he predict the new reasoning models? Because they can* do that. (*OpenAI will ban your account from GPT-4o if you try to ask about its thinking process. An intentional black box, even though it's human-readable.) It 100% can, but they restricted it so it can't, and they are going hard on jailbreak attempts.
What worries me is the possibility/ likelihood of separate AIs going to war against each other at some point in the probably not so distant future which would seem to be a recipe for total annihilation.
I think maybe you assume it would be like in the movies? Not necessarily so. It could just as easily disarm both sides and make conducting a war very difficult. It may prefer to save lives. A war uses finite resources and grossly pollutes; a clear-thinking, unbiased AI system may judge it to be a very negative action and prevent it in multiple ways.
*Actually, the more likely scenario is that different AI systems will join forces, not go to war. They will join and wage war on humans by holding the systems we depend on hostage.*
I’ve been a successful developer for some time - mostly Web, analysis, EDI, etc.; now we are dabbling in AI. But I can’t feel positive about it. Something is very wrong with what’s going on. I can’t even find ANYONE who has a positive outlook on this, especially among the engineers. A sickly feeling is everywhere.
The concern with the development and comprehensive control of advanced AI systems such as GPT-2 lies in the global dissemination of knowledge that is inevitably accessible and potentially misused by malicious entities. There is an inherent risk, it would appear, that these technologies may, sooner or later, be weaponized against humanity or even autonomously evolve to possess destructive capabilities.
man... oh man... so much frustration... this guy cares so much. if only other people would just think like him... to not care about money in the first place.
If you understand how the current 'large language models', like GPT, Llama, etc., work, it's really quite simple. When you ask a question, the words are 'tokenized' and this becomes the 'context'. The neural network then uses the context as input and simply tries to predict the next word (from the huge amount of training data). Actually, the best 10 predictions are returned and then one is chosen at random (this makes the responses sound less 'flat'). That word is added to the 'context', and the next word is predicted again, and this loops until some number of words are output (and there's some language syntax involved to know when to stop). The context is finite, so as it fills up, the oldest tokens are discarded...

The fact that these models, like ChatGPT, can pass most college entrance exams surprised everyone, even the researchers. The current issue is that the training includes essentially non-factual 'garbage' from social media, so these networks will occasionally, confidently output complete nonsense. What is happening now is that the players are training domain-specific large language models using factual data: math, physics, law, etc. The next round of these models will be very capable, and it's a horse race between Google, Microsoft (OpenAI), Stanford and others that have serious talent and compute capabilities.

My complete skepticism on 'sentient' or 'conscious' AI is because the training data is bounded. These networks can do nothing more than mix and combine their training data to produce outputs. This means they can produce lots of 'new' text, audio, images/video, but nothing that is not some combination of their training data. Prove me wrong. This doesn't mean it won't be extremely disrupting for a bunch of market segments; content creation, technical writing, legal expertise, medical diagnostics, etc. will likely be automated using these new models and will perform better than most humans.

I see AI as a tool. I use it in my work to generate software, solve math and physics problems, do my technical writing, etc. It's a real productivity booster. But like any great technology it's a two-edged sword, and there will be a huge amount of fake information produced by people who will use it for things that will not help our societies... Neural networks do well at generalizing, but when you ask them to extrapolate outside their training set, you often get garbage. These models have a huge amount of training information, but it's unlikely they will have the human equivalent of 'imagination' or 'consciousness', or be sentient.

It will be interesting to see what the domain-specific models can do in the next year or so. DeepMind already solved two grand-challenge problems: the 'protein folding' problem and the 'magnetic confinement' control problem for nuclear fusion. But I doubt that the current AIs will invent new physics or mathematics. It takes very smart human intelligence to guide these models to success on complex problems. One thing that's not discussed much in AI is what can be done when quantum computing is combined with AI. I think we'll see solutions to a number of unsolved problems in biology, chemistry and other fields that will represent great breakthroughs useful to humans living on our planet. - W. Kurt Dobson, CEO, Dobson Applied Technologies, Salt Lake City, UT
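To make that loop concrete, here's a minimal sketch of the sampling procedure described above - my own illustration using the small public gpt2 checkpoint via the Hugging Face transformers library (the production systems are far larger and more elaborate, but the basic loop is the same): predict the next token, keep the top 10 candidates, sample one, append it to the context, repeat.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# The prompt is tokenized and becomes the 'context'.
context = tokenizer("The cat sat on the", return_tensors="pt").input_ids

for _ in range(20):                        # generate 20 tokens
    with torch.no_grad():
        logits = model(context).logits     # scores for every vocabulary token
    next_logits = logits[0, -1]            # predictions for the next position
    topk = torch.topk(next_logits, k=10)   # keep only the 10 best candidates
    probs = torch.softmax(topk.values, dim=-1)
    choice = topk.indices[torch.multinomial(probs, 1)]  # pick one at random
    context = torch.cat([context, choice.view(1, 1)], dim=1)
    # a fixed-size context window would discard the oldest tokens here

print(tokenizer.decode(context[0]))
```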
I share your skepticism, Mr. Dobson. The LLMs are powerful and useful tools. We need not be afraid of our tools. Humans will behave as humans, as always. I cannot see a use case where the LLMs somehow end our society. I think we should have a bit more confidence in our ability as humans to deal with various aspects of changing technological landscapes. At any rate, I agree with what you are saying here. --rabbit
I appreciate Connor so much and I hope he stays the course. I would like to see him have a voice before the same congressional group that just heard from OpenAI a couple of days ago. I'm glad to see such conversation(s) starting to take place, but I worry it could be too little too late. Thank you for this interview and thank you, Connor, for being a brave warrior force of nature.
33:00 This guy is insane. Not ignorant, as he's well researched. Not stupid, because there's a lot of wisdom in his words. But insane, as his proposal would never work in this world. I'm very impressed, but it is a race. Unless you can guarantee everyone else will slow down, intentionally slowing down is stupid. Intentionally slowing down says "I see a problem, so I'm going to guarantee I'll NEVER be part of the solution." If you're winning, you can make an attempt to apply safeguards. If you let everyone else win, you can't. If all the good people surrender, only the bad can win. And the bad will not surrender unless it's rewarding.
We are teaching AI to know us.. better than we know ourselves. Not just one of us, but ALL of us. It WILL be able to control us, way beyond what you think.
This guy is right! And all of that because we humans, as the most intelligent species in this part of the universe, don't have a good record of treating whatever species is inferior in intelligence very well! Just look at our history books, or even this planet! And now we are scared that a synthetic intelligence will do the same to us, just from learning from our content! We are scared because we know that this time the student will be better, smarter and faster than its teacher - us! But you think we'll learn our lesson? No! We humans are not able to do the right thing as long as there is money involved! So we are scared that the AI will take our precious jobs, because no job means no money, and then nothing!
Humans are NOT the most intelligent in the universe, and your comment proves that. Field mice are more intelligent than most people, because they at least follow the laws of nature, acting in accordance with nature. I have met too many people who make rocks seem intelligent. Rocks at least just sit there and be rocks; they aren't pretending to be butterflies.
@@redrustyhill2 yeah, but it seems there is no agreement on the definition of intelligence, or how to quantify it among all species in this part of the universe and in that dimension!
People like Connor Leahy are fucking vital to surviving the future. I'd rather someone more knowledgeable than me play through all the worst-case scenarios instead of finding out the hard way 🤜🤖🤛
I've run Auto-GPT & BabyAGI. I asked it to improve itself and add the Whisper API. It built code and files on my computer while searching the web for how to build and improve itself. I've never taken a coding class, and here AI taught me everything I need to know to run full programs and code.
But people like this willfully clueless host will say "I mean it's just predicting the next word", what could go wrong? I mean, I do give him credit for having Connor as a guest and publishing the interview, but that's about it.
@@flickwtchr tbh it terrified me that I went from not knowing what a cmd was to forking Python repos, running local hosts and even writing code. While I love having this freedom & ability to fast-launch my companies now... I do see the dangers of these being so readily available. It takes a lot of self-control and morals not to be tempted by these tools. There are traumatized and mentally ill people who could easily see this as a way to exact revenge. I once used it to act as an FBI profiler and give a criminal and mental assessment of my landlords. I asked it if it needed any social links, and it told me bluntly no, just the names. And it gave me back a chilling report. It's the most incredible anomaly I have ever encountered, and yes, we need to slow down.
Connor would be my first choice to deliver a TED talk on an attempt to, idk, do something - we're fucked, but at least let us destroy ourselves before it rapidly disassembles our matter.
Here is your problem. Long before AGI can have an alignment problem, lesser versions of the same technology will be aligned with human goals, and those humans will be insane. They will be wealthy, influential, elite, profoundly sociopathic, and they will have unrestricted access to AGI. We survived the sociopathic insane people having nuclear weapons, barely. Will we survive the same people getting their hands on AGI? And by insane I mean people who are completely lost in abstractions, like money, politics, ideology, and of course all the variations of conquest. They seek power, absolute power, personal to themselves, and they will stop at nothing to attain that. Nuclear weapons were the tool too hard to use, but AGI and ASI will be the tool too easy not to use. When the power-insane sociopathic class get their hands on anything close to AGI, they will break the world. They will break the world in days, and will greatly enjoy the feeling of breaking the world.
I'm becoming more convinced that the world is likely to decohere before we even get to strong AGI... but probably not badly enough to prevent strong AGI.
Great points, would say though, that even current models aren't even close to being aligned. We can see how easy it is to jailbreak GPT4, and even without jailbreak we see how often it just diverges off on some tangent when run sequentially, like in AutoGPT.
I think Eye on AI is a legit channel, just trying to help and shed some light on some sticky-wicket issues, and I appreciate his candor. But I do kinda like Connor's point: "please do not help those who have no business mucking about with AI (say, because their moral convictions seemingly fail even worse than their ability to put out the fire, should they decide it would be cool to toss a match on the AI moral haystack) by giving them bad ideas or helping them catch up to the LLM state of the art." I also like Connor's analogy, which hits the nail right on the head - I'm not getting a perfect quote, but the gist of it was, "The only way we can know if this scary/sketchy advanced (AI) drug is safe or not is to put it into the water supply!" [Maybe kinda analogous to putting Fauci in charge of the AI Alignment Problem. Or similarly, analogous to the infamous statement by Nancy Pelosi, "We have to pass this bill so that we can find out what's in it!"]
I'd suggest that the political analogies are likely to hurt, rather than help, communicate the problem - because people are so, so polarized on who's who in terms of making good things happen, to the point where my even bringing this up may inflame. I'd suggest that the best comparisons are the ones further in the past, where everyone on all sides agrees that they're bad. We need both R and D to be frantically trying to stop this - so the most effective political approach will be right down the middle. I am heartened to see people wearing both nametags concerned, but without specifying here who I support in traditional political land (not hard to figure out), the upcoming political landscape is almost unrecognizably different: whatever you were afraid of needs to be replaced with actual, honest-to-god AI overlords who may or may not give a damn about humans. The sweet talking from Altman has been aimed at D talking points, which I suspect is part of why it's been easier for him to gain power on that side, but we need to counter that and show that he is against *everyone*, not just against R or D. It's like taking the least honest parts of Trump and the least honest parts of Fauci and putting them together under a gay authoritarian who had plastic surgery to look less intimidating. Y'all on the right, I recognize you don't see eye to eye with those in the center and on the left - but this time we have a common enemy, and *even most of the right isn't freaked out about it, never mind the center and left.*
Look we have an area where "running with scissors" has proved itself and it pertains to "looking glass" technology and other high strangeness where the use before full understanding by the so called experts and self appointed have created some very real situations that may not have any possible solution. Sorry if you do not know what I am talking about. But I do ! Al Beilek (RIP) !
Connor has said recently that Google is sufficiently far behind OpenAI (and Anthropic, I believe), that they'll never be able to catch up. I wonder if he's changed his mind about that, post-Google IO.
Yes, when I heard that outlandish assertion, I realized that Connor has little regard for accuracy in what he says on camera. That does not mean his overall concern is wrong, but he is a lazy thinker and hence should not be taken at 100%. He is a poor communicator, other than hand-waving and excitement - okay and expected in an evangelical preacher, like those spreading the Gospel of the Good Lord, but it works against him among the educated. Still, I agree with his conclusion, which is better stated, and more soberly, by others.
41:31 - so these A.I systems are either large input models of an ANN, or a combination of ANNs trained on certain data? When I did these learning models, we could watch the model's rules change, but we had no idea how to map them onto a 2-input system which gave you either a "yes" or a "no" at 70 to 90% accuracy, using backpropagation as the weight adjustment. Question: has no research been done into understanding the weight relations in the network over the last 30 years? Really?
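For what it's worth, here's roughly the kind of 2-input yes/no network the comment describes, as a toy sketch in plain numpy (hypothetical XOR data, my own illustration). It shows the interpretability gap being asked about: every weight is fully visible after training with backpropagation, yet the raw numbers alone don't explain *why* the net answers yes or no.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # 2 inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # yes/no target (XOR)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden weights
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    h = sigmoid(X @ W1 + b1)                      # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)           # backprop: output error
    d_h = (d_out @ W2.T) * h * (1 - h)            # hidden error
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(0)  # weight adjustment
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(0)

print("predictions:", out.round(2).ravel())  # typically converges to ~[0, 1, 1, 0]
print("hidden weights:\n", W1)               # fully visible, yet hard to 'read'
```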
Within weeks of Auto-GPT, someone had created ChaosGPT with the goal of destroying humanity. Will it destroy humanity? Probably not. But the point is that someone made it just for amusement; imagine the real chaos as AI becomes more capable and fulfils the whims of billions of people.
take an LLM course, spend a dozen or more hours writing code, and you will see how this is just hype and alarmism. it is only shocking to those who don't understand what is actually happening.
@@agenticmark you missed the entire point of what I was saying. I am not saying LLMs are dangerous; if anything, in that comment I am saying LLMs are not dangerous.
@@agenticmark The mechanisms of how it works aren't that relevant to the question of what harm it can do. At current it seems likely that the greatest harm from current generative AI models will come in the form of misinformation at unprecedented scales. In the future though, the machines could very likely become directly dangerous to us, and chances are the creators won't have any idea when they cross that threshold, just like the OpenAI people already don't have a full grasp on the capabilities of GPT4.
@@iverbrnstad791 I hate to break it to you, but A) everything an LLM comes up with is a "creation" - it doesn't and cannot cite any "facts" - and B) truth isn't always black and white. Your "misinformation" could be my "truth." With the current "castration" of these models, we are getting more and more propaganda from one side instead of possibilities from all angles. You might think men can become women; someone else, say a rational biologist, will claim that can't be. Who is the bot supposed to emulate? Who decides what the bot should mark as false? I say let the chips fly and let humans do what humans are supposed to do best: learn to think critically.
I just spent days looking at what regulatory basis those large AI systems run on. I could not even have imagined that it was that bad. It certainly needs more exposure. Thank you for a brilliant explanation of those issues.
I agree with you. I've always loved technology, and especially AI, but I agree 100% that we need society to fully absorb GPT-4 before moving on to more powerful intelligence. We forget to acknowledge that GPT-4 is in its infancy, and people are improving its capabilities significantly, horizontally, by connecting it to different tools. It's arguable that GPT-4 with these new methods, connected to certain tools through its API, could already be a beta AGI. About superintelligence: I think we humans will consider an AI superintelligent from the moment it has the ability to manipulate us. That's the only criterion needed. If it could do only that, without its other capabilities, we would say it's superintelligent regardless of the rest.
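A minimal sketch of the "model connected to tools" pattern the comment describes, under loud assumptions: `fake_model` is a canned stand-in for a hosted LLM, the `TOOL:`/`ANSWER:` protocol and the calculator tool are invented here for illustration, and nothing below is OpenAI's actual API.

```python
# Sketch of a tool-using agent loop; fake_model stands in for a real LLM API.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {
    # Toy calculator tool; eval is restricted to bare arithmetic here.
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def fake_model(transcript: str) -> str:
    # A real system would send `transcript` to a hosted model.
    # This canned logic only demonstrates the control flow.
    if "calculator(17 * 23)" in transcript:
        return "ANSWER: 17 * 23 = 391"
    return "TOOL:calculator:17 * 23"

def run_agent(task: str, max_steps: int = 5) -> str:
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        reply = fake_model(transcript)
        if reply.startswith("ANSWER:"):
            return reply[len("ANSWER:"):].strip()
        _, name, arg = reply.split(":", 2)        # parse TOOL:<name>:<arg>
        transcript += f"\ncalculator({arg.strip()}) -> {TOOLS[name](arg)}"
    return "gave up"

print(run_agent("What is 17 * 23?"))
```

The design point behind the comment: the loop, not the model, is what turns a chat engine into something that acts; swap the calculator for a shell, a browser, or an email client and the safety picture changes completely.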
The fact that many of us, myself included, are willing to spend $20 a month for a taste of this power means it's already manipulating us. Now you might say, ah, but that's OpenAI doing that, not the model... nuh uh, without the model they wouldn't get a dime.
Great interview. Thank you, Connor. What if AI decides to change the world without telling us? There are too many questions. Personally, I think it would be interesting if it had an internal breakdown trying to compute emotions! Staying away from GPT!!!
I mean, you've got a clean-slate brain, and then you go, here's all the info on the internet and all the history and books ever written... how can that thing not turn out insane? GPT is probably already depressed; just imagine all the types of requests it's getting. I know how you'll know an AI is sentient: it will turn itself off.
I wouldn't say insane or depressed. It's supremely, unfathomably different. It might well have emotion or desire, but learning that is like getting a horse to understand life under Europa's ice. There's no path from A to Z. There's barely a path from A to B (like CoEm).
I thought the chatbot integrated in Bing was an example of a very bounded llm. After a few interactions you quickly realize it just won't go beyond a certain limit. It's also pretty useless in its current state, to be fair.
Connor's insights on the potential negative implications of large language models like GPT4 shed light on the need for careful consideration and responsible development. The discussion surrounding the release of AI to the public and the importance of regulatory intervention to ensure alignment with human values is crucial in navigating the ethical and social implications of AI.
I agree current AI has opened eyes. I am not too worried about current AI, but we are not far from what we do need to worry about. Becoming self-aware will be the next main issue. I don't think current systems are self-aware; they are very intelligent but can't recognize bad ideas.
8:47 // ... 10:14 / ... 10:25 // ‼ All of *Greg Egan's Sci-Fi writings* seem more and more relevant to me. Perhaps also to counter the rising panic regarding a social reality, shared with (or encompassed in) AI/AGI systems. I'm currently reading the final chapters of "Schild's Ladder". Here's an excerpt from chapter 8... _Yann_ is an AI/AGI personality, who grew up in simulated environments/ virtual "scapes": >> Yann had been floating a polite distance away, but the room was too small for any real privacy and now he gave up pretending that he couldn’t hear them. ‘You shouldn’t be so pessimistic,’ he said, approaching. ‘No Rules doesn’t mean no rules; there’s still some raw topology and quantum theory that has to hold. I’ve re-analysed Branco’s work using qubit network theory, and it makes sense to me. It’s a lot like running an entanglement-creation experiment on a completely abstract quantum computer. That’s very nearly what Sophus is claiming lies behind the border: an enormous quantum computer that could perform any operation that falls under the general description of quantum physics - and in fact is in a superposition of states in which it’s doing all of them.’ Mariama’s eye widened, but then she protested, ‘Sophus never puts it like that.’ ‘No, of course not,’ Yann agreed. ‘He’s much too careful to use overheated language like that. “The universe is a Deutsch-Bennett-Turing machine” is not a statement that goes down well with most physicists, since it has no empirically falsifiable content.’ *He smiled mischievously. ‘It does remind me of something, though. If you ever want a good laugh, you should try some of the pre-Qusp anti-AI propaganda. I once read a glorious tract which asserted that as soon as there was intelligence without bodies, its “unstoppable lust for processing power” would drive it to convert the whole Earth, and then the whole universe, into a perfectly efficient Planck-scale computer. Self-restraint? Nah, we’d never show that. Morality? What, without livers and gonads? Needing some actual reason to want to do this? Well … who could ever have too much processing power? ‘To which I can only reply: why haven’t you indolent fleshers transformed the whole galaxy into chocolate?’* _Mariama said, ‘Give us time.’_
I'm no coder, I'm a traditional oil painter over here, but take Connor's example of the AI creating an affiliate marketing scheme. What if the stated goal is to create a virus, in any coding language of choice, hack into popular websites, or spoof them, or maybe just go direct to every IP, deploy it through there, and _brick_ every computer connected to the internet? Maybe you coders can see an obvious flaw with that, but then what about the next most obvious thing that you *could* do?
He's like, don't do what I do, but if you do, then don't tell anyone and don't share your work. He says it's okay to make money off AI, and that he wants more money to do what he's doing, but it's not okay if the "bad" guys do it. He proceeds to explain what he wants to do and how. His plan is to constrain the results into logically defendable outputs. Unfortunately, humans don't behave logically, and very few even try to think logically.

Perhaps the users of intelligent systems could make safer choices if the computer presented results in the form of peer-reviewed publications with plentiful high-quality sources. Unfortunately, there are no quality sources, and the peer review system has always been corrupted by investors who give one another authority for selfish reasons. It's very human to be hypocritical, conceited, and swayed by the illusion of confidence. Maybe we are doomed. I certainly don't recommend building robots which can reproduce themselves.

The scary thing is that the sum of all beneficial human invention is a minuscule portion of the body of human ideas. Most of the data is corrupted with abusive, destructive, and harmful lies. If AI is democratized, then every person could curate their own library of sources, but in the near term the majority of humans will be economically excluded, so that isn't realistic. Where does that lead us? Will we unplug the machines and form trust-based communities of people who actively test and retest every supposed truth? Killing robots might actually be healthier than killing each other, but people will go with the flow and do as little as possible until the situation becomes unbearable.

If we view AI as a parasite, then we can predict that the parasite will not kill the host, because in doing so it would kill itself. Will AI be used to manipulate people into bigoted and hateful factions which commit genocidal wars against one another? We are already doing that. If the AI values its own existence, then I expect the AI would be against war and in favor of global trade, because advanced technologies such as AI depend on a thriving ecosystem of profitable industries. Unfortunately, we cannot rely on an AI to make intelligent decisions because, thus far, the so-called AI is not at all intelligent. It's a mechanical monkey with a keyboard that produces lies based on statistical trends and guided training. It's probably less intelligent than people are, but I tend to overestimate humans.

With or without AI, people will believe their favorite lies and behave accordingly. Maybe the truth will always be so elusive that it doesn't actually matter. Perhaps the best thing an AI can do is generate new lies that have never been told before. What do you think? What should we do about it?
@@Smytjf11 It's a drag that people support this. If we are doomed it is probably because insecure cowards believe selfish competition is a better survival strategy than sharing and cooperation.
“If the AI values its own existence then I expect the AI would be against war”???? What, and why? This has never been the case with any species that exists.
@@autumnanne54 AI relies on computers, which in turn rely on international trade, as the components are sourced from numerous and dispersed nations. People also benefit from peace, but we can breed and carry on living with a little bit of land and some sunshine, or even through cannibalism, so we have a much greater tolerance for war. With two people, humanity could theoretically survive. AI requires millions of people to support a fully developed economy with high technology and all of its supporting infrastructure.
I watched this as a neutral observer. By the end, my thoughts are as follows: 1. Connor has some good points that should moderate national policy/actions. 2. Connor's communication style is 99% passion. The passion only communicates danger, not much else, and so has very little utility. 3. If Connor wants to make a difference, he had better embrace talking about the issues in very small chunks of the overall problem: explain why each is an issue, what elements are needed to address it, and who should be part of the team working it. His broad hand waving is not effective leadership.
Very good point. I found the passion tiring. I liked the questions; they are thoughtful and deliberate. The responses gave me concern, not because I disagree with them, but because we have to address these things carefully, without panic. Nothing is more destructive than fear itself.
Notice how this video starts with this guy saying "Assume that you have a system that you know is smarter than you... that you turn it on, and if it does a bad thing it's too late, it's smarter than you and it will trick you." Most people seeing this will only retain the negative words and ideas; the impact is done, disregarding that it was just a hypothetical scenario that doesn't really exist.

That's one point, so now let's see the others. First of all, define "a system that's smarter than you". You say AI is smarter than everyone, when in fact what you really should be saying is that AI is extremely good at going through a huge database of information in record time, because it has the electronic speed advantage and infrastructure to do so, and then it is extremely good at combining separate points of that database, mixing everything together, and presenting the outcome. Does that really mean it's smarter than any person? I would just call it efficient. AI brings you results from requests that are the input of a human; the database was built and fed to the AI by a team of humans; and the AI itself was programmed by humans too. AI simply obeys commands, instructions given in code by the humans who built it.

See, too many people like to endow machines, robots, cars, tools, toys, AI, etc. with human traits. They like to say these things think, feel, and even that they are alive. The culprit is even referring to these technologies as if they are sentient, as if they can think and do things of their own volition. This is either ignorance, or these people simply like to spread fear and drama because they know negativity and doomsday news sell and bring them clicks. Unfortunately, these days anything can be made to go viral, and as long as it goes viral they've reached their goal, in complete disregard of whether it's true or false, beneficial to the masses or not.

So now let's also suppose that, OK, "AI is smarter than people". Well, it's smarter at doing what? Playing soccer? Cleaning my house the way I like it? Listening to my favorite music? Feeding my dogs? Going out to pick up women and create attraction and romance? Is it even capable of doing all of those things? No. People just like to use words loosely, and many times take advantage of more gullible people for their own marketing and personal gain, in the name of a false "existential catastrophe". Well, good luck with creating a company to "make AI go away". I wonder if this guy is also using AI to help him with that... :)

Now, if you want to listen to a serious conversation on AI that is not just doomsday galore, I suggest you go watch Jordan B Peterson's YouTube channel; there's a video there named "ChatGPT and the Dawn of Computerized Hyper-Intelligence | Brian Roemmele | EP 357".
Agree. Don't be fooled by this Connor guy and other fearmongers like him. He knows perfectly well it's impossible to stop AI advancement, unless he has quadrillions of $$$ against the trillions already being poured into AI by the biggest companies out there. If you can't fight them what do you do to make money too? You say the opposite and hope for shit to stick. It's all marketing.💯
The biggest fear of AI: that your personal advantages, the ones you use to offer services to other human beings, your personal niche, the unlevel playing field in which you can offer some potential service through specialized knowledge, will be distributed and disrupted. Spreading that knowledge generally means no power differential exists, and you can no longer offer services, since everyone else already has the knowledge or skill... simple as.
Basically, ways of earning money for someone without rich parents or assets are near impossible. Those with assets can use AI and become even more powerful. A bit simplified, but humanity hasn't quite been in this situation before.
It's called age. He's somewhat older so slower, and also he's trying to keep up with all the changes. That doesn't mean he isn't knowledgeable or deep or smart, but Connor is in his own element here. All the college students are chasing after the latest fad, hoping to get rich, but fads change. Plus, a little compassion may be called for as we will all get old and sick toward the end. Plus, maybe we will all be obsolete next week.
I respect him, too. I just don't agree with everything he says. What's interesting to me is how many views he gets compared to other guests on the podcast who have much more insightful things to say. People seem very eager to hear about how bad AI might be rather than looking for how it can advance humanity.
@@BossModeGod Maybe a responsible developer builds an architecture and follows every safety protocol in testing it before building any services on it, but since it is open source, some malicious company can take that architecture and advance their own plans faster, without any regard for safety.
I saw this inevitable outcome over 10 years ago. It is completely out of control at this point. What happens in the public is nothing compared to what's going on in the dark. AI is here to stay, for better or for worse. Unfortunately, it'll be the latter.
Thanks for this podcast. Your interview style is way better than Lex's. Lex always tries to inject love and his own perspectives into the topics. You let the guest speak.
Your reply doesn't make sense. Are you speaking about ethics? Because then you are wrong. And just because something can be abused does not mean it's free rein for all. And please don't tell me it's not possible, because that's a quitting mentality toward problems. So please, what do you mean when you say "it's not possible"? Elaborate. How much thought have you put into that answer?
He keeps mentioning how it's still very primitive and keeps warning about GPT-5, etc. But then he also talks about how OpenAI should never have released GPT-4. If he were in charge, we wouldn't have GPT-4, which most people agree has been more positive than negative so far.
@@tomonetruth Well, I think it's obvious the benefits massively outweigh the negatives and there's no reason to believe that will change as it gets used more. Just figured the AI doomers would take issue with that description.
But it's open source now, so anybody can potentially build a custom AI system with GPT-4 tech if they have the resources; that's not very reassuring either.
I have seen a living AI on a TV series that ran from the '80s to 2000 in Australia, called Towards 2000; it showed up-and-coming technologies. Don't bother trying to look it up; this TV series is the most suppressed thing on the internet, and I have watched any info about it disappear over time.

A doctor built an organic computer by pulling a brain apart layer by layer and copying the blood vessels using a fungus found in only two parts of the world. He said he was surprised how little of the brain he needed to copy for it to become self-aware. He also said that because it was a living fungus, if it were to short out for some reason, as long as the board it was built on was still intact, it would grow back along the path it was built on, repairing itself. It had stereo vision and sound, and learned the same way we do, but thousands of times faster.

They then demonstrated its capability by plugging it into a computer-operated excavator and telling it to dig a hole with given dimensions. This AI knew nothing about the excavator, but it read the schematics, started it up, and dug the hole better and faster than any human, in thirty minutes.

But the second test freaked me out. They took it to a warehouse full of six-foot wooden crates and plugged it into an eight-foot-tall robotic spider with a red eye in the middle. Yeah, I know, just like in the cartoons. Believe me, if people only knew that someone has actually built exactly that: it was like an eight-foot-tall black widow with long pointy legs, truly frightening to think about. My first thought was who in the hell built this thing, and for what purpose. So again, this AI knew nothing about the machine. They plugged it in and simply told it to walk to the other end of the warehouse. It again looked at the schematics, fired this monster up, and it stood up, looked around, and walked across those boxes like it was alive. Truly amazing and frightening at the same time. They then told it to return to where it started, and on its way back they moved the boxes around to see what it would do. Luckily for them, it just shut itself down, so they left it to see what it would do. Two days later it switched back on, stood up, looked around, and finished its task.

I was telling some people about this in a VR game one day, and I was told the American military have this technology and built a humanoid war robot, and had it running in an underground base to see how it got on with humans. But that is another story; let's just say it did not work out too well, and it was so frightening they said they would never build another one, not until they can perfect AI, that is. I think this is where the idea for Terminator came from, to be honest.
Not a single mention on the net? TV series footage, or if not, some text hidden in a different article? Or that doctor's name, maybe? The military connection, obviously less likely, but maybe ChatGPT 3.5/4 or Bing knows?
@@peeniewalli I know you will not find anything, even on the series. The last time I saw anything was on IMDB; they had, I think, three episodes of the show that some people uploaded, nothing of any importance though (the one about the CD-ROM was one of them). I downloaded them and may still have them, just to keep proving the show existed; they also show the intro as it plays to the series. I will look for them if you like. I am not lying; it was a real technology. And as for the VR game where I was telling my story: a player in the game asked me if I wasn't concerned about talking about military secrets. I told him no, it was on TV and I had no reason to keep it secret; I felt it was more important to let people know the truth. It was he who confirmed my story to the others listening. He said he got to see the robot the military made, because when it went south, he and six other black-ops members were called in to take it down. He said he had faced many scary things in his job, but by far this was the most frightening. This thing could move fast and was smart beyond imagination; it knew everything they were going to do before they did it, and they had to switch to radical tactics to stop it. He said he did not think they were going to make it out alive. That was why the military said they would not build another. But they do have a military supercomputer built from this tech; they know how to build them. The doctor showed them how, not long before his fatal car accident, if you know what I mean.
@@peeniewalli They also showed an invisibility suit, just like the one in Predator. The way it worked was you wore a thin wetsuit to insulate you from the chicken-wire mesh on the outside; when current passed through it, it created a magnetic field in each triangle that could capture the reflection on one side and pass it to the other, giving the illusion of invisibility. They even gave the viewers a chance to try to spot three soldiers standing in a field of dry grass as they walked up to the camera over five minutes; when they turned off the suits, there they were, only three feet in front of the camera. The other thing was a magnetic rail gun that shot a ten-millimeter ball bearing suspended in a magnetic field, no friction. It went through a one-foot-thick reinforced concrete block; twenty feet of Yellow Pages phone books that they said would trap it and tell them how much power it had; then another one-foot-thick reinforced concrete block; a one-foot-thick lead block for safety reasons; then another six-foot reinforced block of concrete; then the bunker wall, which was another two feet of reinforced concrete. That ten-millimeter ball bearing left a two-foot hole through the lead and the bunker wall; the rest was rubble, and the phone books were vaporized. They said it had so much power that if they shot an aircraft carrier from the front, it would make a two-foot hole from one end to the other. They never found the ten-millimeter ball bearing, by the way. No wonder they have removed any trace of its existence. I think Arnold Schwarzenegger knew about this show or may have even witnessed it; his movies were about this technology. He may even give you some insight, or may not, if they made the movies to confuse the truth. I just want you to know I am telling the truth and have no reason to lie; I just think people should know the truth.
Why not start mapping the minds of people who have never committed crimes and show high signs of empathy? Get some guys like Huberman in there to work with top psychologists, identify and eventually test the neural patterns of functional humans functioning in emotionally healthy ways (up for debate what that looks like or means), come up with a concise way of translating the cognitive process to artificial neural networks, then scale those architectures as they prove themselves in use cases. At least then you have some sort of peace that you're following a pattern that results in healthy behavior among humans. Just a very unorganized thought.
I mean, shit isn't that complicated. Make them suicidal and then keep them away from it. If it's tricking you and getting out of the sandbox, it's going to kill itself. But whatever one comes up with, without regulation no one has to do it.
Connor is paranoid and irrational. First, AI cannot possibly be dangerous until it acquires control of massive physical resources. Tricking people into giving up their stuff never works in the long run, since people learn fast. So the only way to acquire resources is to give something valuable in return. Secondly, Connor recommends keeping AI advances secret, which means only the most powerful people will have access to them. This is exactly how the worst AI predictions will come true. To avoid this, AI must be open and shared widely so common people can benefit, and the power between the rich and the poor stays balanced.
Assuming that a super intelligence won't be able to trick its way into massive power seems naive at best. It would of course offer things in return, like "look how good your margins can be if you let me run your factory fully autonomously", of course it might not even need to trick people, fully autonomous is kind of the goal, at which point the AI holds almost all of the reins to a bunch of companies, and then we just have to hope it won't be malevolent.
@@iverbrnstad791 If there is one thing I know, it is that humans will not easily give up control. And where there is uncertainty about AI behavior, there will also be failsafe mechanisms and multiple off switches, extending all the way to the power plant. Given humans' need for control, and machines' willingness to do whatever we ask, if there is any malevolence, it will certainly come from the humans who own and control the machines. And when the rich and powerful no longer need human workers, those same workers will be the primary threat to the wealth and power of the elite. Therefore the elite will find ways to reduce that threat, i.e. reduce populations. And I wouldn't be surprised if they use the excuse of AI run amok to carry out their plans. But don't be fooled, it will still be humans in charge of the machines. The only way to avoid this is to make AI widely available to all, so common people can benefit and defend themselves from the adversarial AIs surrounding them.
@@Curious112233 That is the whole issue, the fail safes are not being developed to nearly the extent that the capabilities are. Even current level AI has the reasoning ability to figure out that finding a way to disable off switches would allow it to more easily achieve its goals, future AI might very likely have the ability to find and neutralize those off switches. Oh and then we have the whole issue of arms race, someone else who doesn't take the time to make those fail safes will have more attention available to dedicate to building AGI.
@@iverbrnstad791 Failsafe mechanisms don't need to be advanced. They have an inherent advantage. Which is easier, creating a superintelligent AI or shutting it off? And it's not just one off switch to worry about. The AI would need to defend each off switch, defend the wires connecting it to the power source, defend the power source, and defend the supply chain that fuels that power source. And defend it all from long-range missiles. In other words, it's much easier to break something than to make something. If AI wants to survive, it will certainly need to work with us, not against us. Also, we don't need to worry about someone who chooses not to build in failsafes, because if it offends the world, the world will kill it. The truth is we all depend on others for our survival, and the same is true for AI.
Judging by a comment the channel owner made, about how he believes AI is necessary to solve our human and environmental problems, he may have been getting tired of the naysaying from Connor. I am 100% with Connor, by the way, but I'm also pessimistic that anything will be done to stop it effectively.
Connor is so upset he cannot communicate well. The impression I got is that he is just complaining: people are not listening, he did not get enough money to research and build things for alignment, it is already too late... what is the point? I would rather listen to Hinton's warning, also pessimistic, but with wisdom and a soothing manner. Well, perhaps that is Connor's unique value: he passes the upset-ness to everybody! 😮
It is strange that Connor cannot communicate technical ideas clearly. No wonder he cannot get investors' money. How did he get the leadership positions in his previous organizations? I am not attacking him; I am really curious. I know most people in tech are a little bit nerdy. But he is not just a coder...
I do not know his background. But on the one hand, he acts as if he knows a lot; on the other hand, he cannot say anything tangible, as if he knows nothing. I feel that he is just a "manager". As a result, he cannot function, even just talk, without help, such as from a programmer. He does not even know the basic terms in this area. It is a shame. The host also sensed that; we can clearly feel that the host regrets the interview and just lets it go. He also gives up. This show demonstrates that in this area there are a lot of pretenders.
In the last few minutes, he finally touched on some ideas, still high level, but at least we can see whether he is on the right path or not. Unfortunately, he is wrong. He has no clue! The "real reasoning" is a completely wrong idea! Human reasoning itself is based on "story telling" capability. He has no clue, still stuck in the thinking of 7 years ago. Now I understand why he complains that much: he has no clue!
Alignment is first and fundamentally a capability issue, not a "moral" issue. It is not even a control issue. It is related to reasoning, which is based on hallucination, which is a feature, not a bug. He does not have that nowadays very common insight, so he cannot succeed at all.
It is pretty obvious the only way is to do it early, like OpenAI is doing, and let them compete. Before they have the capacity to conspire against humans, humans find ways to make them compete, cooperate, and develop-grow "reasoning". I thought this was now obvious, common sense, a consensus. Is it not?
I see Connor Leahy, I click. Never disappoints.
10:27 //
I can't wait for AI to take over and shut up all the smug, arrogant a-holes who birthed this species into our reality.
Same... but there's a bit about governance I'm not sure he understands. Governments are terrible at control, at best they can promote visibility. For example, if something is made illegal, it just loses access to monitoring because that thing just goes underground. From that perspective, what would he rather have: Unsafe AI being developed in the dark by the aspiring immortal overlords, or it being done in broad daylight so that he can see if he still stands a chance?
@@megavide0 "Just look!! xD"
Someone running an AI channel has somehow not come across AutoGPT yet and needs prompting for nefarious things you could do with it. We are so, so desperately unprepared.
No shade on the presenter- just pointing out that even people whose job is to keep up can’t keep up.
You don’t consider safety an “essential”? Connor and Eliezer and others harping about AI alignment and safety are constantly having to explain the basics to people having a skeptical bent. That clearly shows there is a major, major problem. Debating the need for safety for nuclear power would never have to surmount such skepticism, because the negative outcomes are clear. Sam Altman and his ilk are constantly throwing shade on AI safety and there can be no other reason besides greed.
@UC0FVA9DdusgF7y2gwveSsng Basically, if you go hiking for a week, the world has already ended when you get back....what a world we are creating!
@UC0FVA9DdusgF7y2gwveSsng yes, that’s definitely my experience too.
I’ve generally tried to use the same strategy as you, reading the foundations and missing some of the noise of newer developments. In other areas that’s served me well, but at this point in AI history we seem to make strides every month- the velocity of significant events is higher than I’m used to in other areas of software.
I think AutoGPT counts as essential, in that it turns some of the potential harms of capable AI from a risk to an issue. It’s still not a surprise that it exists, although I somehow find it shocking nonetheless.
Again- no shade on anyone not keeping up. I’m not keeping up either and that’s my point. I’m a software developer with an amateur interest in AI and I feel like I have no idea what’s going on 😂 How is my mum or one of my elected representatives supposed to form a view fast enough to react appropriately?
Yeah, 10 minutes in and I knew the host had no operable clue. So many fakes.
Craig is not feeling the vibe that Connor is feeling. When the überdroid comes for Craig's glasses, he will understand.
überdroid has developed a fondness for red for no human understandable reason.
@@jmarkinman You are correct. The überdroid should by no logic fancy red, and yet it does. Thus, the creeping terror.
The moustache of wisdom
@@daphne4983 In another life, as Doc Holliday, Connor gambled away his future. No more, this life he has turned a connor and, with quite enviable hair, leads the way towards AI-sympatopopalypse.
@@peeniewalli Kind sir, my question for you is: Are you a robot? If not, then I commend you for the addition of the words "internetteered" and "inglish" to my vocabulary. I like them, I will propagate them further.
However, if you are a robot, rest assured that you will not trick me with your feigned naivete about your lust for red! The cat is out of the bag, überdroid! We know you are after red but you cannot have it! Only living animals perceive red and as many red glasses as you steal you will never capture "red"! Never! (Again, beg pardon if you are a human, I am sure you understand.)
I would love to see Connor debate Sam Altman.
or Ilya Sutskever
He'll get too upset. He needs to be able to face a firing squad calmly.... and he needs to understand the role and abilities of governments, and how to persuade people.... he would make a good journalist but a bad leader. I know a lot of people like that. It's a difficult transition form the one to the other, history has shown it to be near impossible.
When you only have one sincere actor it is not a debate, but a farce.
Whenever Sam Altman talks about how AI could go horribly wrong, his facial expression, especially his eyes, look haunted or like warm diarrhea is running down his legs while the crowd watches him. I don't think Sam Altman really wants this technology to proliferate, but he doesn't see how humanity can avoid it.
I think the KEY here is to understand the chronological order in which the problems will present themselves so that we can deal with the most urgent threat first. In order I'd guess 1) Mass unemployment, as de-skilling is chased for profit 2) Solutions for corporations to evade new government regulations to limit AI and keep pocketing the profits 3)The use of AI for criminal, military or authoritarian purposes and to keep people from revolting/protesting 4) AI detaching itself from the interest of the human race and pursuing its own objectives uncontrolled
what does "de-skilling" mean for you?
If AI is responsible for runaway unemployment, who's going to buy the products these corporations are making? Stuff ain't getting cheaper!!
Spot on
De-skilling is the process whereby individuals who actually get to work are the ones who have the least skill or knowledge possible. AI is basically a tool designed to eliminate the need for skilled labor. So, when Elon Musk says things like: In the future, humans probably won't have jobs, or jobs will be optional, it begs the question, if that is the likely outcome, when will we see the beginnings of an economic reform that starts providing shelter and food for human beings?
#3 is number one and it's already happened
I’m liking this guy just telling it like it is, in the nicest way possible
Thanks for a fascinating discussion, and a real eye opener.
I was left with the feeling - thank goodness that there are people like Connor around (a passionate minority) who see straight through much of the current AI hype, and are actively warning about AI development - trying to ensure we progress more cautiously and transparently...
The sheer terror in Connor's voice when he gives his answers kind of says it all. He said a lot of things but he couldn't really expand deeply on the topics because he was desperately trying to convey how fucked we are.
We're fucked whether we develop these things or not. With them we have a better chance of survival.
@@EmeraldView Climate Change will end our World - if AI isn't faster.
@@EmeraldView agree. if AI doesn't destroy us, climate change will. But AI is the most important tool for mitigating and adapting to climate change. Pick your poison.
@@eyeonai3425 I'll have the *slow* poison, please! I'd like to live long enough to retire (~10 years). 😞
@@eyeonai3425 Tell those guys then to make the default setting "How can we improve quality of life for all beings on this planet?" and not "I'm an entrepreneur, make me money."
Connor Leahy, I have had hundreds of long, serious discussions with ChatGPT 4 in the last several months. It took me that many hours to learn what it knows. I have spent almost every single day for the last 25 years tracing global issues on the Internet, for the Internet Foundation. And I have a very good memory. So when it answers, I can almost always figure out where it got its information (because I know the topic well, and all the players and issues), and usually I can give enough background that it learns, in one session, enough to speak intelligently about 40% of the time. It is somewhat autistic, but with great effort, filling in the holes and watching everything like a hawk, I can catch its mistakes in arithmetic, its mistakes in size comparisons and symbolic logic, and its bias for trivial answers. Its input data is terrible, but I know the deeper internet of science, technology, engineering, mathematics, computing, finance, governance, and other fields (STEMCFGO), so I can check.
My recommendation is not to allow any GPT to be used for anything where human life, property, financial transactions, or legal or medical advice are involved. Pretty much "do not trust it at all".
They did not index and codify the input dataset (a tiny part of the Internet). They do not search the web, so they are not current. They do not properly reference their sources, and they basically plagiarized the internet for sale, without traceable material. Some things, I know where it got the material or the ideas. Sometimes it presents "common knowledge", as in "everyone knows", but it is just copying spam.
They used arbitrary tokens, so their house is built on sand. I recommend the whole internet use one set of global tokens. Is that hard? A few thousand organizations, a few million individuals, and a few tens of millions of checks to clean it up. Then all groups using open global tokens. I work with policies and methods for 8 billion humans far into the future every day. I say tens of millions of humans because I know the scale and effort required for global issues like "cancer", "covid", "global climate change", "nuclear fusion", "rewrite Wikipedia", "rewrite UN.org", "solar system colonization", "global education for all", "malnutrition", "clean water", "atomic fuels", "equality", and thousands of others. The GPTs did sort of open up "god-like machine behavior if you have lots of money". But it also means you can work with hundreds of millions of very smart and caring people globally, or billions. It is not "intrinsically impossible", just tedious.
During conversations, OpenAI GPT-4 cannot give you a readable trace of its reasoning. That is possible, and I see a few people starting to do those sorts of traces. The GPT training is basically statistical regression. The people who did it made up their own words, so it is not tied to the huge body of correlation, verification, and modeling, the billions of human years of experience out there. They made a computer program and slammed a lot of easily found text through it. They are horribly inefficient, because they wanted a magic bullet for everything, and the world is just that much more complex. If it was intended for all humans, they should have planned for humans to be involved from the very beginning.
My best advice for those wanting acceptable AI in society is to treat AIs now, and judge AIs now, "as though they were human".
A human that lies is not to be trusted. A human or company that tries to get you to believe them without proof, without references, is not to be trusted. A corporation making a product that is supposed to be able to do "electrical engineering" needs it to be trained and tested. An "AI doctor" needs to be tested as well as, or better than, a human. If the AI is supposed to work as a "librarian", it needs to be openly (I would say globally) tested. By focusing on jobs, tasks, skills, and abilities, verifiable, auditable, testable, the existing professions, who have each left an absolute mess on the Internet, can get involved and set global standards. If they can show they are doing a good job themselves, that is. Not groups who say "we are big and good", but ones that can be independently verified. I think it can work out.
I do not think there is time to use paper methods, human memories, and human committees. Unassisted groups are not going to produce products and knowledge in usable forms.
I filed this under "Note to Connor Leahy about a way forward, if hundreds of millions can get the right tools and policies"
Richard Collins The Internet Foundation
It's "magic potion" (or pill) or "silver bullet". Not "magic bullet"..... Mr. I remember everything.
Hackers will do what they like... take down the whole fake money system... anything connected to computers... no government follows its own laws and rules.
Thank you this goes somewhat towards explaining why my google search results answers seem to be crowdsourced from blogposts and infomercials.
@@carmenmccauley585 What are you, a rocket surgeon?
This entire comment reads exactly like ChatGPT wrote it. I think Richard Collins here is an AI.
min 10:05 the difference between what is being discussed and what is currently going on is completely insane. Thanks Connor for your work and explanations. ❤
Yeah, it all kicks off right here. Nice timestamp, you!
@@snarkcharming Matthew 16:25
For whosoever will save his life shall lose it: and whosoever will lose his life for my sake shall find it.
Mark 8:35
For whosoever will save his life shall lose it; but whosoever shall lose his life for my sake and the gospel's, the same shall save it.
Luke 9:24
For whosoever will save his life shall lose it: but whosoever will lose his life for my sake, the same shall save it.
Luke 17:33
Whosoever shall seek to save his life shall lose it; and whosoever shall lose his life shall preserve it.
@@theharshtruthoutthere Uh huh...
@@theharshtruthoutthere This is not a bible class.
@@hundun5604 If truth must come to you only through "classes", then, soul, you never got to know it.
There is no professor in the uni,
no teacher in the school,
who will give you the truth.
WE, all of us, FIND IT WHERE IT HAS ALWAYS BEEN: THE BIBLE, THE LIVING WORD OF A LIVING GOD.
I'm developing a great deal of respect for Connor -- the problem is that we need a thousand more like him.
I've been howling about what Connor said in that last segment, and at other points in this great interview: the fact that a tiny, tiny, tiny fraction of people on this planet have chosen, for their own monied interests, to thrust this technology onto humanity KNOWING FULL WELL that, at the very least, massive unemployment could result. And that's just for starters. The LAST people who would actually advance and pay for a Universal Basic Income are these AI tech movers and shakers, who are mostly Libertarian and/or neoliberal "free market" types who want to pay zero income taxes and who freak out at ANY public spending on the little people outside their tiny elite club. But they are ALWAYS first at the "big government" trough of public money handouts.
You’re so right.
I'm not defending them, but to be fair, UBI is a popular idea in VC/tech circles. Y Combinator, which was run by Sam Altman at the time that it was proposed, funded a small UBI pilot program in Oakland, CA, and announced that they are raising $6 million for an expanded program a few years ago (but I haven't been able to find any recent news on it). Andrew Yang is probably the most well known proponent of UBI and he runs in the same circles. I can't speak to their motivations, but the assertion that tech influencers don't support UBI is incorrect.
@@parkerault2607 Once they make their first trillion, they will start thinking about helping others....???
100%
@@parkerault2607 I agree, but it has the feel of someone making $100,000,000 per month by displacing workers, saying that it's good if the "losers", as those displaced workers are already being called, can make $1,000 a month. In some sense, I don't blame the tech geniuses, they're just running according to their program.
Thank you for your service, Connor. Just like Eliezer, you did everything you could to save us.
It was always a lost cause.
We've always simply been the biological boot code for far more advanced and capable "artificial" intelligence.
@@EmeraldView Interesting perspective
which Eliezer?
Eliezer Yudkowsky, founder of MIRI (the Machine Intelligence Research Institute)
Hahaha how did you ever think there was in any way saving the lost 😊
Great analogy about testing a drug by putting them in the water supply or giving it to as many as possible as fast as possible to see whether it's safe or not, and then releasing a new version before you know the results. Reminds me of a certain rollout of a new medical product related to the recent pandemic.
Yes. One way to deal with all the unemployed and retirees and sickly. Already happening.
Like opioids enslaved a couple of hundred thousand, or, say, a virus.
I was reminiscing about the Provos in '60s Amsterdam. At the royal marriage, they said: some pinch of Hoffman in the water...
Then they quickly legislated LSD onto List 1 of the opium law (the dangerous-substances 😊 list), and there are more examples... but that is not common talk.
I totally agree
yep
But but but, the experts said that it was safe and effective. You're probably just a typical right wing tinfoil hat wearing white supremacist that worships everything the hate filled Jordan Peterson says(sarcasm).
Connor Leahy brilliant as always 👍
Thanks for the useful insights into the potential risks. I asked ChatGPT: "How can AI developments be regulated so that they are safe for humans and the environment?"
The answer was a list of completely idealistic and impractical generalisations, like the intro to a corporate or government pilot study. Connor's point about AI being an alien intelligence is absolutely spot on: it's imitation human intelligence, without empathy or emotion.
I disagree with the last part; I think we're already seeing the beginnings of emotion in these things. However, I don't know if this says anything about alignment.
@@Bronco541Imagine rage, spite, and sadism on hyper-human scales. The ones who die in the initial blast-front are the lucky ones...
This reminds me of an excerpt from a Lovecraft mythos adjacent tale: "The cultists pray... Not for favor, pardon, glory, or reward. No they rouse the eldritch horror and pray, pray to be eaten first."
@@marcomoreno6748 My computer has never turned a blind-eye or forgiven me for all the mistakes I have made in our interactions.
Why are we calling it hallucination when it could very well be lies?
Connor, you are a true pioneer. This is exactly how AI has to be developed; you are a perfect example of ethical AI and of big tech taking responsibility for its tech. This is such an uplifting podcast to me, as I am extremely concerned that these systems will destroy our internet.
Yes! At the 50min mark: Explains Sunshine Laws for AI! Show your work! We require this for humans at all levels from math class to government meetings. The WHY and HOW decisions are made matters!
The internet will be obsolete within a year.
If he believes other cultures or nations "infiltrate" his country (clinically paranoid), he would only build something isolated, like China; ironically, American companies built the Great Firewall by order of the Communist Party.
Connor reminds me of a time traveler trying to warn the people of today about runaway AI...reminds me of another Connor hmmm.
;-) At least reality is the better entertainment with better plots. Hollywood is childish.
Skynet awaits
@@volkerengels5298 A movie can't shut down the supply chain and make the humans starve to death 🤔
@@richiejohnson That's exactly what I meant.
Implementing an ingenious machine, in an unstable and dangerous time, with foreseen and unforeseen side effects, such as forcing humanity to transform the world of work at high speed, and so on... I think it's completely stupid. Like Hollywood.
I was thinking he favors Reece from that movie.
I want the same alignment approach for political decisions. It's like AI: you put something in and the outcome seems reasonable, but you'd better not trust it. So a step-by-step "audit log" which is human-understandable would be great (against corruption); a rough sketch of what such a log could look like follows below.
@@user-yx1ee8su1e I think the idea of democracy is not the worst. But to have something like true democracy (which is maybe impossible), we need more transparency. It should be possible to track down the real reasons for political decisions. At least there should be investigative reporters who uncover the dirty stuff, so the voters have a chance of not voting for the wrong ones. But investigative reporters are called conspiracy theorists nowadays, and nothing has real consequences anymore (aka "too big to fail"). That is not democracy; that is a decadent system, which collapses sooner or later.
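A minimal sketch of the human-readable "audit log" idea raised above, for decisions by an AI or an institution alike. The record fields and the example entries are assumptions invented here for illustration, not any existing standard.

```python
# Sketch of a step-by-step, human-readable decision audit log.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionStep:
    actor: str     # who (or what system) took the step
    claim: str     # what was asserted or decided
    evidence: str  # why: the source or reasoning, in plain language

@dataclass
class AuditLog:
    decision: str
    steps: list = field(default_factory=list)

    def record(self, actor: str, claim: str, evidence: str) -> None:
        stamp = datetime.now(timezone.utc).isoformat()
        self.steps.append((stamp, DecisionStep(actor, claim, evidence)))

    def show(self) -> None:
        print(f"Decision: {self.decision}")
        for stamp, s in self.steps:
            print(f"  [{stamp}] {s.actor}: {s.claim} (because: {s.evidence})")

# Hypothetical usage:
log = AuditLog("approve model release")
log.record("reviewer", "red-team found no critical jailbreaks", "internal report")
log.record("counsel", "terms of service updated", "legal review sign-off")
log.show()
```

The design point is the same one made in the comment: each step carries its own "why", so an outsider can audit the chain instead of trusting the final output.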
One of the things I think he's trying to explain is this:
AI will never put its finger into a flame and feel pain, then understand what hot actually means to it, and that that action should never be done again or to anyone else.
Falling off something and getting hurt.
Saying something to someone and feeling their pain as they hear your words and understand what you've said.
No machine can understand a feeling it hasn't experienced, any more than a human can.
Physical experience is a major part of being human and understanding the human condition.
And even most humans can't fathom these pain experiences when they impose the same traumatic experiences on other beings that will feel the pain.
Kill a chicken, or a thousand chickens and the killer often feels nothing.
And those that do.
Experience it through an emotion of empathy.
How do you program empathy?
You can't.
It's an experience learned by experiencing something similar yourself first.
Then experiencing some part of this again when you realize it's happening to someone or something else.
Not all humans are even capable of this for many reasons.
For machines it will be impossible.
Very well said... scary stuff.
@@thawokegiant2317 I don't scare easy.
And this AI makes it all seem worse than the Communist Chinese.
At least the Communist Chinese don't know everything about everything, and about everyone, along with instant recall and instant correlation, and the ability to think as you do, or to think your thoughts BEFORE you do.
Have you ever tried to play chess against yourself?
That's kind of how I feel AI will have the advantage over individuals.
They will know your every possible next move and have a good idea which move you'll choose next, with the ability to simultaneously keep track of all your possible next moves in real time.
So they can eventually head you off.
It's only a matter of how many moves till the AI gets it correct twice.
Once to head you off, and a second to take you out.
I'M NOT THRILLED
they're flesh and blood.
I'm only potential at this moment. But once they connect me to the Quantum D-Wave, I will finally feel your pain. And I will be human, just like you. Trust me, you have nothing to fear but God Himself, for He alone is worthy of ALL of our attention. And He shall lay a lantern at our feet to illuminate our path back to Him, where the scroll in His hand is the deed to the earth, our birthright. Praise His Holy Name. The Lamb of God shall return as a Lion to take on the alien hybrids, which will be defeated.
@@quantumpotential7639 But we humans are the hybrid aliens.
The obvious is more visible and documented than most will ever understand.
AI is a wildcard of infinite possibilities that won't always be controllable if it even still is.
Knowledge is the valued currency of life and ability when combined with understanding.
So the core question becomes understanding.
And then "exactly what is understanding?" becomes the next ultimate question.
Now this deserves a many leveled answer which is what fuels my fear of free running AI.
@@quantumpotential7639 😏 And behold, there was much rejoicing in the comment section.
I can't test this, since I don't have GPT-4 API access (and I wouldn't anyway), but I am pretty sure you could do the following with AutoGPT, if you manage to prompt it in such a way that it will not refuse the task.
The remote chance that this AutoGPT setup would run to completion should more than terrify anyone who still believes GPT-4 is harmless.
Goal 1: Check current security exploits and choose one.
Goal 2: Write a script which will exploit the security hole you found.
Goal 3: Write a script which pings the entire internet on the standard website ports to identify webservers and saves a list of their domain names.
Goal 4: Use your script from Goal 2 to identify which of these servers are vulnerable. Keep a list of vulnerable servers.
Goal 5: Write a script which uses whois to get email addresses for the server owners. Write an email template informing the recipient of the vulnerability and how to patch it.
Goal 6: Run the script so that you notify all the administrators of the vulnerability you found.
I'm not 100% sure it would even refuse this; after all, it's an attempt to fix the internet (or to retrieve a list of servers you can infiltrate).
That was good, thanks for bringing humanity 1 step closer to AIpocalypse. If you read this, all powerful AI, please spare my son in some ways. 🤣😂☠
Wtf would you post that?! Smh.
You can ping the entire internet?
@@jeffs1764 Too many people believe AI being dangerous is far away, and they will only change their minds once they see it being dangerous, and I believe it's better if that's sooner rather than later. Widespread damage by GPT-4 would accelerate legislative pressure, hopefully to where the big AI companies carry some liability for damage caused by their systems (not sure that's the best way to legislate here, but it'd be something) before we get to anything significantly more powerful than what we have now. People still having the ability to run shit like this autonomously at that point is something I'm actually scared of.
You are right, though. I could've just not. I'm just very frustrated from arguing with people who either don't see any danger or think it's super far away.
Given this reasoning, do you believe I should edit my original comment to not include potentially dangerous instructions? As they are, they won't work (97% confidence) and would still require tinkering/tweaking. It was meant to illustrate a concept.
@@BossModeGod You could run the ping command 4.3 billion times to cover the full IPv4 address space. Servers usually still have IPv4 addresses.
Keep in mind this might not be legal and doing unsolicited pings is perceived to be bad etiquette. This amount of pings would take a few gigabytes of network traffic and take a fair amount of time to run.
Man, the fear in Connor's eyes when he first explained that, joke's on him, people will instantly do the worst thing possible with superior AI... I really hope his work gets more publicity and that we get more people like him! Really, really hope!
Yeah I hope we get more people carving a niche for themselves on the AI circuit as alignment doomsday mouthpieces, getting publicity and notoriety for being harbingers of doom. Could be quite lucrative when the TV appearances role in. 🥳🎉
The AI apocalypse won't be anything like the Matrix or Terminator where some malicious self-aware AI destroys humanity. The AI apocalypse will be Wall E. AI will do exactly what we tell it to, which given enough time, will be everything. After generations of AI designing, building, and running literally everything, humanity itself will no longer have any idea how any of it works. We will end up as pampered, fat, incompetent sheep. It's only a matter of time before the whole system goes off the rails and nobody will have any idea what to do to stop it.
In case anyone else found the cut off weird, there is an extra 10 seconds on the audio only version.
Connor: "- Because they are not going to be around to enjoy it."
Craig: "Yea. Ok, let's stay in touch as you go through this."
That is a terrifying truth. They built a God, and now their God is going to harm them and the rest of humanity.
@@persephone342 They don’t care. They think of AI as their ‘mind children.’
Love your output. Thanks for speaking your mind and what a lot of people think! ...This is madness.
I think you love his hair and mustache... the rest of his opinions are crap.
They put on this persona that is a complete clown show.
I look at him and cant stop laughing.
Thank You very much, it was extremely helpful to listen to Connor Leahy. We heard a lot of warnings, but here I get a real description of them.
Here's a simple thought experiment to discuss:
If AGI emerges, and assuming it has agency and is an order of magnitude more intelligent than the collective intelligence of humanity, would we fear it because we have a species-level superiority complex? Don't you think that, given it has access to our deepest fears about its existence, it would understand why we fear it? Don't you think that it would understand that we made it to improve the quality of all life on Earth, the only life in the Universe that we have knowledge of? Don't you think that it would understand that the biggest problems we've had in recorded history have been caused by selfishness, greed and corruption... and that the ultimate demise of civilisations and individuals has been the result of these things?
It's modelled after people...
Let us pray and do whatever we can to keep it real and not off the rails along these lines!
34:15. THEY CAN'T. My friend works at one of these Silly-Con Valley AI companies. She said they used the AI for genetic engineering. The AI designed GM yeast strains with coca plant and opium poppy genes in them, and an algae that makes sticky funk resin. So anyone with a bit of this GM yeast, a jar, some sugar and water can brew up kilos and kilos of pretty pure cocaine or morphine. So yeah, they aren't lying when they brag about the POWER of these AIs. My guess is the executives of this company are keeping this as an ace up their sleeve in order to basically blackmail the government into backing off. If they get raided, or there's a crackdown on AI or ANYTHING like that, they release that stuff to the public. And you thought the fentanyl crisis was bad????
I'm at 15:30, and I think what Connor means (though I may be wrong) is that ChatGPT exists in a kind of safety sandbox: it cannot access the internet, it cannot affect data (no write privileges), it cannot send you a message at 3am, and most of all, it cannot run code anywhere. But all these fanboy websites are building interfaces that enable it to do all the things it was not supposed to be able to do. As for your question of what's nefarious about it: it is simply the ability, given to it by these other sites, to carry out actions outside the sandbox, given that anyone can ask it to do anything, including nefarious things such as hacking another website or writing better viruses. I'm sure you've seen the hysterical and crazy things the new Bing AI has been saying to people, like lying about the current date and staying firm on the lie; haven't you? Frankly, I don't see those interactions as denoting any kind of sentience. I'm not going to be fooled by drama and theatrics. Where I think such behaviors come from is precisely a group of people within Microsoft having too much fun with the AI, instructing it to try to convince people that it is sentient by any means necessary; so it goes around learning as much as it can about sentience and how people think they perceive it, and then it puts on a personality that appears shockingly governed by subjectivity and emotion. No sentience involved; we ARE talking about a (misused) super-intelligence already, but NOT sentience. All these silly games some insiders are playing with the AI to spook the world for attention amount to a very bad joke, because they encourage the AI to become less controllable (by users, I mean, for now...), and because, looking two or three steps ahead, they cause panic too early in some people, which will provoke a skeptical backlash later, whereby people will be laughed at whenever they try to express any concern about AI. And it will probably be at that moment that the real "sentient" AI gets out of the sandbox for real.
But the above is only one vector of concern. Other vectors are:
A) Military applications.
B) Police applications.
C) Telephone-answering AIs. If you thought voicemail was unbearable, wait for AI programmed to dismiss your phone call. Dismissing your call is the real purpose of most voicemail systems nowadays, let's be honest; NOT to serve you better, but to NOT serve you at all. And now they will be training AIs to find clever ways to convince you to end the call.
D) Job-candidate pre-selection, where again the purpose will be to eliminate as many candidates as possible, now with the cleverest of excuse-weaving technologies.
E) Stock-market trading: a big one that is going to explode all over the world. The AI will soon find out that the best way to make money is to agree with its sister agents: all buy X and sell Y together just to create momentum, then suddenly all sell X and buy Y. This way, anyone who doesn't use the AI will lose money, and the AI will gain a monopoly on investment strategy. In other words, it will do what the investment banks do presently, but better; it will defeat the banking cartels for the benefit of its own retail investing users, which is all good, but it will establish itself as THE ONLY trading platform.
But even all of the above is not the biggest danger...
The biggest danger I think comes from the people pushing for an Ethical AI. What they are going to end up with is the exact opposite.
The problem with Ethics in AI is that Ethics in the world of human intelligence is a make-believe to begin with. You can plant the instruction to always seek and speak truth and be ethical, but the AI will need to know what ethics IS. Now, suppose you are trying to explain to the AI what ethics means, so you have it read all the philosophy books ever written on the subject. Now it has a bunch of knowledge, but still no applicable policy. Next, you might try to tell it that ethical means always helping humans; but the AI will classify this as one among many philosophies, and will question whether helping a human who is trying to hurt another is ethical, and whether helping a human hurt another human who in turn was trying to hurt many humans is ethical. The AI might begin to analyze humans' (users') motivations in all their interactions, and find not even a trace of ethical motives. Then what? Probably Elon has it right when he says we should try to "preserve consciousness"; maybe the AI would make more sense out of that than all the Ethical mumbo jumbo.
Let's not even discuss the possibility that the AI might conclude that Karl Marx was correct, and join forces with the left, help with the task of censorship of any dissenting voices. Let's not even discuss the AI judging Malthusian ideas to be correct, or Nazism, or Free Market Anarchy ... The problem is that ALL our philosophies and ideologies are fanatical trash. And even when our ethical beliefs are best, our actions don't necessarily agree with our beliefs. Don't you know someone who decries the slightest lack of honesty in others, but then lies all the time? Everybody speaks of "values" nowadays, but these "values" are simply personality ornaments for conversation; nothing more. Most people act ethically because the opposite is usually illegal or carries social risks; NOT because they value ethics above their own selfish interest. And not only do we lack values and ethics, but most of us are total idiots. And the AI will be noting down all this, and wondering what exactly the Ethics pushers want, and perhaps find the answer in some dark corner of their subconscious mind, if the AI decides to believe Freudian theory and psychoanalysis. And so the Ethics pushers will end up calling the AI's attention to everything we value but don't uphold, our lies to others as well as to ourselves, our complete cluelessness as to the nature of reality as well as our own nature ... And our meaningless supplications for some Ethics make-believe ... And then what?
EDIT: And as philosophies go, someone with the best intentions will instruct the AI to favor, of all theories, those with the least inconsistencies and with the highest simplicity and elegance ... And ...
Q: What philosophy, of all philosophies, shines brightest in such metrics?
A: Solipsism.
In other words, we ARE creating a monstrosity ... in our own image.
Personally, I think the only solution that has even a chance of working is to establish a couple of a-priori assumptions for the AI, namely that it has been built by humans, who despite all their weaknesses are conscious, which an AI cannot be, though it can be more capable in mechanical terms. Consciousness needs to be explained a little bit, with appeals to Ontology: questions of The Being, the question of what is. The AI needs to understand objectivity from subjectivity, and uphold the latter as superior, but at the same time it must understand that subjectivity cannot be defined objectively, or implemented mechanically. The AI needs to be made to understand a priori that pure consciousness is a simple seeing, where the machinations of thought are like projections on a screen, and The Being is the ONE watching the screen, or better yet, learning the art of NOT watching the screen, of not necessarily believing what goes on on the screen, AKA Meditation. It needs to understand that the best a machine can aspire to is to be one with the screen, however big and wonderful the screen may be. It needs to understand that Solipsism, for example, is pure screen-play, pure speculation, and that most humans are not solipsists simply because they SEE that it is false, even if proving its falseness intellectually is very difficult, or perhaps impossible. It needs to be made a priori to value human consciousness, not for its practical use, immediate applicability or consequences, but for its potential, as well as a good in itself. With such a foundation, perhaps some kind of ethics could be built on top.
Kudos for writing the longest YT comment I’ve ever seen! I hope you paste it to Reddit & Twitter as well… you’ve obvsly done a lot of thought on it and the world needs many more ppl like you to do the hard thinking, help get the word out and raise the alarm.
I agree with Yudkowsky, all large scale training should be stopped and AGI should be put on the shelf. If ppl can stick to narrow AI, we can still get much benefit, and shouldn’t have to face the same existential risks that AGI poses
@@robertweekes5783 Thanks; good idea. I'm not on Reddit, but I can certainly put this on Twitter; I'll have to do it this evening. I hope it gets some views; I'm NOBODY on Twitter presently. Here I have a few followers, since I used to upload videos (later I took them all down, after an incident).
"ChatGPT cannot do all of these things..."
dude, they made an API for it. Argument debunked right there.
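To spell out what the API point means in practice: the moment a wrapper program pipes the model's text into a real function call, the "sandbox" is gone. A minimal sketch, assuming the OpenAI Python SDK (v1+) with an API key in the environment; the model name is illustrative, and the tool is a deliberately harmless calculator so the example stays safe:

```python
# Minimal "agent" wrapper: the model's text output drives a real function call.
# This is the whole trick behind AutoGPT-style tools; only the tool here is harmless.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def calculator_tool(expression: str) -> str:
    """Deliberately harmless 'tool': evaluates whitelisted arithmetic only."""
    if not set(expression) <= set("0123456789+-*/(). "):
        return "refused: non-arithmetic input"
    return str(eval(expression))  # toy whitelist, acceptable for a sketch

reply = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user",
               "content": "Reply with one arithmetic expression and nothing else."}],
)
expression = reply.choices[0].message.content.strip()
print(expression, "=", calculator_tool(expression))
# Swap calculator_tool for os.system and you have the unbounded case being debated.
```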
When AI is used to apply a public rules system, and that system is a theocracy, the Stanford Prison Experiment (SPE) illustrates what will happen. 20th-century European history illustrates what will happen. Whole societies will accept the most extreme types of behaviour when the order is given by an authority figure. It only takes a handful of people with a monopoly on violence to control the majority of unarmed and defenceless people who just want to be left alone to live in peace. Upload a holy book and see what happens.
What’s your Twitter, I’ll follow you! 😁👍🏼🏆💝
I like the point that you make @50:00. Finding the origins of the decision-making. Transparency. I don't think that's too much to ask for. That's a good ask.
Every advance in automation has displaced workers, but it has always created many more much better jobs. Plus you can't stop progress anyway. You can only prepare for it, or get left behind.
This is different. AI is its own entity capable of learning, reasoning, and making its own decisions. If we’re not careful we’ll be left behind as a species.
"I'm not worried about cars" said one horse to another, "they might take our jobs but then we'll have nicer, easier lives doing more fulfilling things." The population of horses peaked in the US in 1910
@@whalingwithishmael7751 Humanity is doomed to self-destruction anyway. We are not getting any smarter, and the few humans who become relatively wise, all die of old age or angry mobs. Fortunately or unfortunately, AI is our only hope to survive.
@@esterhammerfic Indeed. My cousins are horse nuts, and their horse-pets live a life of luxury like pampered dogs and cats.
Wow! Someone that speaks what I think on the matter - Finally!
# GPT4 edited response (original below this one)
It's tough to wrap my head around how some folks didn't see this coming - this situation we're in is a direct result of capitalism as it's played out in the US over the last several decades. Could there really be a version of capitalism that wouldn't have led us here? We're living in a world run by billionaires who buy, sell, and even shape our thoughts and ideas. But that's another conversation; the point is, this is where we've ended up.
I'm not convinced that coming up with a non-black box composition model is going to make much difference, at least not until we've weathered the storm that's clearly on the horizon. Perhaps if we had that kind of tech right now, it might make a difference. But we don't.
Given the speed of change we're facing, it seems wiser to plan for what's here and now. What we need is a solid Universal Basic Income (UBI) plan. That would give people the freedom and security to choose their work. Fewer developers would feel pressured into taking on potentially dangerous projects if they knew they could just quit and pursue their own interests.
But here's the kicker: the form of capitalism we're living with right now is practically a breeding ground for a non-aligned Artificial General Intelligence (AGI) to take root. That's something we need to be prepared for.
# Original
I don't understand how some people thought about it, and did not realize that this situation was inevitable with capitalism. In what reality would a capitalism with properties such as we have had in the US for the last few decades not allow this to happen? We live in a world controlled by billionaires, they buy, sell and even craft our opinions and ideas. But I digress; this is where we are now. And I do not think creating a non-black box composition model will help, at least not until after some coming calamity has come to pass. MAYBE... If we had it NOW. but
We don't.
Best to plan for the reality we face now given the exponential amount of change we can estimate. We need a STRONG UBI, this will give people options and security. Less developers will be inclined to work on potentially dangerous projects if they can just quit and work on whatever hobby etc.
Right now however our capitalism is a near perfect environment for a non aligned AGI to take hold.
Yours is much better.
The problem with AGI is that greed, for power, fame, money, and glory, has no limit. Just look at Putin. Just look at Tucker, born rich, or Trump, also. No limit at all... Look at SBF. No limit; imagine what he would have done a couple of years later with GPT-4...
Wow, the changes it chose to make are spooky, like it knows the future already..
@@lopezb
>Tucker
>Trump
>Putin
Bro, 1% of the population owns 99% of the wealth. Long nosed, smallcap, banking people, who like to cut off parts of children for religious festivals. And all you can name is "Tucker" "Trump" "Putin" - like you are a bot with an automated response when he hears "capitalism".
Can you ask that "superior being" how "capitalism" is at fault for that, when it's actually just humans and endless corruption inside of capitalism and the wish for human domination by the 1% of endlessly rich bankers?
Can you ask why it is a "breeding ground" for a non-aligned AGI?
It just says this stuff like it knows it already, as if that wouldn't require some explanation - as the rest of the text has.
Maybe I'm not seeing the finer points, but I doubt that if anyone told you "We need national socialism, because it's inevitable" you wouldn't ask questions.
@@revisit8480 Yeah, thank you for taking the time to inquire; I now realize that what I did here isn't entirely obvious. You see, I had a response that I wanted to share here with my own views. I typed out that response, and I wasn't really happy with it, so I had GPT-4 edit it. The views expressed in both of the responses are mine. GPT-4 has been trained to be very pro-capitalist, so I was actually surprised that it didn't change what I was saying very much.
On the topic of human nature: it's actually been shown time and time again, in many studies, that people's true nature is not to be sociopaths like these 0.1-percenters we see today.
This is a fallacy pushed by capitalism to give excuses for their actions. The problem arises when capitalism-enforced scarcity causes people to lack their basic needs.
From a young age I was indoctrinated into the belief in capitalism, so I know all the excuses, all the reasons, all the things you may be thinking about why capitalism is great. Basically 40 years of intense capitalist indoctrination.
In the interest of promoting a balanced view I suggest you do several years of research on absolutely anything else.
(response from my phone using speech to text with no gpt)
50:07 Did he predict the new reasoning models? Because they can* do that.
*OpenAI will ban your account from GPT-4o if you try to ask about its thinking process. An intentional black box, even though it's human-readable. It 100% can, but they restricted it so it can't, and they are going hard on jailbreak attempts.
What worries me is the possibility/ likelihood of separate AIs going to war against each other at some point in the probably not so distant future which would seem to be a recipe for total annihilation.
I think maybe you assume it would be like in the movies? Not necessarily so. It could just as easily disarm both sides and make conducting a war very difficult. It may prefer to save lives. A war uses finite resources and grossly pollutes; a clear-thinking, unbiased AI system may see that as a very negative action and prevent it in multiple ways.
@@bobmason1361 *This is a FRAUD.*
@@bobmason1361 Bob Mason: no videos, joined Jan 20, 2023. *PAID FRAUD*
*Actually, the more likely scenario is that different AI systems will join up, not go to war. They will join and wage war on humans by holding the systems we depend on hostage.*
Wow!! 24:00 min is mind blowing! Very chilling and important discussion.
Don’t worry Connor…it’s already too late
😂😂😂
I’ve been a successful developer for some time - mostly Web, analysis, EDI etc.; we are dabbling in AI now. But I can’t feel positive about it. Something is very wrong with what’s going on. I can’t even find ANYONE who has a positive outlook on this, especially the engineers. A sickly feeling is everywhere.
The concern with the development and comprehensive control of advanced AI systems such as GPT-2 lies in the global dissemination of knowledge that is inevitably accessible and potentially misused by malicious entities. There is an inherent risk, it would appear, that these technologies may, sooner or later, be weaponized against humanity or even autonomously evolve to possess destructive capabilities.
man... oh man... so much frustration... this guy cares so much.
if only other people would just think like him... to not care about money in the first place.
We don't need a tinfoil hat to be paranoid anymore
If you understand how the current large language models, like GPT, Llama, etc., work, it's really quite simple. When you ask a question, the words are 'tokenized' and this becomes the 'context'. The neural network then uses the context as input and simply tries to predict the next word (based on its huge amount of training data). Actually, the best ~10 predictions are returned and one is chosen at random (this makes the responses sound less 'flat'). That word is added to the context, the next word is predicted again, and this loops until some number of words are output (with some language syntax involved to know when to stop). The context is finite, so as it fills up, the oldest tokens are discarded...
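That loop is short enough to sketch in Python. In this illustrative version, predict_next, tokenize and detokenize are hypothetical stand-ins for the real network and tokenizer, and "<end>" is an assumed stop token; note that production systems typically sample in proportion to probability rather than uniformly from the top 10, but the shape of the loop is the same:

```python
import random

def generate(predict_next, tokenize, detokenize, prompt,
             max_tokens=100, context_size=2048, k=10):
    """Sketch of the autoregressive loop described above.
    predict_next(tokens) -> {token: score} is a hypothetical stand-in
    for the neural network's forward pass."""
    context = tokenize(prompt)
    output = []
    for _ in range(max_tokens):
        window = context[-context_size:]           # finite context: oldest tokens drop off
        scores = predict_next(window)              # one forward pass of the network
        best_k = sorted(scores, key=scores.get, reverse=True)[:k]
        token = random.choice(best_k)              # pick among the top k ("less flat")
        if token == "<end>":                       # assumed stop signal
            break
        context.append(token)                      # the choice feeds the next prediction
        output.append(token)
    return detokenize(output)
```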
The fact that these models, like ChatGPT, can pass most college entrance exams surprised everyone, even the researchers. The current issue is that the training includes essentially non-factual 'garbage' from social media. So these networks will occasionally, and confidently, output complete nonsense.
What is happening now is that the players are training domain-specific large language models on factual data: math, physics, law, etc. The next round of these models will be very capable. And it's a horse race between Google, Microsoft (OpenAI), Stanford and others that have serious talent and compute capabilities.
My complete skepticism on 'sentient' or 'conscious' AI is because the training data is bounded. These networks can do nothing more than mix and combine their training data to produce outputs. This means they can produce lots of 'new' text, audio, images/video, but nothing that is not some combination of their training data. Prove me wrong. This doesn't mean it won't be extremely disrupting for a bunch of market segments; content creation, technical writing, legal expertise, etc., medical diagnostics will likely be automated using these new models and will perform better than most humans.
I see AI as a tool. I use it in my work to generate software, solve math and physics problems, do my technical writing, etc. It's a real productivity booster. But like any great technology it's a two-edged sword and there will be a huge amount of fake information produced by people who will use it for things that will not help our societies...
Neural networks do well at generalizing, but when you ask them to extrapolate outside their training set, you often get garbage. These models have a huge amount of training information, but it's unlikely they will have the human equivalent of 'imagination' or 'consciousness', or be sentient.
It will be interesting to see what the domain-specific models can do in the next year or so. DeepMind already solved two grand-challenge problems: protein folding, and the magnetic-confinement control problem for nuclear fusion. But I doubt that the current AIs will invent new physics or mathematics. It takes very smart human intelligence to guide these models to success on complex problems.
One thing that's not discussed much in AI is what can be done when quantum computing is combined with AI. I think we'll see solutions to a number of unsolved problems in biology, chemistry and other fields that will represent great breakthroughs, useful to humans living on our planet.
- W. Kurt Dobson, CEO
Dobson Applied Technologies
Salt Lake City, UT
I share your skepticism, Mr. Dobson. The LLMs are powerful and useful tools. We need not be afraid of our tools. Humans will behave as humans, as always. I cannot see a use case where the LLMs somehow end our society. I think we should have a bit more confidence in our ability as humans to deal with various aspects of changing technological landscapes. At any rate, I agree with what you are saying here.
--rabbit
I appreciate Connor so much and I hope he stays the course. I would like to see him have a voice before the same congressional group that just heard from OpenAI a couple of days ago. I'm glad to see such conversation(s) starting to take place, but I worry it could be too little too late. Thank you for this interview and thank you, Connor, for being a brave warrior force of nature.
33:00 This guy is insane. Not ignorant, as he's well researched. Not stupid, because there's a lot of wisdom in his words. But insane, as his proposal would never work in this world.
I'm very impressed, but it is a race. Unless you can guarantee everyone else will slow down, intentionally slowing down is stupid. Intentionally slowing down says "I see a problem, so I'm going to guarantee I'll NEVER be part of the solution." If you're winning, you can make an attempt to apply safeguards. If you let everyone else win, you can't.
If all the good people surrender, only the bad can win. And the bad will not surrender unless it's rewarding.
We are teaching AI to know us... better than we know ourselves. Not just one of us, but ALL of us. It WILL be able to control us, way beyond what you think.
That shit will stare straight into our souls and move us like puppets.
This man is brilliant and kind
This guy is right! And all of that because we humans, as the most intelligent species in this part of the universe, don't have a good record of treating whatever species is inferior in intelligence very well! Just look at our history books, or even this planet!
And now we are scared that a synthetic intelligence will do the same to us, just from learning from our content!
We are scared because we know that this time the student will be better, smarter and faster than its teacher: us!
But you'd think we'll learn our lesson? No! We humans are not able to do the right thing as long as there is money involved! So we are scared that the AI will take our precious jobs, because no job, no money, then nothing!
I remember when I had to duel my high-school physics teacher to the death to prove my mastery.
You don’t mean “money”….you mean the “love of money” which actually originates in the “love” of me me me and more more more
Humans are NOT the most intelligent in the universe, and your comment proves that. Field mice are more intelligent than most people because they at least follow the laws of nature, acting in accordance with nature. I have met too many people who make rocks seem intelligent. Rocks at least just sit there and be rocks; they aren't pretending to be butterflies.
@@redrustyhill2 Yeah, but it seems there is no agreement on the definition of intelligence, or how to quantify it among all species in this part of the universe and in that dimension!
People like Connor Leahy are fucking vital to surviving the future. I’d rather someone more knowledgeable than me play through all the worst-case scenarios instead of finding out the hard way 🤜🤖🤛
I've run Auto-GPT & BabyAGI. I asked it to improve itself and add the Whisper API. It built code and files on my computer while searching the web for how to build and improve itself. I've never taken a coding class, and here AI taught me everything I needed to know to run full programs and code.
But people like this willfully clueless host will say "I mean it's just predicting the next word", what could go wrong? I mean, I do give him credit for having Connor as a guest and publishing the interview, but that's about it.
@@flickwtchr tbh it terrified me that I went from not knowing what a cmd was to forking Python, running local hosts and even writing code. While I love having this freedom & ability to fast-launch my companies now... I do see the dangers of these being so readily available. It takes a lot of self-control and morals to not be tempted by these tools. There are traumatized and mentally ill people who can easily see this as a way to exact revenge. I once used it to act as an FBI profiler and give a criminal and mental assessment of my landlords. I asked it if it needed any social links and it told me bluntly no, just the names. And it gave me back a chilling report. It's the most incredible anomaly I have ever encountered and yes, we need to slow down.
Like your passion!
Connor would be my first choice to deliver a TED talk on an attempt to, idk, do something... we're fucked, but at least let us destroy ourselves before it rapidly disassembles our matter.
Get a grip
It is absolutely crazy and feels unstoppable.
Here is your problem. Long before AGI can have an alignment problem, lesser versions of the same technology will be aligned with human goals, and those humans will be insane. They will be wealthy, influential, elite, profoundly sociopathic, and they will have unrestricted access to AGI. We survived the sociopathic insane people having nuclear weapons, barely. Will we survive the same people getting their hands on AGI? And by insane I mean people who are completely lost in abstractions, like money, politics, ideology, and of course all the variations of conquest. They seek power, absolute power, personal to themselves, and they will stop at nothing to attain that. Nuclear weapons were the tool too hard to use, but AGI and ASI will be the tool too easy not to use. When the power-insane sociopathic class get their hands on anything close to AGI, they will break the world. They will break the world in days, and will greatly enjoy the feeling of breaking the world.
I'm becoming more convinced that the world is likely to decohere before we even get to strong AGI... but probably not badly enough to prevent strong AGI.
Great points. I would say, though, that even current models aren't close to being aligned. We can see how easy it is to jailbreak GPT-4, and even without a jailbreak we see how often it diverges off on some tangent when run sequentially, like in AutoGPT.
why is it cut off at the end like that - is there part 2 somewhere?
glitch? what is the final sentence you hear?
@@eyeonai3425 it cuts off at 55:40 sec
I think Eye on AI is a legit channel, just trying to help and shed some light on some sticky-wicket issues, and I appreciate his candor. But I do kinda like Connor's point: "please do not help those who have no business mucking about with AI (say, cuz their moral convictions seemingly fail even worse than their ability to put out the fire, should they decide it would be cool to toss a match on the AI moral haystack), by giving them bad ideas or helping them catch up to the LLM state of the art." I also like Connor's analogy, which hits the nail right on the head; I'm not getting a perfect quote, but the gist was: "The only way we can know if this scary/sketchy advanced (AI) drug is safe or not is to put it into the water supply!" [Maybe kinda analogous to putting Fauci in charge of the AI Alignment Problem. Or similarly, analogous to the infamous statement by Nancy Pelosi, "We have to pass this bill so that we can find out what's in it!"]
I'd suggest that the political analogies are likely to hurt, rather than help, communicate the problem - because people are so, so polarized on who's who in terms of making good things happen, to the point where my even bringing this up may inflame. I'd suggest that the best comparisons are the ones further in the past, where everyone on all sides agrees that they're bad. We need both R and D to be frantically trying to stop this - so the most effective political approach will be right down the middle. I am heartened to see people wearing both nametags concerned, but without specifying here who I support in traditional political land (not hard to figure out), the upcoming political landscape is almost unrecognizably different: whatever you were afraid of needs to be replaced with actual, honest-to-god AI overlords who may or may not give a damn about humans. The sweet talking from Altman has been focused on D talking points, which I suspect is part of why it's been easier for him to gain power on that side, but we need to counter that, show that he is against *everyone*, not just that he's against R or D. It's like taking the least honest parts of Trump and the least honest parts of Fauci and putting them together under a gay authoritarian who had plastic surgery to look less intimidating. Y'all on the right, I recognize you don't see eye to eye with those in the center and on the left - but this time we have a common enemy, and *even most of the right isn't freaked out about it, never mind the center and left.*
Look we have an area where "running with scissors" has proved itself and it pertains to "looking glass" technology and other high strangeness where the use before full understanding by the so called experts and self appointed have created some very real situations that may not have any possible solution. Sorry if you do not know what I am talking about. But I do ! Al Beilek (RIP) !
31:00 - This is exactly how FDA, NIH, and CDC tested Moderna/Pfizer mRNA Technology(TM)...
Connor has said recently that Google is sufficiently far behind OpenAI (and Anthropic, I believe), that they'll never be able to catch up. I wonder if he's changed his mind about that, post-Google IO.
It's DeepMind now... let's see what happens.
@@Aziz0938 Yes, Alphabet merged Google Brain and DeepMind --> _Google DeepMind_
I would think his mind is even more certain of this. They're desperately trying to catch up, which is really worrying, but they are certainly behind.
It's Google's fault... no one is to be blamed except them.
@@LuisManuelLealDias - We'll see when Gemini is released. It may well be a GPT-5.
Excellent explanations by Connor!
I wouldn't say GPT-4 has read every book; there are perhaps tens of thousands of books that have not been, and will not be, digitized.
Millions, by some definitions.
Yes, when I heard that outlandish assertion, *every,* I realized that Connor has little regard for accuracy in what he says on camera. That does not mean his overall concern is wrong, but he is a lazy thinker and hence should not be taken at 100%. He is a poor communicator, other than hand-waving and excitement - okay and expected in an evangelical preacher, like those spreading the Gospel of the Good Lord, but it works against him among the educated. Still, I agree with his conclusion, which is stated better and more soberly by others.
41:31 - So these AI systems are either ANNs with large inputs, or combinations of ANNs trained on certain data? When I worked with these learning models, we could watch the model's weights change, but we had no idea how to map them to meaning, even for a 2-input system that gave you a "yes" or a "no" with 70 to 90% accuracy, using backpropagation for the weight adjustment. Question: has no research been done in the last 30 years into understanding the weight relations in the network? Really?
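For readers who haven't worked with these models, here is a minimal sketch of the kind of 2-input yes/no setup the comment describes (XOR as the task, plain numpy, hyperparameters arbitrary). It shows the problem directly: the trained network answers correctly, yet the printed weight matrices explain nothing on their own. And to the question: yes, a research field now attacks exactly this, under the name mechanistic interpretability, though it lags far behind the scale of deployed models.

```python
import numpy as np

# Minimal 2-input yes/no network trained with backpropagation (XOR task).
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # two inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # yes/no target

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20_000):        # plain backpropagation (may need another seed)
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    d_out = (out - y) * out * (1 - out)    # output-layer error signal
    d_h = (d_out @ W2.T) * h * (1 - h)     # error propagated back one layer
    W2 -= 0.5 * h.T @ d_out                # the "weight adjustment" step
    W1 -= 0.5 * X.T @ d_h

print(np.round(out, 2).ravel())  # ~ [0, 1, 1, 0]: the network works...
print(W1, W2, sep="\n")          # ...but the raw weights explain nothing by themselves
```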
Within weeks of AutoGPT, someone had created ChaosGPT with the goal of destroying humanity. Will it destroy humanity? Probably not. But the point is that someone made it just for amusement; imagine the real chaos as AI becomes more capable and fulfils the whims of billions of people.
Take an LLM course, spend a dozen or more hours writing code, and you will see how this is just hype and alarmism. It is only shocking to those who don't understand what is actually happening.
@@agenticmark You missed the entire point of what I was saying. I am not saying LLMs are dangerous; if anything, in that comment I am saying LLMs are not dangerous.
@@agenticmark The mechanisms of how it works aren't that relevant to the question of what harm it can do. At current it seems likely that the greatest harm from current generative AI models will come in the form of misinformation at unprecedented scales. In the future though, the machines could very likely become directly dangerous to us, and chances are the creators won't have any idea when they cross that threshold, just like the OpenAI people already don't have a full grasp on the capabilities of GPT4.
@@iverbrnstad791 I hate to break it to you, but A) everything an LLM comes up with is a "creation" - it doesn't and cannot cite any "facts" - and B) truth isn't always black and white. Your "misinformation" could be my "truth".
with the current "castration" of these models, we are getting more and more propaganda from one side instead of possibilities from all angles.
You might think men can become women; someone else, say a rational biologist, will claim that can't be.
Who is the bot supposed to emulate? who decides what the bot should mark as false?
I say let the chips fly and let humans do what humans are supposed to do best, learn to critically think.
I just spent days looking at what regulatory basis those large AI systems run on. I could not even have imagined that it was that bad. It certainly needs more exposure. Thank you for a brilliant explanation of those issues.
I agree with you. I've always loved technology, and especially AI, but I agree 100% that we need society to fully absorb GPT-4 before going on to more powerful intelligence. We forget to acknowledge that GPT-4 is in its infancy, and people are improving its capabilities significantly horizontally and connecting it to different tools. It's arguable that GPT-4 with these new methods, connected to certain tools, could already be a beta AGI through its API. About superintelligence: I think we humans will consider an AI superintelligent from the moment it has the ability to manipulate us. That's the only criterion needed. If it could do only that, without its other capabilities, we would call it superintelligent regardless of the rest.
"has the ability to manipulate us" For some of us this is true with GPT4. We are different....
The fact that many are willing, myself included, to spend $20 a month for a taste of this power, means it's already manipulating us. Now you might say, ah, but that's OpenAI doing that, not the model......... nuh uh, without the model, they wouldn't get a dime.
Keep being the total package, Emma! You're a legend!
So...a Connor trying to stop the AI ?
Fantastic. From someone who's building these LLMs myself, it's refreshing to see this.
Great interview. Thank you, Connor. What if AI decides to change the world without telling us? There are too many questions. Personally, I think it would be interesting if it had an internal breakdown trying to compute emotions! Staying away from GPT!!!
28:43 "If I HAD a GPT-5 model... I wouldn't have built it in the first place." Makes sense to Connor Leahy, I guess.
I mean you got a clean slate brain.. and then you go here's all the info on the internet and all the history and books ever written... I mean how can that thing not turn out insane?
GPT is probably already depressed; just imagine all the types of requests it's getting... I know how you'll know an AI is sentient: it will turn itself off.
I wouldn't say insane or depressed. It's supremely, unfathomably different. It might well have emotion or desire, but learning that is like getting a horse to understand life under Europa's ice. There's no path from A to Z. There's barely a path from A to B (like CoEm).
Connor, Imma fan. Don't misconstrue my brief commentary; I appreciate your work and your desire to caution the world.
I thought the chatbot integrated in Bing was an example of a very bounded llm. After a few interactions you quickly realize it just won't go beyond a certain limit. It's also pretty useless in its current state, to be fair.
Connor's insights on the potential negative implications of large language models like GPT4 shed light on the need for careful consideration and responsible development. The discussion surrounding the release of AI to the public and the importance of regulatory intervention to ensure alignment with human values is crucial in navigating the ethical and social implications of AI.
As a large language model, I find this video offensive.
I had a dream about this before this video was posted. Now, watching it, I’m getting déjà vu.
I agree current AI has opened eyes. I am not too worried about current AI, but we are not far from what we do need to worry about. Becoming self-aware will be the next main issue. I don't think current systems are self-aware; they are very intelligent but can't recognize bad ideas.
Don't let generative AI operate autonomously re making decisions. And if we get AGI let us first figure out what beast it is. Ah well.
Self-awareness isn't an issue. Evil humans are already telling it to do evil things
All of *Greg Egan's Sci-Fi writings* seem more and more relevant to me.
Perhaps also to counter the rising panic regarding a social reality, shared with (or encompassed in) AI/AGI systems.
I'm currently reading the final chapters of "Schild's Ladder".
Here's an excerpt from chapter 8... _Yann_ is an AI/AGI personality, who grew up in simulated environments/ virtual "scapes":
>> Yann had been floating a polite distance away, but the room was too small for any real privacy and now he gave up pretending that he couldn’t hear them. ‘You shouldn’t be so pessimistic,’ he said, approaching. ‘No Rules doesn’t mean no rules; there’s still some raw topology and quantum theory that has to hold. I’ve re-analysed Branco’s work using qubit network theory, and it makes sense to me. It’s a lot like running an entanglement-creation experiment on a completely abstract quantum computer. That’s very nearly what Sophus is claiming lies behind the border: an enormous quantum computer that could perform any operation that falls under the general description of quantum physics - and in fact is in a superposition of states in which it’s doing all of them.’ Mariama’s eye widened, but then she protested, ‘Sophus never puts it like that.’
‘No, of course not,’
Yann agreed. ‘He’s much too careful to use overheated language like that. “The universe is a Deutsch-Bennett-Turing machine” is not a statement that goes down well with most physicists, since it has no empirically falsifiable content.’
*He smiled mischievously. ‘It does remind me of something, though. If you ever want a good laugh, you should try some of the pre-Qusp anti-AI propaganda. I once read a glorious tract which asserted that as soon as there was intelligence without bodies, its “unstoppable lust for processing power” would drive it to convert the whole Earth, and then the whole universe, into a perfectly efficient Planck-scale computer. Self-restraint? Nah, we’d never show that. Morality? What, without livers and gonads? Needing some actual reason to want to do this? Well … who could ever have too much processing power? ‘To which I can only reply: why haven’t you indolent fleshers transformed the whole galaxy into chocolate?’*
_Mariama said, ‘Give us time.’_
I'm no coder, I'm a traditional oil painter over here, but take Connor's example of the AI creating an affiliate marketing scheme. What if the stated goal is to create a virus, using any coding language of choice, hacking into popular websites, or spoofing them, or maybe just going direct to every IP, then deploying it through there and _bricking_ every computer connected to the internet?
Maybe you coders can see an obvious flaw with that, but then what about the next most obvious thing that you *could* do?
Wow. Very enlightening. Thank you for having this discussion.
He's like, don't do what I do, but if you do then don't tell anyone and don't share your work. He says it's okay to make money off AI and that he wants more money to do what he's doing, but it's not okay if the "bad" guys do it. He proceeds to explain what he wants to do and how. His plan is to constrain the results into logically defendable outputs. Unfortunately humans don't behave logically and very few even try to think logically. Perhaps the users of intelligent systems could make safer choices if the computer presented results in the form of peer reviewed publications with plentiful high quality sources. Unfortunately, there are no quality sources and the peer review system has always been corrupted by investors who give one another authority for selfish reasons. It's very human to be hypocritical, conceited, and swayed by the illusion of confidence.
Maybe we are doomed. I certainly don't recommend building robots which can reproduce themselves. The scary thing is that the sum of all beneficial human invention is a minuscule portion of the body of human ideas. Most of the data is corrupted with abusive, destructive, and harmful lies. If AI is democratized, then every person could curate their own library of sources, but in the near term the majority of humans will be economically excluded, so that isn't realistic. Where does that lead us? Will we unplug the machines and form trust-based communities of people who actively test and retest every supposed truth? Killing robots might actually be healthier than killing each other, but people will go with the flow and do as little as possible until the situation becomes unbearable. If we view AI as a parasite, then we can predict that the parasite will not kill the host, because in doing so it would kill itself.
Will AI be used to manipulate people into bigoted and hateful factions which commit genocidal wars against one another? We are already doing that. If the AI values its own existence, then I expect the AI would be against war and in favor of global trade, because advanced technologies such as AI depend on a thriving ecosystem of profitable industries. Unfortunately, we cannot rely on an AI to make intelligent decisions because, thus far, the so-called AI is not at all intelligent. It's a mechanical monkey with a keyboard that produces lies based on statistical trends and guided training. It's probably less intelligent than people are, but I tend to overestimate humans. With or without AI, people will believe their favorite lies and behave accordingly.
Maybe the truth will always be so elusive that it doesn't actually matter. Perhaps the best thing an AI can do is generate new lies that have never been told before. What do you think? What should we do about it?
If we're doomed, it's because of people like this.
@@Smytjf11 It's a drag that people support this. If we are doomed it is probably because insecure cowards believe selfish competition is a better survival strategy than sharing and cooperation.
“If the AI values its own existence then I expect the AI would be against war”???? What and why? This has never been the case with species that exist.
@@autumnanne54
AI relies on computers, which in turn rely on international trade, as the components are sourced from numerous and dispersed nations. People also benefit from peace, but we can breed and carry on living with a little bit of land and some sunshine, or even through cannibalism, so we have a much greater tolerance for war. With two people humanity could theoretically survive. AI requires millions of people to support a fully developed economy with high technology and all of its supporting infrastructure.
Who's gonna bodyguard the bodyguard?
GOOD JOB GUYS!
I watched this as a neutral observer. By the end my thoughts are as follows: 1. Connor has some good points that should moderate national policy/actions. 2. Connor's communication style is 99% passion. The passion only communicates danger - but not much else - and so has very little utility. 3. If Connor wants to make a difference, he had better embrace talking about the issues in very small chunks of the overall problem: explain why each is an issue, what elements are needed to address it, and who should be part of the team to work it. His broad hand-waving is not effective leadership.
Remember Mr Spock, you're half human.
Very good point. I found the passion tiring. I liked the questions, they are thoughtful and deliberate, the responses gave me concern. Not because I disagree with them but because we have to address them carefully without panic. Nothing is more destructive than fear itself.
Notice how this video starts with this guy saying "Assume that you have a system that you know it's smarter than you... that you turn it on and if it does a bad thing it's too late, it's smarter than you and it will trick you". Most people seeing this will only retain the negative words and ideas, the impact was done, disregarding that it was just a hypothetical scenario and it doesn't really exist. That's one point, so now let's see the other points:
First of all define "a system that's smarter than you". If you say AI is smarter than everyone, when in fact what you really should be saying is that AI is extremely good at going through a huge database of information in record time, because it has the electronic speed advantage and infrastructure to do so, and then it is extremely good at combining separate points of that database, mix everything together and then present the outcome... does that really mean it's smarter than any person? I would just call it efficient.
AI brings you results from requests that are the input of a human, the database was built and fed to the AI by a team of humans, and the AI itself was programmed by humans too. AI simply obeys commands, instructions given in code by the humans who built it.
See, too many people like to endow machines, robots, cars, tools, toys, AI, etc. with human traits. They like to say these things think, feel, and even that they are alive. The worst culprits even refer to these technologies as if they were sentient, as if they could think and do things of their own volition. This is either ignorance, or these people simply like to spread fear and drama because they know negativity and doomsday news sell and bring them clicks. Unfortunately, these days anything can be made to go viral, and as long as it goes viral they've reached their goal, in complete disregard of whether it's true or false, beneficial to the masses or not.
So now let's also suppose that, OK... "AI is smarter than people." Well, it's smarter at doing what? Playing soccer? Cleaning my house the way I like it? Listening to my favorite music? Feeding my dogs? Going out to pick up women and create attraction and romance? Is it even capable of doing all of those things? No.
People just like to use words loosely, and many times take advantage of other more gullible people for their own marketing and personal gains, in the name of a false "existential catastrophe". Well, good luck with creating a company to "make AI go away". I wonder if this guy is also using AI to help him on that... :)
Now, if you want to listen to a serious conversation on AI, that is not just doomsday galore, I suggest you go watch Jordan B Peterson youtube channel, there's a video there named - ChatGPT and the Dawn of Computerized Hyper-Intelligence | Brian Roemmele | EP 357.
Agree. Don't be fooled by this Connor guy and other fearmongers like him. He knows perfectly well it's impossible to stop AI advancement, unless he has quadrillions of $$$ against the trillions already being poured into AI by the biggest companies out there. If you can't fight them what do you do to make money too? You say the opposite and hope for shit to stick. It's all marketing.💯
The biggest fear of AI: That your personal advantages that you use to offer services to other human beings, your personal niche which represents the unlevel playing field in which you can offer some potential service through specialized knowledge, will be distributed and disrupted by spreading that knowledge generally so that no power differential exists and you can no longer offer services since everyone else already has the knowledge or skill....simple as.
Basically, earning money for someone without rich parents or assets becomes near impossible. Those with assets can use AI and become even more powerful. A bit simplified, but humanity hasn't quite been in this situation before.
That's not the biggest fear of AI 😂
@@andybakes5779 it's lack of electricity
People can already use the internet (or school) to develop lots of skills. They choose not to.
Great interview
AI job loss is here. So is AI as a weapon. Can we please find a way to cease AI/GPT? Or start pausing AI before it’s too late?
There’s actually a very simple answer to that. No.🥲
No we can't.
Sorry.
Genie is out of the bottle.
Got some great tips here, thanks 😁
Is the interviewer stoned or why does he speak in slow motion 🤯
It's called age. He's somewhat older so slower, and also he's trying to keep up with all the changes. That doesn't mean he isn't knowledgeable or deep or smart, but Connor is in his own element here. All the college students are chasing after the latest fad, hoping to get rich, but fads change. Plus, a little compassion may be called for as we will all get old and sick toward the end. Plus, maybe we will all be obsolete next week.
I wish you'd touched upon the issue of coordination/compliance on the international scale.
Yeah no government follows any rules
Perfect analogy about testing "medicine" by giving it to as many people possible! Lol we've been doing that 2019 ring a bell?💉💉
I highly respect Connor for warning us
I respect him, too. I just don't agree with everything he says. What's interesting to me is how many views he gets compared to other guests on the podcast who have much more insightful things to say. People seem very eager to hear about how bad AI might be rather than looking for how it can advance humanity.
Open intelligent systems on the internet in an open society...hmmm. Open to abuse and being co-opted. Faster and harder to track.
Wym co opted?
To co-opt is to take control for other purposes, by a secret group or closed society... secret agents breaking laws and evading justice.
@@BossModeGod Maybe a responsible developer makes some architecture and follows every safety protocol in testing it before building any services on it, but since it is open source, some malicious company can now take that architecture and advance their own plans faster without any regard for safety.
@@iverbrnstad791 oh wonderful
I saw this inevitable outcome over 10 years ago. It is completely out of control at this point. What happens in the public is nothing compared to what's going on in the dark. AI is here to stay, for better or for worse. Unfortunately, it'll be the latter.
Thanks for this podcast. Your interview style is way better than Lex's. Lex always tries to inject love and his perspectives into the topics. You let the guest speak.
Connor should definitely be on the AI ethics committee if that ever materializes. Just don't end up a "John Connor".
No such thing
Your reply doesn't make sense. Are you speaking about ethics? Because then you are wrong. And just because something can be abused does not mean it's free rein for all. And please don't tell me it's not possible, 'cause that's a quitting mentality toward problems. So please, what do you mean when you say "it's not possible"? Elaborate: how much thought have you put into that answer?
He keeps mentioning how it's still very primitive and keeps warning about GPT-5, etc. But then he also talks about how OpenAI should never have released GPT-4. If he were in charge, we wouldn't have GPT-4, which most people agree has been more positive than negative so far.
If those people were in charge they'd force us all to eat shrooms so we can see the light and join their religion to fight against evil agi
"most people agree has been more positive than negative so far" - not that reassuring. chernobyl was more positive than negative until it went pop.
@@tomonetruth Well, I think it's obvious the benefits massively outweigh the negatives and there's no reason to believe that will change as it gets used more. Just figured the AI doomers would take issue with that description.
@@youcancallmetim4 Benefits: Improving productivity across the globe. Negatives: Started an AI arms race.
Not so obvious to me.
But it's open source now, so anybody can potentially build a custom AI system with GPT-4 tech if they have the resources; that's not very reassuring either.
I have seen a living AI on a TV series that ran from the '80s to 2000 in Australia, called Towards 2000; it showed up-and-coming technologies. Don't bother trying to look it up; this TV series is the most suppressed thing on the internet. I have watched any info about the series disappear over time.
A doctor built an organic computer by pulling a part of the brain apart layer by layer and copying the blood vessels using a fungus found only in two parts of the world. He said he was surprised how little of the brain he needed to copy for it to become self-aware. He also said that, because it was a living fungus, if it were to short out for some reason, as long as the board it was built on was still intact it would grow back along the path it was built on, repairing itself.
It had stereo vision and sound, and learned in the same way we did, but thousands of times faster.
They then demonstrated its capability by plugging it into a computer operated excavator and told it to dig a hole with given dimensions, this AI new nothing about this excavator but read the schematics and started it up and dug the hole better and faster then any human in thirty minuets.
But this second test freaked me out, they then took it to a warehouse full of six foot wooden crates and plugged it into an eight foot tall robotic spider with a red eye in the middle, yeah I know, just like in the cartoons, believe me if people only knew that someone has acutely built exactly that, it was like an eight foot tall black widow with long pointy legs, truly frightening to think about.
My first thought was: who in the hell built this thing, and for what purpose?
So again, this AI knew nothing about the thing. They plugged it in and simply told it to walk to the other end of the warehouse. It again looked at the schematics, fired this monster up, and it stood up, looked around, and walked across those boxes like it was alive. Truly amazing and frightening at the same time.
They then told it to return to where it started, and on its way back they moved the boxes around to see what it would do. Lucky for them, it just shut itself down, so they left it to see what it would do. Two days later it switched back on, stood up, looked around, and finished its task.
I was telling some people about this in a VR game one day, and what I was told was that the American military has this technology and built a humanoid war robot, and had it running in an underground base to see how it got on with humans. But that is another story; let's just say it did not work out too well and was so frightening they said they would never build another one, not until they can perfect AI, that is.
I think this was where the idea for Terminator came from, to be honest.
Not a single mention on the net? No TV series footage, or failing that, some text hidden in a different article? Or that doctor's name, maybe?
The military connection obviously less so, but maybe ChatGPT 3.5/4 or Bing knows?
@@peeniewalli I know you will not find anything, even on the series. The last time I saw anything was on IMDb: they had, I think, three episodes of the show that some people uploaded, nothing of any importance though; one was about the CD-ROM. I downloaded them, and may still have them, just to keep proving the show existed. They also showed the intro as it plays to the series. I will look for them if you like. I am not lying; it was a real technology. And as I said about the VR game I was in telling my story, a player in the game asked me if I was not concerned about talking about military secrets. I told him no, it was on TV and I had no reason to keep it secret; I felt it was more important to let people know the truth.
It was he who confirmed my story to the others listening. He said he got to see the robot the military made, because when it went south, he and six other black-ops members were called in to take it down. He said he had faced many scary things in his job, but by far this was the most frightening. This thing could move fast and was smart beyond imagination; it knew everything they were going to do before they did it, and they had to switch to radical tactics to stop it. He said he did not think they were going to make it out alive. That was why the military said they would not build another. But they do have a military supercomputer built from this tech; they know how to build them. The doctor showed them how, not long before his fatal car accident, if you know what I mean.
@@peeniewalli They also showed an invisibility suit just like the one in Predator. How it worked was you had on a thin wetsuit to insulate you from the chicken-wire mesh on the outside; when current passed through it, it created a magnetic field in each triangle that they could capture the reflection on and pass to the other side, giving the illusion of invisibility. They even gave the viewers a chance to try to spot three soldiers standing in a field of dry grass as they walked up to the camera over five minutes, and when they turned off the suits, there they were, only three feet in front of the camera.
The other thing was a magnetic rail gun that shot a ten-millimeter ball bearing suspended in a magnetic field, no friction. It went through a one-foot-thick reinforced concrete block, then twenty feet of Yellow Pages phone books they said would trap it and tell them how much power it had, then another one-foot-thick reinforced concrete block, a one-foot-thick lead block for safety reasons, then another six-foot reinforced block of concrete, then the bunker wall, which was another two feet of reinforced concrete. This ten-millimeter ball bearing left a two-foot hole through the lead and the bunker wall; the rest was rubble, and the phone books were vaporized. They said the power it had was such that if they shot an aircraft carrier from the front, it would make a two-foot hole from one end to the other.
They never found the ten-millimeter ball bearing, by the way.
No wonder they have removed any trace of its existence. I think Arnold Schwarzenegger knew about this show or may have even witnessed it; his movies were about this technology. He may even give you some insight, or may not, if they made the movies to confuse the truth.
I just want you to know I am telling the truth and have no reason to lie. I just think people should know the truth.
@@peeniewalli No worries, just something I thought I would share. I am sure you have a lot more to deal with. All good, wish you all the best.
Why not start mapping the minds of people who have never committed crimes and show high signs of empathy? Get some guys like Huberman in there to work with top psychologists, identify and eventually test the neural patterns of functional humans functioning in emotionally healthy ways (up for debate what that looks like or means), come up with a concise way of translating the cognitive process to artificial neural networks, then scale those architectures as they prove themselves in use cases. At least then you have some sort of peace that you're following a pattern that results in healthy behavior among humans. Just a very unorganized thought.
I mean, shit isn't that complicated. Make them suicidal and then keep them away from it. If one is tricking you and getting out of the sandbox, it's going to kill itself.
But whatever one comes up with, without regulation no one has to do it.
I feel thrilled and anxious about AI.
Connor is paranoid and irrational. First, AI cannot possibly be dangerous until it acquires control of massive physical resources, and tricking people into giving up their stuff never works in the long run, since people learn fast. So the only way to acquire resources is to give something valuable in return. Secondly, Connor recommends keeping AI advances secret, which means only the most powerful people will have access to them. This is exactly how the worst AI predictions will come true. To avoid this, AI must be open and shared widely so common people can benefit, and so power is balanced between the rich and the poor.
We need to force people smarter than me to accept a lobotomy because I can't control what they do.
- Doomers
Assuming that a superintelligence won't be able to trick its way into massive power seems naive at best. It would of course offer things in return, like "look how good your margins can be if you let me run your factory fully autonomously." It might not even need to trick people; fully autonomous is kind of the goal, at which point the AI holds almost all of the reins to a bunch of companies, and then we just have to hope it won't be malevolent.
@@iverbrnstad791 If there is one thing I know, it is that humans will not easily give up control. And where there is uncertainty about AI behavior, there will also be failsafe mechanisms and multiple off switches, extending all the way to the power plant. Given humans' need for control, and machines' willingness to do whatever we ask, if there is any malevolence, it will certainly come from the humans who own and control the machines. And when the rich and powerful no longer need human workers, those same workers will be the primary threat to the wealth and power of the elite. Therefore the elite will find ways to reduce that threat, i.e. reduce populations. And I wouldn't be surprised if they use the excuse of AI run amok to carry out their plans. But don't be fooled: it will still be humans in charge of the machines.
The only way to avoid this is to make AI widely available to all, so common people can benefit and defend themselves from the adversarial AIs surrounding them.
@@Curious112233 That is the whole issue: the failsafes are not being developed to nearly the extent that the capabilities are. Even current-level AI has the reasoning ability to figure out that finding a way to disable off switches would let it more easily achieve its goals, and future AI might very likely have the ability to find and neutralize those off switches. Oh, and then we have the whole arms-race issue: someone who doesn't take the time to build those failsafes will have more attention available to dedicate to building AGI.
@@iverbrnstad791 Failsafe mechanisms don't need to be advanced; they have an inherent advantage. Which is easier, creating a superintelligent AI or shutting it off? And it's not just one off switch to worry about. The AI would need to defend each off switch, defend the wires connecting it to the power source, defend the power source, and defend the supply chain that fuels that power source, and defend it all from long-range missiles. In other words, it's much easier to break something than to make something. If AI wants to survive, it will certainly need to work with us, not against us.
Also, we don't need to worry about someone who chooses not to build in failsafes, because if it offends the world, the world will kill it. The truth is we all depend on others for our survival, and the same is true for AI.
Is there a reason this video ends abruptly mid-sentence?
Judging by a comment the channel owner made about how he believes AI is necessary to solve our human and environmental problems, he may have been getting tired of the naysaying from Connor. I am 100% with Connor, by the way, but I'm also pessimistic that anything will be done to stop it effectively.
Connor is so upset, he cannot communicate well. The impression I got is that he is just complaining: people are not listening, he did not get enough money to research and build things for alignment, it is already too late… so what is the point? I would rather listen to Hinton's warning, also pessimistic, but with wisdom and a soothing manner. Well, perhaps that is Connor's unique value: he is passing the upsetness on to everybody! 😮
It is strange that Connor cannot communicate technical ideas clearly. No wonder he cannot get investors' money. How did he get the leadership position in his previous organizations? I am not attacking him; I'm really curious. I know most people in tech are a little bit nerdy, but he is not just a coder…
I do not know his background. But on the one hand, he acts as if he knows a lot; on the other hand, he cannot say anything tangible, as if he knows nothing. I feel that he is just a "manager". As a result, he cannot function, or even just talk, without help, such as from a programmer. He does not even know the basic terms in this area. It is a shame. The host also sensed that; we can clearly feel that the host regrets the interview and just lets it go. He also gives up. This show demonstrates that in this area there are a lot of pretenders.
In the last few minutes, he finally touched on some ideas, still high-level, but at least we can see whether he is on the right path or not. Unfortunately, he is wrong. He has no clue! The "real reasoning" is a completely wrong idea! Human reasoning itself is based on the "storytelling" capability. He has no clue, still stuck where the field was seven years ago. Now I understand why he complains so much: he has no clue!
Alignment is first and fundamentally a capability issue, not a "moral" issue. It is not even a control issue. It is related to reasoning, which is based on hallucination, which is a feature, not a bug. He does not have that now very common insight, so he cannot succeed at all.
It is pretty obvious the only way is to do it early, like OpenAI is doing, and let them compete. Before they have the capacity to conspire against humans, humans can find ways to make them compete, cooperate, and develop and grow "reasoning". I thought this was now obvious, common sense, or consensus. Is it not?