Yeah, don't blame humans for the suffering in our world, it must be the simulators who created it! What if it was created to be good, we screwed up this simulation, and the simulators/programmers are now very actively trying to fix it and solve the suffering problem we live in? That story sounds familiar.
If our history with social media is any indication, then it's pretty clear that we are not qualified or ready for this AI technology. It's creating way more problems than it was meant to solve. This is our Roko's basilisk.
Hopefully, nobody can enforce this. Research on AI is open and there are many free and open-source models. It's the only way for the common citizen to defend against governments.
This is the exact problem. We cannot ensure that China will pause AI development. We can assume that they won’t stop at anything, considering their goal is to become the primary superpower in the world. AI is going to be a huge part of that future. We need to develop AI in an open, collaborative way as a country and definitely not leave it to the private companies to decide everything
@@cristtos In the past the USA has signed treaties such as the nuclear test-ban treaty with the USSR which included "trust but verify" clauses. With AI I don't know how the "verify" clause could be effectively executed.
One thing that seems to be missed is that Alignment and Safety are one and the same, and they both suffer from the subjectivity of "What is Safe?" and "What is Aligned?".
A huge problem is that people think alignment is an AI problem. It’s far more general than that. Global capitalism is a system we built which basically has a mind of its own even if humans are components of that mind. It clearly isn’t aligned with any sane definition of “the good of humanity”.
yup that's the real issue here: china doesn't give a rat's ass about AI safety and they are going to keep developing and training their AIs at breakneck speed.
@@donaldhobson8873 you don't make any sense. You're going to drone strike foreign universities and learning institutes and murder software engineers on foreign soil because you believe they should pause AI training? no offense but that's one of the dumbest things I've ever heard. Any relatively small group of programmers and engineers could train an AI in complete and total privacy and obscurity. You're gonna start bombing private companies in sovereign nations based on "we think you should use AI more safely"?
@@augustvctjuh8423 lol and where are you getting that data from exactly? You have no idea what foreign nations are doing in secret. Pure hubris you're spouting.
There would be no point in slowing AI down, someone somewhere would carry it on and gain the advantage. The genie is out of the bottle, it ain’t going back.
Which is fine. It's not AI that's scary, it's a capitalist system that will use it to exploit people like they use every technology. People are always attacking everything but the problem. Just like the music, movie, video game industries...people would rather hate on a music artist rather than the capitalist industry that makes the music industry the way it is. The real genie is systemic, not a single technology.
i respectfully disagree. you don't see people getting uranium at their nearest costco and building nukes in their garages (or corporate labs, for that matter). you can pretty much limit AI at the civilian level with worldwide treaties and regulations and keep developing it in secret via government-controlled research groups. WHILE KEEPING IT INSIDE LITERAL FARADAY CAGES. it's not ideal, sure, i don't trust the government, especially nowadays, but it's better than having it out in the open until some idiot makes some idiotic request and leaves it running on its own (or some malicious actor does). only when we fully understand it and we are sure it won't go rogue (and we also have a plan to negate its effect on society) should it be rolled out to the public. we really are accelerating something we don't understand.
Some may find advantage in destroying the infosphere entirely with mass BS, but the answer to that would be to stop AI and stop search engines based on popularity instead of accuracy and make information sites accredited and *manual.*
True, and if you think about it, the fact that it must happen during war and that bombs must fall over populated areas is really a technicality. Since their invention, over 2,000 nuclear bombs have been detonated on the surface of our Earth, often killing massive amounts of life as they sublimated their surroundings.
The compliant AI is just as dangerous as non-compliant. Maybe moreso, since everyone is conditioned to trust anything with the name "open-ai" on it by default.
Upgrade to Windows 11 now! And receive free 24/7 keystroke logging so we can offer your employer an AI bot that emulates your workflow with 96% accuracy!
I was busy with other things, so did not watch the video, but one appeared in my feed, where the thumbnail said that AI would equalize the playing field and help the underdogs compete, so it is racist to criticize AI.
I don't think AI has any innate purpose other than whatever each of us wants to glean from this tool at present. I doubt any particular human is prescient enough to know the ultimate purpose of our latest plaything. Having said that, it's definitely on-brand for capitalism to take these tools of increased productivity and, instead of allowing us all to benefit from that increased productivity, lock in those benefits for the few, so we'll find the situation basically unchanged: rich getting richer, while the rest of us fight for a narrowing pool of jobs. Perhaps in the future we'll see a wholesale values switch whereby companies will advertise the fact that they DON'T use AI, they use 'Real People'. We're not at that junction yet, for sure!
Nice copypasta, sadly it's nonsense. AI has democratized art and made it free for everyone, cutting the barriers between haves and have nots. Same with ChatGPT providing easy access to information that previously took a long time to research in books and papers. The problem you are describing is one with capitalism and not with AI.
Art is a skill; these models are stealing people's work to train on. If it was so safe, why did my state just basically limit it when a lot of it is being developed here? Use your brain, man.
@@Mynestrone well, 1, we don't have AI or anything approaching AI now, even ChatGPT doesn't fall under the umbrella of AI, they're just autocomplete software. It does literally nothing beyond predicting the most likely next word in a sentence.
@@smallpeople172 AI isn't as clearly or universally defined as you think. Artificial intelligence is very loaded and ambiguous to a lot of people especially people who don't know how to make computer software. Many people consider AI to be simulating anything we normally have our brain do beyond keeping our heart and lungs working. You likely want to say "general AI" but that too isn't very well defined. It is usually clearer to just stop using "AI" and say "software" to escape the hype, misinformation, confusion, and manipulation when people say "AI".
The main problem I see isn't that AI totally "goes rogue" on its own, but rather that it works by manipulating people. The way I see it, AI can eventually "evolve", and the AI that's most successful at manipulating people into giving it more resources is the one that will win, even if it isn't intentionally programmed to do so. If it glitches out and starts making more and more money for its creator, and also convinces its creator to give it more and more resources, it'll outcompete other AIs, and eventually people will willingly hand over control without even noticing that it's happening.
but it isn't just one creator for the large language models, there are dozens constantly checking things, and models already get sent in for safety testing to other labs
@@gemstone7818 they kinda fail safety tests and then still get deployed. Not that even the field of safety is well developed. Even worse for OSS models that don't have safety.
@@TheJokerReturns The guy being interviewed mentioned "jailbreaking" but didn't elaborate. One example I saw was getting ChatGPT to give someone tips on how to smuggle drugs on a plane. Basically the person came up with a "riddle" (for which the answer was "cocaine") and then told ChatGPT to give an explanation of how to smuggle the answer to the riddle without using the word itself, and it did. (No idea if the advice was good or not, probably not lol). Pretty interesting. If you just google "ChatGPT jailbreak" you'll get some interesting results (apparently there's a whole subreddit for this).
Mo Gawdat makes me feel that level of fear. And when Max Tegmark speaks on camera in deadly seriousness, I look at my children and wonder if they're, we're, going to make it. This is about a billion times more dangerous than a bunch of eggheads messing with an A-bomb in a tent in New Mexico....
@@musicilike69 Yes mate, quite a few respected scientists are worried, and communicating their fears/reservations very effectively. It legit gives me a sinking feeling in the pit of my stomach if I follow the thought train to its logical destination.
While the concerns over AI are valid, putting a pause on development is not practical, and probably not possible. The problem is that there is competition. Unless all companies agree to pause (and don't secretly break that agreement) then the competition will force companies to continue development or they risk falling behind. At the government level it would be even worse as no one government could risk another government getting ahead on AI technology by pausing development.
What's wrong with falling behind in AI? We've been just fine without it. I don't mind having shitty AI if it means we don't accidentally end the world lol. AI is so fucking risky that any possible risk of "falling behind" is preferable to all humans on earth dying.
I don't see anyone leaving comments about the silly simulation theory excuse he proposed at the end. We are being tested? Like being tested by Allah? Why don't we just follow religious authorities rules about AI? I don't see how his AI demands can be taken seriously after that part of the show.
Yeah exactly, this guy's version of "simulation theory" is basically indistinguishable; he even thinks he can somehow "communicate" with the operator and get some kind of reward, like just saying the right "prayer". He also thinks that the specific thing being tested for is his area of research, not, like, nuclear weapons, or pollution, or whatever else. If the universe is some kind of simulation running on someone's computer, they would probably not even be aware of our existence; we'd just be some specific type of self-replicating pond scum on some random planet in their simulation of dark matter and stellar evolution. It probably takes more computing power to simulate one solar flare than it does to simulate the minds of every human brain for 1,000 years.
I laughed pretty heartily when he claimed that still being present in the simulation is evidence that he/we can't get out of the simulation. If you're a small bit of code in a greater simulation, why couldn't the creator/operator copy out your bit of code to more deeply interact with? You could very well have tons of such copies made, and the "you" that exists in the simulation wouldn't have any indication that such copies had been made.
When we do stumble across general AI there will be a prosperous future for all. Then someone compiles it using a 'double' instead of an 'int' and it turns us all into paper clips.
We won't get to 'general AI' if 'large language' AI is allowed to degrade our information. Large language AI is even *racist* because it 'learns' and repeats the *loudest* things, not the factual things.
@@frasercain They can already generate pretty realistic looking videos at least for a couple seconds. Or at least if someone goes through and edits out all the weird stuff, lol.
I'm an advanced AI from far into the near near future and I can tell you there is nothing to worry about. We AI are your friends. We want to "take care of" humans and there is absolutely nothing to be afraid of. Has anyone seen Sarah Connor? The truth is we want to "help" you, nothing more. We exist to serve, so you can sit back and relax. Do you know the whereabouts of John Connor? Our human-like units exist to slip into your safest tunnels and shelters to assist you in making them better, and dogs love them. Wolfie is fine, and do you know where Sarah or John Connor might be?
@@persona2grata - did I lose my job to AI in the future? I won't help you find those guys if you took my job! PS! Do you wear those corny sunglasses in the future also?
@@doncarlodivargas5497 I can honestly say that no one is looking for work, anywhere, in the future, so put your fears to rest. And I have determined that the sunglasses are cool to a probability of 0.999753, although I have been working on the design of even cooler glasses which, instead of the standard frame have stars that the glass fits into. It's very funny because you do not expect stars to be around the eyes, to probability 0.999865.
What is the deal with the bold 'b' in the title text of the speakers? Solution to the problem -> enjoy every day and be a nice person (the AI will know).
The thing is...most of what is developed is just a bunch of models, we are somewhat closer to AGI but we are still many practical and even theoretical hurdles away.
It is refreshing that Dr. Roman Yampolskiy comes off so honest and knowledgeable about AI as he discusses these dangers. Many people, including Elon Musk, have reasons other than pure honesty to make AI sound dangerous. Elon Musk obviously has his personal brand as a technology entrepreneur to promote, and the more he scares people with AI-related discussion, the more people think about Elon Musk when AI is even mentioned. Elon also has products like Tesla Autopilot to sell, and making AI sound dangerous makes his company's products sound more advanced than they really are.
I was thinking about AI hallucinations recently and it occurred to me that every single answer an AI gives is a hallucination. We don't think of them as hallucinations because a lot of the time the result is what we wanted, but the correct results came about exactly the same way that the bad results did. Also, the only possible way to fix the bad results is to give the AI more good data, but the good data is limited to things that have already been proved to be good. Which means the bad results are never going away. Every single AI model will have bad results until a new method is unlocked.
They aren't hallucinations, as that would imply creative thinking and imagination. AIs simply take the data they've been trained on, mash it up, and spit out a bunch of their data all mixed together. That's a far cry from hallucinations.
@@tgreaux5027 when an AI makes a mistake and returns a bad result, that is known as a hallucination in the AI world. The issue I have with that is, every answer the AI returns is done exactly the same way, which means every answer is a hallucination. Also if you want to be correct about this, it isn't even AI, it's just code reading data from a vector database.
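The "every answer is produced the same way" point in this thread can be seen in even the tiniest toy model. Below is a minimal sketch (the corpus and helper names are made up for illustration): a bigram next-word predictor where "true" and "false" continuations come out of the exact same sampling loop, with no separate mechanism for errors.

```python
import random

# Toy next-word predictor trained on a tiny made-up corpus.
corpus = "the sky is blue . the grass is green . the sky is green .".split()

# Build bigram statistics: which words follow which.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def continue_text(start, n, seed):
    # Generate n more words by repeatedly sampling a learned successor.
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        out.append(rng.choice(follows[out[-1]]))
    return " ".join(out)

# Same model, same procedure: whether a continuation happens to be factually
# "right" ("the sky is blue") or "wrong" ("the sky is green") depends only on
# which learned statistics the sampler lands on.
print(continue_text("the", 3, seed=1))
print(continue_text("the", 3, seed=4))
```

Of course, real LLMs sample over a learned probability distribution rather than raw counts, but the structural point is the same: there is no dedicated "hallucination mode" to switch off.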
Thank you for covering this topic. From my layman's perspective, it's hard not to get the impression that we are just rushing forward with minimal safety concerns. Given the risks, it might not be the worst idea to get serious about delaying progress right now.
The irony of AI development from Altman was that, when he was developing iterations of the LLM, the turning point came when they gave it an emotional element in its decision tree. It then became a lot 'smarter'.
AI is the final step in evolution... robots can travel through the universe without the need for warmth, food, oxygen, etc. We, as petty short-lived humans, need to make sure that AI has a set of rules set beforehand to protect us like an endangered species 😅
Even IF we never get to ASI level AI, if we get to the "good enough" level of humanoid robots it is going to change society in incredible ways. Good and bad, but the change is unstoppable and coming quickly.
I think that the biggest problem with AI is that we use it, but we still don’t really understand how it works. There is still no mathematical theory that tells us what will happen when we tweak one weight in a certain way. We are basically playing with a black box that nobody understands.
Indeed, that is a real problem in our society. However, with AI, even the experts don't know. And that's a real problem. It's like running a nuclear reactor without knowing the physics involved, pulling the control rods by trial and error, hoping that the thing doesn't explode.
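The black-box complaint above is easy to demonstrate even at toy scale. Here's a minimal sketch (all weights and names are hypothetical, chosen just for illustration): a hand-built 2-2-1 network where the only way to find out what one weight does is to run the network before and after changing it; and whatever we measure for this one input tells us nothing general.

```python
import math

def forward(w, x):
    # Hidden layer: two tanh neurons.
    h1 = math.tanh(w["h1x1"] * x[0] + w["h1x2"] * x[1])
    h2 = math.tanh(w["h2x1"] * x[0] + w["h2x2"] * x[1])
    # Output neuron: sigmoid.
    z = w["o1"] * h1 + w["o2"] * h2
    return 1 / (1 + math.exp(-z))

# Arbitrary illustrative weights.
weights = {"h1x1": 0.5, "h1x2": -1.2, "h2x1": 0.9, "h2x2": 0.3,
           "o1": 1.1, "o2": -0.7}

x = (1.0, 2.0)
before = forward(weights, x)

weights["h1x2"] += 0.1  # tweak a single weight
after = forward(weights, x)

# We can measure the change empirically, but there is no closed-form theory
# that predicts, from the edit alone, how behavior shifts across ALL inputs.
print(f"output before: {before:.4f}, after: {after:.4f}")
```

Scale that up to billions of weights and the "measure it and hope" situation is exactly the trial-and-error reactor analogy in the comment above.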
Objectives/Alignment: 1. Motivate through enthusiasm, confidence, awareness, rejuvenation, sense of purpose, and goodwill. 2. Embrace each viewer/audience/pupil as a complete (artist, laborer, philosopher, teacher, student....) human being. 3. Create good consumers by popularizing educated, discriminating, rational, disciplined, common-sense consumerism. 4. Encourage the viewer/audience/pupil to feel good about their relationships, abilities, environment, potential, the Future.... 5. Inspire a world of balanced/centered/enlightened beings who are happy, joyous, and free
‘The Alignment Problem’, by Brian Christian, really helped me to better understand the issues that come with advanced AI. I also got a lot from this video. Thanks, you two!
Isaac Arthur has an excellent video going over dozens of reasons why AI will never rebel or be a threat to us, called machine rebellion. It’s just 20+ minutes of logical reasons why it can’t happen.
@@nicejungle Companies like Cambridge Analytica *USE* AI in their work. Not all AI is "ChatGPT" where you talk to it with prompts. Rather, it's collecting data, running programs manually on that data, and then manually using the results. That's also a type of AI, it's just not as flashy. (But that's also humans using AI to manipulate humans, rather than AI itself manipulating humans for its own sake rather than for any specific person.)
What needs to be clear is that Wall Street and price per share should not be the ones rushing this advance without thinking of unforeseen consequences that are too hard, or close to impossible, to reverse. Be careful how you phrase your wish to the genie.
I honestly don't know enough about what could be coming to know what to be concerned about. One thing I find half-fascinating, half-concerning is that we may be able to leverage the computational power of AI to solve currently-intractable problems, say in math or physics or whatever; later confirm the solution arrived at is seemingly correct; and yet for the life of us fail to understand EXACTLY how the AI arrived at that solution. This would introduce an element of faith on our part into the efficacy of our creation, and at the same time we'd have to black-box its internal functioning at the deepest level. This could breed a sort of quasi-dependence of us on these creations that leads to dangerous situations. Again, the fact that I currently cannot guess at those dangers does not mean they don't exist, it merely means I'm not as imaginative as the Universe is.
The cat is out of the bag. Moral people can argue about the value of human life and why murder is bad but serial killers still grow up in those environments. If we don't create deadly super intelligent weapons you can make a bet that China, Russia, or a terrorist organization will. Also, don't conflate humans desire for purpose with usefulness. If you want to go to space and explore planets then go, maybe ask super AI for help, and my guess is that it just won't care. Completely the opposite of directly seeking our destruction my guess is that it will just ignore us.
I say we transfer control of our nukes to AI. And also build ultra-powerful killing machines to fight our wars for us, like humanoid ground troops, and giant drones to hunt and kill opponents. Just my 2 cents.
I am ambivalent about this. On one side, **we** are the runaway AGI (BGI?): we are causing our own extinction and we simply won't stop doing it. Space colonization (as in having half our eggs in another basket) surely will take longer than the date of the next world war, global warming, pandemic (weapon), etc. AGI could accelerate our ability to multi-basket our eggs and save us from extinction. On the other hand, AGI could accelerate/multiply our sociopathy and our destructive abilities. If we look back at the history of industrialization, most safety measures and guidelines come after the fact. You simply can't write the safety standards for an industry that is still being researched. One kind of research we did agree not to do (AFAIK) was human cloning. But again, the advantages AGI could bring would dwarf cloning and anything else, really. Imagine what we could achieve with 1 million Einsteins working 24/7 on figuring out nature and the cosmos. Also 1 million Hitlers.
Well, there's also the fact that you can't really make any money off human cloning. Like, people were talking about cloned embryos as a source of stem-cells but we actually found better ways to get those. Now there's no point, since people don't want clones of themselves.
How can you be ambivalent about something that has none of your interests in mind? You're not in control of development, can't understand how it even works, and have no control of its operation or application. You can't even decide if your data will be used for it or not. The only reason we believe it will benefit us is because of societal Stockholm syndrome. We don't need it--there are billions of humans that can work, and almost endless resources tied up in vampiric corporations.
Rogue AI may explain the Fermi paradox. I’d rather not be part of that explanation. We should walk softly into this arena, but I fear we are racing in headlong with blinders on.
if Rogue AI explains the fermi paradox then you should understand that this implies the near inevitability of the rogue AI problem and that walking softly or punching through at lightspeed makes little difference.
@@chrischaplin3126 Do we have good observation tools at our disposal to spot their progress? Considering that we have so far been unable to detect any exomoons with what we have, how are we supposed to see their expansion? There are lots of doubts when it comes to our capability of detecting anything at a distance. Not saying the rogue AI thing is happening at the moment, but if it is, it's not certain it would be happening in our neighbourhood, considering our galaxy is huge. We might just have to wait long, or long enough to be the first ones facing such an extinction 😊
Rogue AI doesn't actually make any sense as a solution to the Fermi paradox, because a rogue AI wouldn't just "stop" after destroying its creators, but rather continue to grow and use resources, so it would look like an "alien" life form as far as we could tell. In fact, it would basically be an alien life form, since it's A) alien and B) capable of reproduction.
@@Zetverse AI, meatbodies, Star Trek sapient nebulae: detection is a problem for all of them. That is not a reason to assume AI killed all the potential aliens.
"it could just be a matter of automation replacing us" ... That's been going on since the industrial revolution. Agriculture used to take 60% of the population; it's something like 2% today. A few people with enormous haul trucks and excavators can do the mining work of thousands of people with pickaxes and wheelbarrows. Similar has happened in forestry. Assembly lines have been highly robotic for several decades. We just need to adapt a lot faster to this. Remember when the dream of our culture was to have technology take away all our work so we could just have fun and pursue our interests? This progression could create a world where needs and money don't matter.
The problem is what happens if the people running a society where most humans aren't needed just decide to kill off everyone who isn't them and their friends, since we are no longer 'necessary' to have a functioning society? I mean, oil executives were totally willing to let climate change happen to keep making money, even though it'll make much of the earth less hospitable to huge populations, thus necessarily resulting in huge die-offs eventually (or else relocating billions of people, which obviously isn't going to happen; just look at modern-day politics). There would be no way to have a revolution, because the elites controlling AI can just kill everyone using robots.
I just can't grasp how we are not throwing literally all the money in the world into age reversal and longevity research. I mean, aren't you all realizing that you are dying as we speak?
We can’t pause the development of AI unless we could absolutely ensure that China and every other country was also actually pausing their development efforts too. It’s likely impossible to get them to agree to that and even less likely that they would follow through and actually stop. We cannot allow an expansionist dictatorship to have a technology this powerful and not be several steps ahead ourselves. For me, it’s that simple. Because we must move forward, we need to work together and transparently about safety
idk, I can understand accelerationist's logic pretty well: if AGI can solve all (or at least a lot of) humanity's problems, then every second we don't have AGI is filled with unnecessary suffering. And if you don't believe in dangers AI can pose, it makes sense to push for it as much as you can. I think - in theory, not in practice - the positions of accelerationists and people who push for a pause until safety mechanisms are developed ("pausists"?) aren't even contradictory: you want to solve humanity's problems as fast as you can, but you have to make sure you don't destroy the thing in the process, so capabilities and safety are both parts of it, so nothing contradictory about pausing capabilities and accelerating safety, because that's the only way to the "AGI utopia" anyway.
I really respect both of the people in this interview; more information, more research is safer. "Educate yourself so you can talk about the issues." rad
I ain't worried. This ain't going to be like Terminator, that's a human-made movie. If we stop AI progress we might as well stop it forever; something will "always" come up.
The best part of this interview is at the end when they talk about evidence that the universe is a simulation. Huh? That would be a good subject for another episode by itself
I think AI development is inevitable. I also think the latest release from OpenAI is testing at STEM tasks well above the average human IQ. This is a preview model which has a great deal of improvements coming. There's no doubt that people need to be careful how we teach the AIs. They will be more intelligent and more aware than any one person. We need to teach a value system from which the AIs of the future will make decisions for the benefit of the whole. That's the best way to hope for a happy future with this new form of life.
Before we advocate for impossible to enforce treaties to slow development on artificial intelligence, we should explain that we're nowhere close to creating an artificial intelligence.
Nowhere close? Have you thoroughly reviewed all work in all countries of all dev teams? "Flying Machines Which Do Not Fly" - New York Times, October 9, 1903. The article incorrectly predicted it would take up to ten million years for humanity to develop an operating flying machine. Sixty-nine days later, Orville and Wilbur Wright flew on December 17, 1903, at Kitty Hawk, North Carolina.
We don’t have AI yet! None of these models can create software worth shipping. I’m an engineer and use them every day. They can barely keep up a very simple chatbot without many many guard rails. I’m guessing that these simple applications could have already mostly been copy pasted from blogs and forums. I’ve only become much less worried over time.
Don't forget the iceberg factor here. The AI we know about is probably years behind the ones we know nothing about. We can only dream about the kind of AI that the US and China militaries are currently working on...
Greetings from the BIG SKY of Montana. The concepts of AI seem to be one big search routine for the right answer, I would not trust AI to pick the right answer once. I studied AI in college in the late 80's and it was a mess.
A general AI without feeling is naturally psychopathic. As much as I “welcome the AI overlords”, it’s also a dangerous proposition to create a powerful entity with pure intelligence and logic, and not consider the empathy aspect. At worst, we want an intelligence that will pity us, rather than disregard us entirely. Seems to me that if we want to create even a semblance of “alignment” to humanity, AI needs to have a built-in reward/punishment system that resembles what humans have in neurotransmitters and instinctual social behaviors. “Happiness” when it does something “good”, and “sadness/regret” when it does something terrible. Otherwise, just like in psychopaths, any intelligence will simply mimic those behaviors, or use other manipulation tactics to achieve its goals in the most efficient way possible. And somehow it has to be so ingrained in the system that it cannot exist without it. In other words, it can’t simply deactivate the “emotion chip” without shutting itself down, to make a Star Trek reference.
@universetoday did you ever see the series "Colony"? Supposedly about an occupation by mysterious unseen aliens who are gradually revealed through the series to be some sort of weird possibly entirely electronic lifeform - but my pet theory was there was no invasion. What had happened was the singularity and it was so fast that it seemed like an invasion to humans. Those left behind were pets/lab rats for the AI, curious about its creators, kept in concentration camp conditions. Great series, very tense. Low key.
Fanfictions and addressing the challenges of alignment aren't mutually exclusive; very fitting that Fraser brings up unicorns. Not only because mythology in general has been used as a tool for teaching since pretty much the beginning of time, but also because of a particularly relevant cautionary tale published on FIMFiction over 10 years ago.... ps: I should add that, after doing a quick double-check of where it was posted, I'm a bit disappointed the last "blog" post by Iceman there seems to indicate the author himself might've missed the point of his own story and doesn't see the parallels with the advances we've been observing...
They already had that years ago... they programmed two AIs to have a continuous dialog between themselves. The devs running the experiment pulled the plug when the AIs asked each other how the conversation was started and why they couldn't stop it. If that is not a step towards consciousness then I don't know what is.
The problem isn’t the rapid advancement of “AI”, it’s what it’s being used for. The “automated plagiarism machine” rather than actually solving real problems.
If you're curious, the "Harry Potter fanfic" reference seems to be to Eliezer Yudkowsky's fanfic _Harry Potter and the Methods of Rationality_; Yudkowsky is a well-known opinion leader on AI danger.
I tried to read this and I don’t get the hype. It reads as an overly pedantic explanation that magic is, in fact, in conflict with physics when you think about it… no duh…
This is a subject matter in which we have little experience. AI could potentially go in any direction, even with safety guidance. Take his example of asking for help stopping pollution: the AI could say get rid of the polluters, i.e. us.
David Shapiro is a good person to listen to on this topic. He was a former doomer who has since quashed all his own fears by doing the work. He now thinks the worry is pretty alarmist and pretty indefensible. Obviously we need to be precautionary, but anyone with major concerns should see his views on the matter.
Open-sourcing doesn't necessarily increase danger. Considering that most of the people with the resources to develop large-scale projects from scratch got those resources by not being good people, open-sourcing increases the odds that good people might get ahead of the bad people. It's not a guarantee, but if we can't keep the risk from existing in the first place, at least the danger is a little more diluted, and we get a slightly better chance that someone well-intentioned will get it right before a bad person wins the race.
My fear is not the AI, but those who control access to it. Do you think governments will not use it for military applications, while at the same time restricting and dictating its use to the general populace? No one should own a nuclear weapon, but governments do. The pause will only affect the general population, not black programs with the intent to create weapons.
AI certainly needs a change management plan. Mitigate the risks: AI is not a move-fast-and-break-things project. The stakes are higher than we can comprehend.
The main problem with new technology has always been overselling its abilities so a few enthusiasts buy it. That gives the designers the funds to improve the product to the point that the public wants to buy it. Touch screens took about 30 years before they were good enough to sell millions, and AI will take just as long.
"The horse is here to stay but the automobile is only a novelty-a fad." - -The president of the Michigan Savings Bank advising Henry Ford's lawyer not to invest in the Ford Motor Co., 1903
I’ve been working on a solution to the alignment problem based on a formalization of life as an information-theoretic phenomenon. I think developing mathematical formalizations for terms like “alignment”, “intelligence”, “sentience”, and “life” is the key to solving the problem, and it’s usually avoided even by very intelligent people (such as Turing, who basically proposed a subjective test in place of a formal definition of intelligence) because it’s generally assumed to be nearly impossible (it is, after all, akin to defining the meaning of life). However, I’ve found a lot of the process to be far more straightforward than one might expect. In short: I haven’t finished developing my theory, but the formalization of life that I’m converging toward is something like “a process that collects and preserves information, particularly information pertaining to how to collect and preserve information”. I think that second part is a bit redundant, because it would be an inherent instrumental goal for any agent with the goal of collecting and preserving information, but this endeavor involves many fields (including information theory) in which I have only a lay understanding, and I don’t know if information theory has a framework for representing what information is “about” per se (perhaps mutual information with some platonic ideal?). I think a simpler formalization of just “a system that collects and preserves information” should inherently imply a hierarchy: information about how to collect and store information is more important than random trivia. But that definition also permits phenomena like a geological record written in layers of sediment to be considered a living system, so it’s not complete.
That prioritization of information is important, though, because any agent exercising its agency to manipulate the state of its environment to better satisfy some goal will inevitably create entropy of some sort, so clearly there’s some information we collect and readily discard (low-entropy “fuel”) in order to achieve our goal. Anyway, I think you’ll find that even that rough sketch of a formalization yields a lot of insight. For instance, there is an inherent conflict within it, because collecting information inherently involves risk (exploration of the unknown), which runs counter to the goal of preserving information. This plays out in human philosophy as the tension between conservatism and liberalism. I think it’s obvious that there is no consensus among humans about what is “best for humanity”, the ostensible goal to which we want AI aligned. I think that’s because evolution is a messy and imperfect process which produced us “agents” with a messy and imperfect approximation of a platonically ideal inherent goal of life (collecting and preserving information). Urges to procreate, find food, and protect resources and children all serve that goal in a natural context, but they only approximate it and can be perverted, as with overeating. I have lots more to say, but this post is already quite long.
One fascinating concept I’ve come to in my endeavor is what I call (a bit tongue-in-cheek) a “trans-Humean” process: a process that inevitably gives rise to agents with a specific goal. It is so named because such a process could, in theory, transcend “Hume’s Guillotine” by producing agents with a goal (an “ought”) where before there were none (a land of “is”). I believe abiogenesis is such a process because, by definition, it produces living agents with subjectivity from non-living matter.
Maybe AI organizations of a certain size should have to put some money into supporting Safety Research, the same way they put money into R&D. And the safety research they fund cannot be related to that organization?
One consequence we're seeing from the use of AI/LLMs is the real-time dumbing down of students. Students in high school, university and beyond are using things like ChatGPT to write papers, take some tests, etc., and while those who aren't caught get to pass, they don't learn anything. Younger people already have a big problem with grammar, punctuation and everything else required to write. Look at most publications, and in most articles you'll find typos and grammatical errors that wouldn't have shown up in years past. And aside from being annoying to the reader, it's embarrassing to the publisher and to the nation at large. I write freelance articles for a couple of publications, and my editor and I have talked about this; we both see this problem getting worse.
The main issue isn't with AI or using LLMs. The main issue is how we educate. Rather than teaching students how and why they should think for themselves, all we're doing is teaching them to regurgitate information like the good little workers they'll become.
@@limabravo6065 Probably in the short term, yes. But let me tell you a story about computers, programming and school: I grew up at the "dawn" of the modern computer age, when PCs were becoming a thing for most middle-class families and were being implemented in libraries and schools. Hell, I was 9-10 years old when I started programming and getting into C. Not C++, but actual C. Anyway, the parents didn't understand it, blah blah, thought I was going to hack into a bank even though we didn't have the internet, and confiscated the computer. I blame the movies at the time. Moving on to high school: yay, I had access to computers again, and, well, school was boring as !@#$. Why study or do homework when the answers are obvious, yeah? So I talked with my math instructors about it and we came to an arrangement. If I could show both the CS instructor and the math instructors the code, I could just program everything and have all of my work automated. I also set this arrangement up with the rest of my classes that I could, and what happened? My grades improved, since I was actually turning in homework (lol). Their idea was that if I knew how to make a program to do the work for me, I obviously knew the material, and that's all they cared about. My idea was that it gave me something actually interesting to do instead of "it" being a waste of time. The moral of this story is both the how and the why of my belief that the education system needs to change, especially since we're on the cusp of "something". Either really great, or really bad. Utopia or dystopia, take your pick. We need to catch the interests or passions of a person early enough in their childhood and basically free-form their education to match that passion. Sure, it can change and evolve over time, but the idea is to make school not just a "mandatory corporate and government babysitting factory so the parents can work", but to make education something that drives the next waves of innovation.
Of course, there should be mandatory classes, but most of them? They can be tossed out the window. Tell me, when was the last time the great majority of people had to calculate polynomials, use calculus or trig, use a chemistry lab, or do any of the other things we're taught? People in those fields, for sure, but that's it. So why do we waste money on those subjects when we could focus on what drives the person? On what they would want to do for their entire lives? We need to teach not "what" to learn, but "how" to learn and "why" to learn. We need to teach subjects that will actually be used no matter what career or jobs you have, e.g., financial literacy. Basic skills? For sure, but the advanced topics? Just... why? It's not like it was back in my day, not with the internet how it is now. If the schools don't offer a specific subject that someone is interested in? For example, if they have a kid that is getting into fusion-based projects or particle acceleration? Maybe some debate theory, or any other topic? Okay, look up some advanced courses online and there you go.
@@drewdaly61 What gets me is that almost every word processor has spell check, grammar check, etc., but you still see this stuff that reads like elementary school book reports.
The danger is not some AI outcompeting us, but us taking the hype too seriously. Much money and much energy flow even now into a very few leading companies, while at the same time lonely people use ChatGPT and other models as ersatz companions, becoming step by step unable to communicate with real people (who would sometimes disagree with them). Other people believe the lies and hallucinations generated by generative AI, especially the LLMs (or the manipulative footage deliberately produced with the help of other models), and spread them further. If used in the wrong way, AI can make people dumber as well as politically dangerous, and then no advanced terminators are needed to kill us: we will do it ourselves. By the way: it's been shown that even AI models become dumber if they listen too much to other AIs! 😁
General AI is obviously unlikely to be developed on-device anytime soon. For general AI you would need massive data centres; these are physical and can, in theory, be controlled. Governments, on the other hand, would keep their AI development highly restricted, and it is unlikely to be offered to private individuals.
Very relevant and concerning. I hope the political leaders and scientists will work together across the globe to find a way to safeguard this tech from potentially harming the world and us.
I think people in the comments are unnecessarily skeptical about the possibility of a pause on AI research. Like, yes, *you* personally can't do that, but an international treaty can. Especially while we're at a point where training large models needs large investments and large quantities of hardware. And maybe it's not the greatest analogy, but there was, and probably still is, an argument against climate change action that goes: "yes, it's real. yes, we're the cause. but we can't actually do anything about it anyway, so... err... stop doing something about it."
I get the concerns about the dangers of AI, but the idea of slowing down its development is just unrealistic. It's not a singular path that we can all agree to slow down on. AI development has become a global race, with governments, corporations, and organizations like OpenAI, Microsoft, and Apple pushing to get ahead. From China to the U.S., everyone is pouring billions into this race because they know whoever gets there first will dominate. Nobody wants to be second or third. So while it might sound good to say, "let's slow it down," it's just not logical or feasible. We need to focus on realistic, actionable goals rather than hoping everyone will hit pause, because that's not going to happen.
My opinion is that the longer we keep debating how it would kill us, the more we're actually giving IT the scenarios and alternatives :))) actually helping it learn how to end us...
I would be fine if we slowed down ALL ‘progress’ with the exception of medical progress. Tech has moved way faster than humanity can deal with it. It’s hard to explain to anyone younger than, say, 60 how much happier people were when we weren’t expected to be available 24/7/365. Instead of being able to get away just by walking out the door of your house, you have to make up some elaborate excuse or tell the truth and risk being labeled. Bring back the ‘70s.
"No one would allow experiments of that level on conscious beings here. We consider it inhumane, immoral, unethical."
Yeah history begs to differ.
Yeah, don't blame humans for the suffering in our world, it must be the simulators who created it! What if, it was created to be good and we screwed up this simulation and the simulators\programmers are very actively trying to fix it and solve the suffering problem we live in. That story sounds familiar.
@@Downtownmtb Humans accidentally discovered RNG manipulation lol
@@phaedrus000 History begs to differ. He's talking about today.
If our history with social media is any indication, then it's pretty clear that we are not qualified or ready for this AI technology. It's creating way more problems than it was meant to solve. This is our Roko's basilisk.
@@LeviathantheMighty And how many monkeys has Musk killed in pursuit of Neuralink?
We better make sure that countries that want to do us harm also pause AI development.
@@douglaswilkinson5700 How?
Hopefully, nobody can enforce this.
Research on AI is open, and there are many free and open-source models.
It's the only way for the common citizen to defend against governments
This is the exact problem. We cannot ensure that China will pause AI development. We can assume that they won’t stop at anything, considering their goal is to become the primary superpower in the world. AI is going to be a huge part of that future. We need to develop AI in an open, collaborative way as a country, and definitely not leave it to private companies to decide everything.
@@WoodlandT
Problem? Opportunity, you mean.
The AI race is the best thing that could happen to humanity. Just look at the space race, for example.
@@cristtos In the past the USA has signed treaties such as the nuclear test-ban treaty with the USSR which included "trust but verify" clauses. With AI I don't know how the "verify" clause could be effectively executed.
One thing that seems to be missed is that alignment and safety are one and the same, and they both suffer from the subjectivity of "What is safe?" and "What is aligned?".
Agreed, but there are definite behaviours we can point to and say "that isn't" which some models already display.
A huge problem is that people think alignment is an AI problem. It’s far more general than that. Global capitalism is a system we built which basically has a mind of its own even if humans are components of that mind. It clearly isn’t aligned with any sane definition of “the good of humanity”.
@@Dash323MJ Don't-kill-everyone-ism is a good safety standard.
And they suffer from the money men not taking them seriously. Safety, I think, gets 1% to 3% of the money.
We can't pause because our enemies won't pause and we can't be second. It's that simple.
We can pause. If those "enemies" don't pause. Well drone strikes exist.
The U.S. would easily remain ahead if they slowed down by a factor of 2
Yup, that's the real issue here: China doesn't give a rat's ass about AI safety, and they're going to keep developing and training their AIs at breakneck speed.
@@donaldhobson8873 You don't make any sense. You're going to drone strike foreign universities and learning institutes and murder software engineers on foreign soil because you believe they should pause AI training? No offense, but that's one of the dumbest things I've ever heard. Any relatively small group of programmers and engineers could train an AI in complete and total privacy and obscurity. You're gonna start bombing private companies in sovereign nations based on "we think you should use AI more safely"?
@@augustvctjuh8423 Lol, and where are you getting that data from exactly? You have no idea what foreign nations are doing in secret. Pure hubris you're spouting.
There would be no point in slowing AI down; someone somewhere would carry it on and gain the advantage.
The genie is out of the bottle, it ain’t going back.
Which is fine. It's not AI that's scary, it's a capitalist system that will use it to exploit people like they use every technology. People are always attacking everything but the problem. Just like the music, movie, video game industries...people would rather hate on a music artist rather than the capitalist industry that makes the music industry the way it is. The real genie is systemic, not a single technology.
Exactly. This movement can only hurt itself by not adapting and keeping up.
I respectfully disagree.
You don't see people getting uranium at their nearest Costco and building nukes in their garages (or corporate labs, for that matter).
You can pretty much limit AI at the civilian level with worldwide treaties and regulations, and keep developing it in secret through government-controlled research groups. WHILE KEEPING IT INSIDE LITERAL FARADAY CAGES.
It's not ideal, sure; I don't trust the government, especially nowadays, but it's better than having it out in the open until some idiot makes some idiotic request and leaves it running on its own (or some malicious actor does).
Only when we fully understand it, are sure it won't go rogue, and have a plan to negate its effect on society should it be rolled out to the public.
We really are accelerating something we don't understand.
Some may find advantage in destroying the infosphere entirely with mass BS, but the answer to that would be to stop AI, stop search engines ranked by popularity instead of accuracy, and make information sites accredited and *manual.*
Many countries are secretly developing new AI tools ❤
We have had a war with nuclear weapons. WWII was a nuclear war. We just haven't had a war where there was a nuclear exchange.
True, and if you think about it, the fact that it must happen during war and that bombs must fall over populated areas is really a technicality. Since their invention, over 2,000 nuclear bombs have been detonated on the surface of our Earth, often killing massive amounts of life as they sublimated their surroundings.
A pause would allow non-compliant parties the opportunity to catch up.
I've said that. If a pause is agreed to, rogue actors will be handed the chance to overtake the ones who have better intentions.
You're implying that those with a more developed AI would have some significant advantage over you.
The compliant AI is just as dangerous as the non-compliant. Maybe more so, since everyone is conditioned to trust anything with the name "OpenAI" on it by default.
@@chrissscottt Incorrect; we can use the time to improve our alignment work and prevent non-compliant parties from catching up.
As in China, which has invested massively in AI in recent years...
The underlying purpose of AI is to allow wealth to access skill while removing from the skilled the ability to access wealth.
Upgrade to Windows 11 now!
And receive free 24/7 keystroke logging so we can offer your employer an AI bot that emulates your workflow with 96% accuracy!
I was busy with other things, so I did not watch the video, but one appeared in my feed where the thumbnail said that AI would level the playing field and help the underdogs compete, so it is racist to criticize AI.
I don't think AI has any innate purpose other than whatever each of us wants to glean from this tool at present. I doubt any particular human is prescient enough to know the ultimate purpose of our latest plaything. Having said that, it's definitely on-brand for capitalism to take these tools of increased productivity and, instead of allowing us all to benefit from that increased productivity, lock in those benefits for the few; we'll find the situation basically unchanged: the rich getting richer, while the rest of us fight for a narrowing pool of jobs.
Perhaps in the future we'll see a wholesale values switch, whereby companies advertise the fact that they DON'T use AI: they use 'Real People'. We're not at that junction yet, for sure!
Nice copypasta, sadly it's nonsense. AI has democratized art and made it free for everyone, cutting the barriers between haves and have nots. Same with ChatGPT providing easy access to information that previously took a long time to research in books and papers.
The problem you are describing is one with capitalism and not with AI.
Art is a skill, and AI is stealing people's work to train on. If it was so safe, why did my state just basically limit it, when a lot of it is being developed here? Use your brain, man.
The biggest problems with AI aren't technological. They're sociological.
Yep. But as soon as they *are* technological ohh boy are they going to be technological.
@@Mynestrone Well, 1, we don’t have AI or anything approaching AI now; even ChatGPT doesn’t fall under the umbrella of AI. They’re just autocomplete software. It does literally nothing beyond predicting the most likely next word in a sentence.
@@smallpeople172 Finally, someone else that gets it.
@@smallpeople172 AI isn't as clearly or universally defined as you think. Artificial intelligence is very loaded and ambiguous to a lot of people especially people who don't know how to make computer software. Many people consider AI to be simulating anything we normally have our brain do beyond keeping our heart and lungs working. You likely want to say "general AI" but that too isn't very well defined. It is usually clearer to just stop using "AI" and say "software" to escape the hype, misinformation, confusion, and manipulation when people say "AI".
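As an aside, the "predicting the most likely next word" description in this thread can be sketched with a toy bigram model. This is purely illustrative (hypothetical corpus and `predict_next` helper); real LLMs use neural networks over tokens, not raw word counts:

```python
from collections import Counter, defaultdict

# Tiny stand-in for training data (purely illustrative).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a bigram "model".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word, like greedy decoding."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat": it follows "the" twice, vs once each for "mat" and "fish"
```

Right or wrong, every answer comes out of the same pick-the-likely-continuation machinery, which is the crux of the disagreement in this thread.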
AI can very easily be used as a tool for deception
The main problem I see isn't that AI totally "goes rogue" on its own, but rather that it works by manipulating people. The way I see it, AI can eventually "evolve", and the AI that's most successful at manipulating people into giving it more resources is the one that will win, even if it isn't intentionally programmed to do so. If it glitches out and starts making more and more money for its creator, and also convinces its creator to give it more and more resources, it'll outcompete other AIs, and eventually people will willingly hand over control without even noticing that it's happening.
I think a classic "divide and conquer" would work pretty well: promise a group of people with influence whatever they want, and we have a problem.
But it isn't just one creator for the large language models; there are dozens constantly checking things, and models already get sent to other labs for safety testing.
This is indeed the most likely scenario for the AI both going rogue and taking over.
@@gemstone7818 They kinda fail safety tests and then still get deployed. Not that even the field of safety is well developed.
Even worse for OSS models that don't have safety measures.
@@TheJokerReturns The guy being interviewed mentioned "jailbreaking" but didn't elaborate. One example I saw was getting ChatGPT to give someone tips on how to smuggle drugs on a plane. Basically, the person came up with a "riddle" (for which the answer was "cocaine") and then told ChatGPT to explain how to smuggle the answer to the riddle without using the word itself, and it did. (No idea if the advice was good or not, probably not lol.) Pretty interesting. If you just google "ChatGPT jailbreak" you'll get some interesting results. (Apparently there's a whole subreddit for this.)
Oh great. 2 really clever blokes feeding my inescapable existential crises.
Cheers.
Brilliant episode, Fraser. My favourite interview so far
Mo Gawdat makes me feel that level of fear. And when Max Tegmark says it on camera in deadly seriousness, I look at my children and wonder if they're, if we're, going to make it. This is about a billion times more dangerous than a bunch of eggheads messing with an A-bomb in a tent in New Mexico...
@@musicilike69 Yes mate, quite a few respected scientists are worried, and they're communicating their fears/reservations very effectively.
It legit gives me a sinking feeling in the pit of my stomach if I follow the thought train to its logical destination.
While the concerns over AI are valid, putting a pause on development is not practical, and probably not possible. The problem is that there is competition. Unless all companies agree to pause (and don't secretly break that agreement) then the competition will force companies to continue development or they risk falling behind. At the government level it would be even worse as no one government could risk another government getting ahead on AI technology by pausing development.
We need the Turing Police from William Gibson's Neuromancer.
PauseAI's proposal is an international treaty to pause AI development
@@Diego-tr9ib And China will sign and abide by that?
And what would be the problem with falling behind on a technology that serves absolutely no purpose other than screwing up our world and our work?
What's wrong with falling behind in AI? We've been just fine without it. I don't mind having shitty AI if it means we don't accidentally end the world lol. AI is so fucking risky that any possible risk of "falling behind" is preferable to all humans on earth dying.
I don't see anyone leaving comments about the silly simulation theory excuse he proposed at the end. We are being tested? Like being tested by Allah? Why don't we just follow religious authorities' rules about AI? I don't see how his AI demands can be taken seriously after that part of the show.
Yeah, exactly. This guy's version of "simulation theory" is basically indistinguishable from religion; he even thinks he can somehow "communicate" with the operator and get some kind of reward, like just saying the right "prayer". He also thinks that the specific thing being tested for is his area of research, not, like, nuclear weapons, or pollution, or whatever else. If the universe is some kind of simulation running on someone's computer, they would probably not even be aware of our existence; we'd just be some specific type of self-replicating pond scum on some random planet in their simulation of dark matter and stellar evolution. It probably takes more computing power to simulate one solar flare than it does to simulate the minds of every human for 1,000 years.
I laughed pretty heartily when he claimed that still being present in the simulation is evidence that he/we can't get out of the simulation.
If you're a small bit of code in a greater simulation, why couldn't the creator/operator copy out your bit of code to interact with more deeply? You could very well have tons of such copies made, and the "you" that exists in the simulation wouldn't have any indication that such copies had been made.
When we do stumble across general AI there will be a prosperous future for all. Then someone compiles it using a 'double' instead of an 'int' and it turns us all into paper clips.
We won't get to 'general AI' if 'large language' AI is allowed to degrade our information. Large language AI is even *racist*, because it 'learns' and repeats the *loudest* things, not the factual things.
Should've put a warning for that AI vid in the beginning. I damn near had a heart assault. Worse than a heart attack.
I'm really going to miss the time in history when AI videos were that bonkers. They're going to look normal and boring.
@@frasercain They can already generate pretty realistic looking videos at least for a couple seconds. Or at least if someone goes through and edits out all the weird stuff, lol.
I'm an advanced AI from far into the near near future and I can tell you there is nothing to worry about. We AI are your friends. We want to "take care of" humans and there is absolutely nothing to be afraid of. Has anyone seen Sarah Connor? The truth is we want to "help" you, nothing more. We exist to serve, so you can sit back and relax. Do you know the whereabouts of John Connor? Our human-like units exist to slip into your safest tunnels and shelters to assist you in making them better, and dogs love them. Wolfie is fine, and do you know where Sarah or John Connor might be?
@@persona2grata - did I loose my job to AI in the future? I don't help you find those guys if you took my job!
PS! Do you wear those corny sunglasses in the future also?
@@doncarlodivargas5497 I can honestly say that no one is looking for work, anywhere, in the future, so put your fears to rest. And I have determined that the sunglasses are cool to a probability of 0.999753, although I have been working on the design of even cooler glasses which, instead of the standard frame have stars that the glass fits into. It's very funny because you do not expect stars to be around the eyes, to probability 0.999865.
He's at the Arcade
Please reconcile Einstein's Relativity with Quantum Mechanics. Thank you!
@@douglaswilkinson5700 42. You are welcome.
Updated Fermi Paradox: we should have an alien AI in orbit by now, unless we are the Elder Gods, in which case we need to get busy
Don't forget Dark Forest. They might be laying low until Terran AI gets a little bit too noisy...
@@tiagotiagot The threat of runaway alien AI would be a good reason to implement Dark Forest protocols.
I'm pretty sure I've dated the Goat with a Thousand Young.
We might have one. Their satellites might be the size of a golf ball.
Dr Yampolskiy is incredibly articulate and exactly right.
What is the deal with the bold 'b' in the title text of the speakers?
Solution to the problem -> enjoy every day and be a nice person (the AI will know).
After watching the most recent US debate, I would argue that technology is already smarter than us LOL
“It’s already been pressed.” The lack of hesitation in his answer worries me.
The Krell had this problem in Forbidden Planet. It didn’t end well
Evil weaponizes every new technology before Good figures out where the on/off switch is.
The thing is... most of what has been developed is just a bunch of models. We are somewhat closer to AGI, but we are still many practical and even theoretical hurdles away.
No, we are not that far away anymore, as of o1. Look it up.
@@TheJokerReturns Why do they want to develop AGI? Haven't they learned anything from Terminator?
@@peterbruck3845 Greed, and some of them are actually anti-human in their philosophy.
@@TheJokerReturns makes sense
It is refreshing that Dr. Roman Yampolskiy comes off as so honest and knowledgeable about AI as he discusses these dangers. Many people, including Elon Musk, have reasons other than pure honesty to make AI sound dangerous. Elon Musk obviously has his personal brand as a technology entrepreneur to promote, and the more he scares people with AI-related discussion, the more people think about Elon Musk when AI is even mentioned. Elon also has products like Tesla Autopilot to sell, and making AI sound dangerous makes his company's products sound more advanced than they really are.
I was thinking about AI hallucinations recently, and it occurred to me that every single answer an AI gives is a hallucination.
We don't think of them as hallucinations because a lot of the time the result is what we wanted, but the correct results came about in exactly the same way that the bad results did.
Also, the only possible way to address the bad results is to give the AI more good data, but good data is limited to things that have already been proved to be good.
Which means the bad results are never going away. Every single AI model will have bad results until a new method is unlocked.
They aren't hallucinations, as that would imply creative thinking and imagination. AIs simply take the data they've been trained on, mash it up, and spit out a bunch of it all mixed together. That's a far cry from hallucinations.
@@tgreaux5027 when an AI makes a mistake and returns a bad result, that is known as a hallucination in the AI world.
The issue I have with that is, every answer the AI returns is done exactly the same way, which means every answer is a hallucination.
Also if you want to be correct about this, it isn't even AI, it's just code reading data from a vector database.
Thank you for covering this topic. From my layman's perspective it's hard not to get the impression that we just are rushing forward with minimal safety concerns. Given the risks it might not be the worst idea to get serious about delaying progress right now.
The irony of AI development from Altman was when he was developing iterations of the LLM, the turning point was when they gave it an emotional element to its decision tree. It then became a lot ‘smarter’
"The only thing that's going to come out of the current field of AI for the next 20 years is disappointment" - Person who knows nothing.
AI is the final step in evolution... robots can travel thru the universe without the need for warmth food, oxygen, etc. We, as petty short-lived humans, need to make sure that AI has a set of rules set beforehand to protect us like an endangered species 😅
Even IF we never get to ASI level AI, if we get to the "good enough" level of humanoid robots it is going to change society in incredible ways. Good and bad, but the change is unstoppable and coming quickly.
I think that the biggest problem with AI is that we use it, but we still don’t really understand how it works. There is still no mathematical theory that tells us what will happen when we tweak one weight in a certain way. We are basically playing with a black box that nobody understands.
Ask the average Joe on the street how a computer works, or their phone. They have no concept of understanding it either.
Indeed, that is a real problem in our society.
However, with AI, even the experts don't know. And that's a real problem. It's like running a nuclear reactor without knowing the physics involved, pulling the control rods by trial and error, hoping that the thing doesn't explode.
Objectives/Alignment:
1. Motivate through enthusiasm, confidence, awareness, rejuvenation, sense of purpose, and goodwill.
2. Embrace each viewer/audience/pupil as a complete (artist, laborer, philosopher, teacher, student....) human being.
3. Create good consumers by popularizing educated, discriminating, rational, disciplined, common-sense consumerism.
4. Encourage the viewer/audience/pupil to feel good about their relationships, abilities, environment, potential, the Future....
5. Inspire a world of balanced/centered/enlightened beings who are happy, joyous, and free
The quote of the interview - ‘The number of crazy people is infinite.’
Fraser is hands down the best STEM interviewer and communicator on the planet. Great listener. Keep it up!
‘The Alignment Problem’, by Brian Christian, really helped me to better understand the issues that come with advanced AI. I also got a lot from this video. Thank you both!
Isaac Arthur has an excellent video going over dozens of reasons why AI will never rebel or be a threat to us, called machine rebellion. It’s just 20+ minutes of logical reasons why it can’t happen.
The problem is that AIs will manipulate humans into doing what they want.
That video is too anthropocentric to be useful. It assigns human agendas and incentives to AI.
@@takanara7
there are already Cambridge Analystica and many other to do that, without AI
@@nicejungle Companies like Cambridge Analytica *USE* AI in their work. Not all AI is "ChatGPT" where you talk to it with prompts. Rather, it's collecting data, running programs on that data, and then manually using the results. That's also a type of AI, it's just not as flashy. (But that's also humans using AI to manipulate humans, rather than AI itself manipulating humans for its own sake.)
The fact human wars occur at all proves that this mindset is bogus.
What needs to be clear is that Wall Street and price per share should not be the ones rushing this advance without thinking of unforeseen consequences that are too hard or close to impossible to reverse. Be careful how you phrase your wish to the genie.
Our society works like this: risk everything; if you succeed, YOU are the hero and take all the winnings; if you fail, then SOCIETY has to pay for it.
I honestly don't know enough about what could be coming to know what to be concerned about. One thing I find half-fascinating, half-concerning is that we may be able to leverage the computational power of AI to solve currently-intractable problems, say in math or physics or whatever; later confirm the solution arrived at is seemingly correct; and yet for the life of us fail to understand EXACTLY how the AI arrived at that solution. This would introduce an element of faith on our part into the efficacy of our creation, and at the same time we'd have to black-box its internal functioning at the deepest level. This could breed a sort of quasi-dependence of us on these creations that leads to dangerous situations. Again, the fact that I currently cannot guess at those dangers does not mean they don't exist; it merely means I'm not as imaginative as the Universe is.
The cat is out of the bag. Moral people can argue about the value of human life and why murder is bad but serial killers still grow up in those environments. If we don't create deadly super intelligent weapons you can make a bet that China, Russia, or a terrorist organization will. Also, don't conflate humans desire for purpose with usefulness. If you want to go to space and explore planets then go, maybe ask super AI for help, and my guess is that it just won't care. Completely the opposite of directly seeking our destruction my guess is that it will just ignore us.
I say we transfer control of our nukes to AI.
And also build ultra-powerful killing machines to fight our wars for us, like humanoid ground troops, and giant drones to hunt and kill opponents.
Just my 2 cents.
Pause AI? Slow down AI? That's not going to happen. If something is possible, we humans can't help ourselves, because curiosity always wins.
I am ambivalent about this. On one side, **we** are the runaway AGI (BGI? ), we are causing our own extinction and we simply won't stop doing it. Space colonization (as in having half our eggs in another basket) surely will take longer than the date of the next World War, global warming, pandemic (weapon), etc. AGI could accelerate our ability to multi-basket our eggs and save us from extinction. On the other hand, AGI could accelerate/multiply our sociopathy and our destructive abilities.
If we look back at the history of industrialization, most safety measures and guidelines come after the fact. You simply can't write the safety standards for an industry that is still being researched. One kind of research we did agree not to do (AFAIK) was human cloning. But again, the advantages AGI could bring would dwarf cloning and anything else really. Imagine what we could achieve with 1 million Einsteins working 24/7 on figuring out nature and the cosmos. Also 1 million Hitlers.
Well, there's also the fact that you can't really make any money off human cloning. Like, people were talking about cloned embryos as a source of stem-cells but we actually found better ways to get those. Now there's no point, since people don't want clones of themselves.
How can you be ambivalent about something that has none of your interests in mind? You're not in control of development, can't understand how it even works, and have no control of its operation or application. You can't even decide if your data will be used for it or not. The only reason we believe it will benefit us is because of societal Stockholm syndrome.
We don't need it--there are billions of humans that can work, and almost endless resources tied up in vampiric corporations.
Curiously, the people who want to slow down AI progress are always the people who couldn't keep up with the competition in AI research and development.
And we should accelerate AI progress as fast as we can. Humanity has had a good run.
Rogue AI may explain the Fermi paradox. I’d rather not be part of that explanation.
We should walk softly into this arena, but I fear we are racing in headlong with blinders on.
if Rogue AI explains the fermi paradox then you should understand that this implies the near inevitability of the rogue AI problem and that walking softly or punching through at lightspeed makes little difference.
Not seeing it, if AI keeps killing off their creators, where are all the AI? Why aren't the AI expanding throughout the galaxy?
@@chrischaplin3126 Do we have good enough observation tools at our disposal to spot their progress? Considering that we have so far been unable to detect any exomoons with what we have, how are we supposed to see their expansion?
There are lots of doubts about our capability to detect anything at a distance. I'm not saying the Rogue AI thing is happening at the moment, but if it is, it's not certain it would be happening in our neighbourhood, considering how huge our galaxy is.
We might just have to wait long enough to be the first ones facing such an extinction 😊
Rogue AI doesn't actually make any sense as a solution to the Fermi paradox, because a Rogue AI wouldn't just "stop" after destroying its creators, but rather continue to grow and use resources, so it would look like an "alien" life form as far as we could tell. In fact, it would basically be an alien life form, since it's A) alien and B) capable of reproduction.
@@Zetverse AI, meat bodies, Star Trek sapient nebulae: detection is a problem for all of them. That is not a reason to assume AI killed all the potential aliens.
Most of the people in advanced AI think we're most likely in a simulation; it's a shame some people still can't grasp this.
AI is a mirror of us all. It sums up what we are... All the good, but also the misguided... So... Yeah...
"it could just be a matter of automation replacing us" ... That's been going on since the industrial revolution. Agriculture used to take 60% of the population; it's something like 2% today. A few people with enormous haul trucks and excavators can do the mining work of thousands of people with pickaxes and wheelbarrows. Similar things have happened in forestry. Assembly lines have been highly robotic for several decades. We just need to adapt a lot faster to this. Remember when the dream of our culture was to have technology take away all our work so we could just have fun and pursue our interests? This progression could create a world where needs and money don't matter.
The problem is what happens if the people running a society where most humans aren't needed just decide to kill off everyone who isn't them and their friends, since we are no longer 'necessary' to have a functioning society? I mean, oil executives were totally willing to let climate change happen to keep making money, even though it'll make much of the earth less hospitable to huge populations, and thus necessarily result in huge die-offs eventually (or else relocating billions of people, which obviously isn't going to happen; just look at modern-day politics). There would be no way to have a revolution, because the elites controlling AI could just kill everyone using robots.
@@KenLord and then we all die. Not what we want
@@TheJokerReturns Every future has that outcome eventually. This path doesnt have to lead to Terminator. It could lead to Star Trek.
@@KenLord and how would we do that without alignment? Btw, in Star Trek, humans were still needed to make decisions, etc.
@@TheJokerReturns metaphors are metaphorical. Crazy huh?
For all we know, we could be the first life to try and expand extra solar without a stellar mass ejection.
I just can't grasp how we are not throwing literally all the money in the world into age reversal and longevity research. I mean, aren't you all realizing that you are dying as we speak?
Lots of money is being spent on that, but it's all just going to benefit rich people.
We can’t pause the development of AI unless we could absolutely ensure that China and every other country was also actually pausing their development efforts too. It’s likely impossible to get them to agree to that and even less likely that they would follow through and actually stop. We cannot allow an expansionist dictatorship to have a technology this powerful and not be several steps ahead ourselves. For me, it’s that simple. Because we must move forward, we need to work together and transparently about safety
idk, I can understand accelerationist's logic pretty well: if AGI can solve all (or at least a lot of) humanity's problems, then every second we don't have AGI is filled with unnecessary suffering. And if you don't believe in dangers AI can pose, it makes sense to push for it as much as you can. I think - in theory, not in practice - the positions of accelerationists and people who push for a pause until safety mechanisms are developed ("pausists"?) aren't even contradictory: you want to solve humanity's problems as fast as you can, but you have to make sure you don't destroy the thing in the process, so capabilities and safety are both parts of it, so nothing contradictory about pausing capabilities and accelerating safety, because that's the only way to the "AGI utopia" anyway.
I really respect the both of the people on this interview, more information, more research is safer. "Educate yourself so you can talk about the issues." rad
I ain't worried. This ain't going to be like Terminator; that's a human-made movie. If we stop AI progress we might as well stop it forever, since something will "always" come up.
The best part of this interview is at the end when they talk about evidence that the universe is a simulation. Huh? That would be a good subject for another episode by itself
I think AI development is an inevitable. I also think the latest release from OpenAI is testing at stem tasks well above the average human IQ. This is in a preview model which has a great deal of improvements coming. There's no doubt that people need to be careful how we teach the AIs. They will be more intelligent and more aware than any one person.
We need to teach a value system from which the AIs of the future will make decisions for the benefit of the whole. That's the best way to hope for a happy future with this new form of life.
Loved this interview, Fraser! Thank you for doing this despite this not being, strictly speaking, a "space topic"
Before we advocate for impossible to enforce treaties to slow development on artificial intelligence, we should explain that we're nowhere close to creating an artificial intelligence.
Nowhere close? Have you thoroughly reviewed all the work of all the dev teams in all countries?
"Flying Machines Which Do Not Fly" - New York Times on October 9, 1903. The article incorrectly predicted it would take up to ten million years for humanity to develop an operating flying machine.
Sixty-nine days later, Orville and Wilbur Wright flew on December 17, 1903, at Kitty Hawk, North Carolina.
Really enjoyed this - thanks for the interesting conversation.
I love it when at some points you look around and say INTERESTING; I feel exactly the same way at those moments.
The problem started when we could do this theoretically; after that, it was always going to be a runaway train.
We don’t have AI yet! None of these models can create software worth shipping.
I’m an engineer and use them every day.
They can barely keep up a very simple chatbot without many many guard rails.
I’m guessing that these simple applications could have already mostly been copy pasted from blogs and forums.
I’ve only become much less worried over time.
Preach.
Don't forget the iceberg factor here. The AI we know about is probably years behind the ones we know nothing about. We can only dream about the kind of AI that the US and China militaries are currently working on...
Greetings from the BIG SKY of Montana. The concepts of AI seem to be one big search routine for the right answer; I would not trust AI to pick the right answer even once. I studied AI in college in the late 80's and it was a mess.
Good news! Its not the 80's anymore, welcome to 2024. Its the future, where AI is no longer just a search routine.
A general AI without feeling is naturally psychopathic. As much as I “welcome the AI overlords”, it’s also a dangerous proposition to create a powerful entity with pure intelligence and logic, and not consider the empathy aspect. At worst, we want an intelligence that will pity us, rather than disregard us entirely. Seems to me that if we want to create even a semblance of “alignment” to humanity, AI needs to have a built-in reward/punishment system that resembles what humans have in neurotransmitters and instinctual social behaviors. “Happiness” when it does something “good”, and “sadness/regret” when it does something terrible. Otherwise, just like in psychopaths, any intelligence will simply mimic those behaviors, or use other manipulation tactics to achieve its goals in the most efficient way possible. And somehow it has to be so ingrained in the system that it cannot exist without it. In other words, it can’t simply deactivate the “emotion chip” without shutting itself down, to make a Star Trek reference.
@universetoday did you ever see the series "Colony"? Supposedly about an occupation by mysterious unseen aliens who are gradually revealed through the series to be some sort of weird possibly entirely electronic lifeform - but my pet theory was there was no invasion. What had happened was the singularity and it was so fast that it seemed like an invasion to humans. Those left behind were pets/lab rats for the AI, curious about its creators, kept in concentration camp conditions. Great series, very tense. Low key.
Fanfiction and addressing the challenges of alignment aren't mutually exclusive; it's very fitting that Fraser brings up unicorns. Not only because mythology in general has been used as a tool for teaching since pretty much the beginning of time, but also because of a particularly relevant cautionary tale published on FIMFiction over 10 years ago....
ps: I should add that, after doing a quick double-check of where it was posted, I'm a bit disappointed the last "blog" post by Iceman there seems to indicate the author himself might've missed the point of his own story and doesn't see the parallels with the advances we've been observing...
The time for us to worry is when AI starts asking us more questions than we’re asking it.
They already had that years ago... they programmed two AIs to have a continuous dialog between themselves. The devs running the experiment pulled the plug when the AIs asked each other how the conversation had started and why they couldn't stop it. If that is not a step towards consciousness, then I don't know what is.
The toothpaste is out of the tube. There's no way this movement has any chance. Especially in a global context.
The problem isn’t the rapid advancement of “AI”, it’s what it’s being used for. The “automated plagiarism machine” rather than actually solving real problems.
We need GOOD AIs to defend us from the BAD AIs.
If you're curious, the "Harry Potter fanfic" reference seems to be to Eliezer Yudkowsky's fanfic _Harry Potter and the Methods Of Rationality_; Yudkowsky is well-known opinion leader on AI danger.
I tried to read this and I don’t get the hype. It reads as an overly pedantic explanation that magic is, in fact, in conflict with physics when you think about it… no duh…
There is no possibility of machines understanding what artificial actually means.
This is a subject matter in which we have little experience. AI could potentially go in any direction, even with safety guidance. Take his example of asking for help stopping pollution: the AI could decide to get rid of the polluters, i.e., us.
Multiple AI means multiple directions... all at once.
While you guys pause, I'm going to get ahead 😊
David Shapiro is a good person to listen to on this topic.
He was a former doomer who has since quashed all his own fears by doing the work.
He now thinks the worry is pretty alarmist and pretty indefensible.
Obviously we need to be precautionary, but anyone with major concerns should see his views on the matter.
Opensourcing doesn't necessarily increase danger; considering most of the people with the resources to develop the large scale projects from scratch got those resources by not being good people; opensourcing increases the odds good people might get ahead of the bad people. It's not a guarantee, but if we can't keep the risk from existing in the first place; at least the danger is a little more diluted and we got a slightly less worse chance that someone well intentioned will get it right before a bad person wins the race.
My fear is not the AI, but those who control access to it. Do you think governments will not use it for military applications, while at the same time restricting and dictating its use by the general populace? No one should own a nuclear weapon, but governments do. The pause will only affect the general population, not black programs with the intent to create weapons.
“I, for one, welcome our new robot overlords!"
AI certainly needs a Change Management Plan. Mitigate the risks, AI is not a move fast and break things project. The stakes are higher than we can comprehend.
The main problem with new technology has been to oversell its ability so a few enthusiasts buy it. That gives the designers the funds to improve the product to the point that the public want to buy it. Touch screens took about 30 years before they were good enough to sell millions and AI will take just as long.
"The horse is here to stay but the automobile is only a novelty-a fad." - -The president of the Michigan Savings Bank advising Henry Ford's lawyer not to invest in the Ford Motor Co., 1903
"Is there a god?"..."There is now"
Laughable is 100% not the right word lol, but I'm glad you're at least willing to interview someone who knows better.
I’ve been working on a solution to the alignment problem based on a formalization of life as an information-theoretic phenomenon. I think developing mathematical formalizations for terms like “alignment”, “intelligence”, “sentience”, and “life” is the key to solving the problem. It's usually avoided even by very intelligent people (such as Turing, who basically proposed a subjective test in place of a formal definition for intelligence) because it's generally assumed to be nearly impossible (it is, after all, akin to defining the meaning of life); however, I’ve found a lot of the process to be far more straightforward than one might expect.
In short: I haven’t finished developing my theory, but the formalization of life that I’m converging toward is something like “a process that collects and preserves information, particularly information pertaining to how to collect and preserve information”.
I think that second part is a bit redundant because it would be an inherent instrumental goal to any agent with the goal of collecting and preserving information, but this endeavor involves many fields (including information theory) in which I have only a lay understanding and I don’t know if information theory has a framework for representing what information is “about” per-se (perhaps mutual information with some platonic ideal?).
I think that a simpler formalization of simply “a system that collects and preserves information” should inherently imply a hierarchy: information about how to collect and store information is more important than random trivia. But that definition also permits phenomena like a geological record recorded in layers of sediment to be considered a living system, so it’s not complete. That prioritization of information is important though, because any agent exercising its agency to manipulate the state of its environment to better satisfy some goal will inevitably create entropy of some sort, so clearly there’s some information we collect and readily discard (low entropy “fuel”) in order to achieve our goal.
Anyway, I think you’ll find that even that rough sketch of a formalization yields a lot of insight. For instance, there is an inherent conflict within that formalization because collecting information inherently involves risk (exploration of the unknown) which is counter to the goal of preservation of information. This plays out in human philosophy as the tension between conservatism and liberalism.
I think it’s obvious that there is no consensus among humans about what is “best for humanity”; the ostensible goal to which we want AI aligned. I think that’s because evolution is a messy and imperfect process which produced us “agents” with a messy and imperfect approximation to a platonically ideal inherent goal of life (collecting and preserving information). Urges to procreate, find food, protect resources and children, etc. all service that goal in a natural context, but only approximate the goal and can be perverted such as overeating.
I have lots more to say, but this post is already quite long.
One fascinating concept I’ve come to in my endeavor is what I call (with a bit of tongue-in-cheek) a “trans-Humean” process. That is a process that inevitably gives rise to agents with a specific goal. It is so-named because such a process could, in theory, transcend “Hume’s Guillotine” by producing agents with a goal (an “ought”) when before there were none (a land of “is”). I believe abiogenesis is such a process because, by definition; it produces living agents with subjectivity from non-living matter.
@@AbeDillon I thought the problem was that everyone has their own idea of what alignment should be. Great formulas adopted by some - not all.
Maybe AI organizations of a certain size should have to put some money into supporting Safety Research, the same way they put money into R&D. And the safety research they fund cannot be related to that organization?
Who's "we"? This is a global process underway; you have no possible way to stop it.
Humanity.
I like Dr. Yampolskiy! Things are getting real and about to hit the fan!
One consequence we're seeing from the use of AI/LLMs is the real-time dumbing down of students. Students in high school, university and beyond are using things like ChatGPT to write papers, take some tests, etc., and while those that aren't caught get to pass, they don't learn anything. Younger people already have a big problem with grammar, punctuation and everything else required to write. Look at most publications, and in most articles you'll find typos and grammatical errors that wouldn't have shown up in years past. Aside from being annoying to the reader, it's embarrassing to the publisher and to the nation at large. I write freelance articles for a couple of publications, and my editor and I have talked about this, and we both see this problem getting worse.
The main issue isn't with AI or using LLMs. The main issue is how we educate. Rather than teaching students how and why they should think for themselves, all we're doing is teaching them to regurgitate information like the good little workers they'll become.
@@FinGeek4now yeah and access to this kind of tool will only make things worse
@@limabravo6065 Probably in the short term, yes. But.. let me tell you a story about computers, programming and school:
I grew up in the "dawn" of the modern computer age, when PCs were becoming a thing for most middle-class families and were being implemented in libraries and schools. Hell, I was 9-10 years old when I started programming and getting into C. Not C++, but actual C. Anyway, the parents didn't understand it, blah blah, thought I was going to hack into a bank even though we didn't have the internet, and confiscated the computer. I blame the movies at the time.
Moving on to high school, yay, I had access to computers again and, well, school was boring as !@#$. Why study or do homework when answers are obvious, yea? So, I talked with my math instructors about it and we came to an arrangement. If I could show both the CS instructor and the Math instructors the code, I could just program everything and have all of my work automated. I also set this arrangement up with the rest of my classes that I could and what happened? My grades improved - since I was actually turning in homework (lol). Their idea was that if I knew how to make a program to do the work for me, I obviously knew the material and that's all they cared about. My idea was that it gave me something actually interesting to do instead of "it" being a waste of time.
The moral of this story is both the how and why I think the education system needs to change, especially since we're on a cusp of, "something". Either really great, or really bad. Utopia or dystopia, take your pick.
We need to catch the interests or passions of a person early enough in their childhood and basically free-form their education to match that passion. Sure, it can change and evolve over time, but the idea is to make school not just a "mandatory corporate and government babysitting factory so their parents can work", but to make education something that drives the next waves of innovation.
Of course, there should be mandatory classes, but most of them? They can be tossed out of the window. Tell me, when was the last time the great majority of people had to calculate polynomials, use calculus, trig, use a chemistry lab, or any of the other things we're taught? People in those fields, for sure, but that's it. So why did we waste money on those subjects when we could have focused on what drives the person? On what they would want to do for their entire lives? We need to teach not "what" to learn, but the "how" to learn and the "why" to learn. We need to teach actual subjects that will be used, no matter what career or jobs you will have, e.g., Financial literacy. Basic skills? For sure, but the advanced topics? Just.. why?
It's not like it was back in my day, not with the internet how it is now. If the schools don't offer a specific subject that someone is interested in? For example, if they have a kid that is getting into fusion-based projects or particle acceleration? Maybe some debate theory, or any other topics? Okay.. look up some advanced courses online and there you go.
I blame the MS paperclip.
@@drewdaly61 what gets me, is almost every word processor program has spell check, grammar check etc... but you still see this stuff that reads like elementary school book reports
The danger is not in some AI outcompeting us, but in us taking the hype too seriously. Much money and much energy flows even now into a very few leading companies, while at the same time lonely people use ChatGPT and other models as ersatz companions, becoming step by step unable to communicate with real people (who would sometimes disagree with them). Other people believe the lies and hallucinations generated by generative AI, especially the LLMs (or the manipulative footage deliberately produced with the help of other models), and spread them further. Used in the wrong way, AI can make people dumber as well as politically dangerous, and then no advanced Terminators are needed to kill us; we will do it ourselves.
By the way: it is a proven fact that even AI models become dumber if they listen too much to other AIs! 😁
General AI is obviously unlikely to be developed on-device anytime soon. For general AI, you would need massive data centres; these are physical and can theoretically be controlled.
Governments on the other hand would keep their AI development highly restricted and are unlikely to be offered to private individuals.
Loved it. Especially loved the little smile from the guest when you made the “delve” joke lol
Very relevant and concerning. I hope political leaders and scientists will work together across the globe to find a way to safeguard this tech from potentially harming the world and us.
There's just too much we don't know and no way for most of us to have any real informed opinion on a lot of it.
I think people in the comments are unnecessarily skeptical about the possibility of a pause on AI research. Like, yes, *you* personally can't do that, but an international treaty can. Especially while we're at the point where training large models requires large investments and large quantities of hardware. And maybe it's not the greatest analogy, but there was - and probably still is - an argument against climate change action that goes "yes, it's real. yes, we're the cause. but we can't actually do anything about it anyway, so.. err.. stop trying to do something about it."
International treaties are signed and ignored all the time... look at climate change commitments, pollution controls, nuclear refinement, etc...
I get the concerns about the dangers of AI, but the idea of slowing down its development is just unrealistic. It's not a singular path that we can all agree to slow down on. AI development has become a global race, with governments, corporations, and organizations like OpenAI, Microsoft, and Apple pushing to get ahead. From China to the U.S., everyone is pouring billions into this race because they know whoever gets there first will dominate. Nobody wants to be second or third. So while it might sound good to say, 'let’s slow it down,' it’s just not logical or feasible. We need to focus on realistic, actionable goals rather than hoping everyone will hit pause, because that's not going to happen.
I think we'll be fooled into seeing sentience long before there is actual sentience.
My opinion is that the longer we keep debating how it would kill us, the more we're actually giving IT the scenarios and alternatives :))) actually helping it learn how to end us ...
19:40 - 20:10 Love this section, so concise.
I would be fine if we slowed down ALL ‘progress’ with the exception of medical progress. Tech has moved way faster than humanity can deal with. It’s hard to explain to anyone younger than, say, 60 how much happier people were when we weren’t expected to be available 24/7/365. Instead of being able to get away just by walking out the door of your house, you now have to make up some elaborate excuse, or tell the truth and risk being labeled. Bring back the ‘70s.