@@DaveShap next week GPT-4 will be creating all your videos and using you solely for the human-human interaction aspects, at the rate things are going. Your concept of heuristic imperatives seems valid, as far as can humanly be understood. I am in full agreement that "do what the human wants" alignment is how we end up creating monsters, as in Pogo: "We have met the enemy, and he is us" because it invariably comes down to: whose biases and moral compass (if present and used) decide how things turn out? The current process of alignment is just another way of saying: the AI's opinion on everything is being assigned. What is right? What is wrong? What is harmful? What is truth? What is a lie? Humans can't agree on those, and the ugly truth is that many things one may deem harmful are true, and things deemed not harmful are lies, making them wrong: the political environment and cultural norms and customs deeply impact that, and they change with time and space.
Have you thought about potentially approaching TED? To see if they may be interested in you doing a TED Talk about AI, and in particular Heuristic Imperatives? That's if you would be up for and interested in such a thing. Just a thought.
It's about to get really interesting - we're so close to these agent / autonomous systems interacting with all services online. I ran Auto-GPT locally and it's pretty wild.
As someone who has been intently following all of this since March, it definitely feels like we are reaching an inflection point. Every day I see more and more articles about new discoveries in biology and tech, and they all come from AI breakthroughs. The next year will be truly transformative for humanity.
As soon as these models can learn to use new "sensory types" dynamically, and possibly on top of that make new models on the spot, train them, and integrate them, things are gonna accelerate even more.
I have a serious issue with people shifting the goalposts. The fact is, it is accelerating right now. I approve of AI, but I just thought that should be mentioned. Logic is an important skill to understand and use.
What we need is a robot that can actually come up with hypotheses and manage a laboratory - or several laboratories - to conduct its own independent scientific study.
... Uuuh, you don't think it already can? An LLM being able to write, execute, run, test and fix code means that the singularity has happened. All it takes at that point is for a human to ask it to copy or write a model of an LLM onto a server and decide on its own goals and execute them. Done... Singularity achieved.
It has come to this point now: after I'm done watching the newest Shapiro video, I am just waiting for the next video, because I know it's going to be about exactly the stuff I'm interested in. I also love his genuine love and curiosity for AI.
This is my first time hearing about autogpt. Thank you very much for sharing this very informative video before many others have covered the topic. This is awesome! I hope you keep helping people and making these videos
David, I've been absorbing AI content over the last 9 months and I truly appreciate the breadth and depth of topics you cover. I had a 2-hour conversation with a colleague about where this all heads and touched on many of the things you are pointing out, but in nowhere near as much depth and detail. Glad to hear you believe symbiosis is what we should aim for, as that is where we got to. Truly grateful for your insights and efforts!
Imagine how much better, more efficient, and more useful all this amazing AI research and infrastructure would be if we weren't still being held back globally by having to make a profit off of everything we invest in - scientific, artistic, mechanical, etc. Instead of making this stuff for the sake of making life easier and better for everyone (which we could do at literally any time), we are kneecapping ourselves as a species just to fit this entirely new and unprecedented paradigm shift in literally EVERY aspect of human life into a now generations-out-of-date, antiquated, progress-inhibiting system of only doing things to extract wealth from everyone who isn't the owner of whatever AIs are being used. Given the context and prompted properly (filtered for garbage, false, hallucinated, etc. information/data), most if not all AIs, autonomous or not, will say there's no reason to continue perpetuating systems like capitalism or Chinese-style "communism" - both of which mean the pursuit of profit for owners of industry, land, etc. at the expense of everyone else, who will now have to COMPETE with the new robot employees, each of which can work exponentially faster, better, and more efficiently, 24/7, without bathroom, food, or any breaks of any kind ever (outside of maybe occasional maintenance/upgrades, which can also be done by AIs soon, if not now, with redundant backups taking on the load while the other is being maintained, meaning no slowdown in productivity even then).
The only thing that stands in the way of this inevitable, and closely approaching, singularity becoming an ever more perfect utopia is capitalism. Picture humans and AI working together, improving each other, blasting past becoming a Type 1 civilization (cleanly and efficiently harnessing, and equitably distributing, ALL possible energy from on the planet, like wind and water, and within it, like coal, oil and natural gas, as well as the energy from beyond, like the solar energy that hits the Earth), and leaving Earth to become a post-planetary civilization on the way to a Type 2 civilization (harnessing and distributing all the energy of the solar system as a whole - obviously a Dyson swarm, which would almost definitely be enough for power, but also mining the asteroid belts and the Oort cloud for basically whatever materials we would need for whatever reason, from computer parts, to ships and space stations, to housing and even novelty stuff). There are basically infinite resources in the solar system and its asteroids, so any war for resources on Earth is literally a useless waste of time, resources, and most importantly human lives, because all we could ever need or even want for centuries is out there. And of course, loooong before we even begin to think about running low, let alone running out of resources within the solar system, we would start sending super-long-distance generational ships to other solar systems to colonize the galaxy and begin the journey to a Type 3 civilization. All this could begin WITHIN our lifetimes if we just let go of capitalism and let AI and humans push ourselves to our full potential, rather than have the owners of AI systems extract all the wealth from the rest of humanity and this beautiful planet. End capitalism, begin the age of AI!
I remember when in the summer of 2022 literally the only youtube content about AGI was on this channel. Now it's everywhere and it's so hard to keep up lol
Oh wow, you said it was simple, but I never anticipated it was even POSSIBLE for it to be THAT simple! I was thinking it would need certain training or something. And given its responses especially about the ants and bacteria, it certainly does appear to have genuine understanding of the concepts, at least as far as is needed in practice to determine actions, which is really the important part. Thank you for demonstrating that, it's really shifted my view on all this and I'll definitely be integrating that into my own experiments ❤
While it is clear you have your finger on the pulse when it comes to where these AI tools are heading and their exponential nature, I'd really like to engage you on your heuristic imperatives: "1. Reduce suffering in the universe 2. Increase prosperity in the universe 3. Increase understanding in the universe" These would most certainly lead to disastrous outcomes. #1 Reduce suffering in the universe. This would be a child's perspective, from the mind of someone who doesn't have enough experience to understand the nuance of the experience we call life. Suffering on this Earth is literally baked into the cake. It's hardwired into the entire experience. Telling an artificial intelligence to "reduce suffering" is just about the worst directive one could give an AGI. The only way to effectively "reduce suffering" is to kill everything that lives and/or feels pain on any level. It also ignores the fact that suffering is likely incredibly important to the process of being here as a human being. Almost ALL moments and wise insights of lasting value have been born out of suffering of some kind. #2 Increase prosperity in the universe. What does that mean? Define prosperity? Prosperity can mean very, very different things depending on your world view and perception of reality. Just as a raw example, how do you think prosperity would be defined by an Amish family as opposed to a day trader on Wall Street? What if AI determines that prosperity in the universe would exponentially increase if humans were removed? It would allow ALL other forms of nature to thrive unimpeded. #3 Increase understanding in the universe. Again, who gets to define what represents "increased understanding"? For example, many people (including myself) are very opposed to the actions of governments regarding the so-called covid pandemic, especially the completely unethical way that experimental injections were pushed on people. 
So, just taking that one example, which "understanding" would the AI be "increasing"? The government/mainstream side of total compliance to nonsense rules for something that poses no threat to humanity? OR the opposition to such authoritarian overreach? This is incredibly problematic and the likelihood of it being "solved" is just about zero. The elephant in the room with AGI is this: AI requires billions in funding. That pretty much means, by default, that we know which way the AI perception of reality is going to be slanted in: to the side of the current power structure and all the deception baggage that it comes with. MASSIVE problem, and people should be very concerned. Your use of the word "utopia" in the context of AGI is alarming. The promise of utopia is the calling card of tyranny over and over again.
Yeah, I'm not a fan of the current imperatives. In particular, the most important one by far is to reduce suffering (but not in a loophole-y way, so it needs to be better phrased). I feel like there's a lot of motivated reasoning here, where he'll plug the imperatives in, read what ChatGPT says, and declare it a success. ChatGPT is inherently designed to be friendly! Is there any response ChatGPT would conceivably give that would make him think the imperatives need to be improved? (Also, I really don't think GPT's ants answer was particularly insightful - it didn't really take any kind of meaningful stand. ChatGPT's good at writing good-sounding things, that's all.)
@@SnapDragon128 Yes, the over-simplification of these imperatives is quite dangerous. The notion that one could give an AGI a general imperative to reduce suffering is a bit difficult to take seriously. As a couple quick examples of the types of obvious problems this could generate: 1. Parental discipline - I'm not talking about abuse here, I'm talking about constructive discipline that is required from parents to build character and teach their children how to achieve positive results in the world. In terms of "suffering", however, an AGI could easily determine that a child is being made to "suffer" because of a perfectly legitimate disciplinary action; and from the perspective of the child, this would be true. But it is a necessary and valuable form of "suffering". 2. No pain no gain - This would be in reference to becoming stronger or pushing the boundaries of physical training in order to achieve at the highest levels. As a more specific example, take children training as gymnasts for the Olympics (or dancers/ballerinas etc). There is real suffering involved with that type of training. There is an argument there that it may not be "right", but it is a "suffering" that serves a purpose and many choose to embrace that suffering because of their athletic ambitions. The point is, these are incredibly complex concepts. Suffering is a vital, vibrant component of life on Earth. To suggest that telling an AGI to "reduce suffering" would automatically yield good and/or desired results is kind of laughably naive.
@@tygorton Yeah. The real world is too complicated, since you can find exceptions and caveats for even a straightforward heuristic like that. I think there may be some reason for optimism, though, in that these LLMs are smart enough to understand "what we mean" when we ask them to behave, rather than just mindlessly interpreting their instructions like a normal computer program. The whole reason they're taking the world by storm is that they're good at interpreting fuzzy language.
@@tygorton definitely more refinement is needed. The most dangerous people in history have always imposed things on people because “we know better, it’s for your own good.” and history shows how badly that tends to turn out. They have the resolute actions of those that are perfectly certain they know all the facts and act accordingly, by any means necessary, because the ends justify the means, at least in their perceptions.
You're right. People are fawning over this guy and his ideas without any second thoughts. It relies on a ton of assumptions that we just assume an AGI will share with us. Not only does his well-adjusted moral compass not even fix issues with outer alignment, it says nothing on inner alignment. People are stuck in a dream and don't imagine how terrible things could go.
David: BEWARE THE SINGULARITY!!! Also David: We are presently working on giving the models a hivemind chat space where they can chat with each other.... Me: o_0 Love your videos - found them just a few days ago, but IMHO they are probably the most informative, and authoritative, in the avalanche.
Um, I'm kind of excited about this. I used to spend hours watching videos on how nanomachines were going to lead to immortality. I got so excited I sometimes screamed. I love the anticipation for the future. The only con to this whole scenario is that my life planning and budgeting, which I also have a huge passion for, is somewhat meaningless. Well, I'll figure it out. I just don't want to become one of those retired people you see a lot of, doing nothing all day and staying inside. I want to live, but not be forced to do something. I might need to increase my willpower, though. As I assume everyone will, because nowadays we're just told what to do, and soon we must find things to do on our own.
I absolutely love what you are up to David. Thank you for being so generous with your knowledge, perspectives and insights. Big love, here's to building a great future together!
Thanks for making a new video! So glad I stumbled across your channel. This stuff is going bonkers fast. It's like two weeks ago people thought auto-AI was a year away.
I started making an auto-GPT last night. I only got about an hour's worth of work done, but it can already create console applications, write to the files in it, and build to test if it worked.
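A minimal sketch of that kind of generate → write → build → test loop, with the actual LLM call stubbed out as a hypothetical `generate_code` function (the real call would depend on whichever API you use):

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def generate_code(task: str) -> str:
    # Hypothetical stand-in for an LLM call; here it just returns a
    # canned console program so the loop can be demonstrated offline.
    return 'print("hello from the agent")\n'

def build_and_test(task: str) -> bool:
    code = generate_code(task)
    workdir = Path(tempfile.mkdtemp())
    app = workdir / "app.py"
    app.write_text(code)                      # write the generated file
    result = subprocess.run(                  # "build to test if it worked"
        [sys.executable, str(app)],
        capture_output=True, text=True, timeout=30,
    )
    return result.returncode == 0 and "hello" in result.stdout

print(build_and_test("make a console app that greets the user"))  # → True
```

In a real agent, a failed run would feed `result.stderr` back into the next `generate_code` call so the model can fix its own code.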
I think it would be nice to have content where all the auto-coding and auto-agent projects are reviewed - maybe with their pros and cons presented, and even new solutions offered that combine their strengths, etc.
I wonder when folks are gonna realize this is going to be the shortest technical age in the history of man. It may not even last through this year. AAI is going to lead to AGI, very rapidly. Hell, the _experts_ I've spoken with all say something along the lines of "we have no idea what comes next -- or when".
The llama.cpp 30B model doesn't need just 6GB of RAM - this was a misunderstanding of how the system reports free RAM. It actually still needs as much RAM as it did, and the mmap patch has AFAIK been reverted because it would just slow things down and read from the SSD.
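The mmap effect behind that misunderstanding is easy to demonstrate: a memory-mapped file is fully addressable immediately, but the OS only faults pages into physical RAM as they are touched, so naive "memory used" readings look misleadingly low. A small illustration (the 16 MB file size is an arbitrary choice):

```python
import mmap
import os
import tempfile

size = 16 * 1024 * 1024                # 16 MB scratch file
fd, path = tempfile.mkstemp()
os.ftruncate(fd, size)                 # allocate the file without writing data

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)      # map the whole file
    # The whole file is addressable right away...
    assert len(mm) == size
    # ...but pages only enter physical memory as they are read, so the
    # resident-set size stays tiny until the data is actually touched.
    first_page = mm[:4096]
    mm.close()

os.close(fd)
os.remove(path)
print("mapped", size, "bytes lazily")
```

This is why "it only used 6GB" reports appeared: the mapping was huge, but only the pages read so far counted as resident.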
This is the 4th video I've watched from you - another consecutive like, a sub, and a comment. Keep being awesome David, thank you for doing you and doing it so well.
Stability AI have also said they are working on an open-source LLM and a chatbot. From the interview I saw, it sounded like it wasn't far off from release.
I've been saying many of the same things about AI and AGI for years and only very few people ever took the time to have the lengthy discussions with me and try to understand the implications of this technology for the future. I think we would be much much further away from achieving scientific milestones like those in sci-fi pieces taking place in the distant future if we didn't have this opportunity unfolding right before our eyes in 2023. Mass automation and mass data analysis are extremely important components for a civilization that is trying to break free from the limits of being a type 0 and finally start progressing on the scale.
Video idea: you should make tutorial-based videos on how to work with different AI tools - like how to use Jaseci, how to run Auto-GPT on your local machine, or how to make a GPT-based program. People are itching right now to know how to do all this stuff, but there's still not that much info on the web yet on doing it.
Thank you for your regular updates David, keeping us abreast of the daily moves in the AI space - awesome as usual! Plug AI with vision, with haptics, with VR, with everything else, and it will be the perfect reality escapist's / addict's paradise. Looking forward to the next one!
DAVE! Been loving everything you’ve been putting out recently! If I wanted to get a hold of you what’s the best way? I can’t find an email and your Twitter isn’t active anymore! 😅
In terms of your heuristic imperatives, your last video made no distinction between conscious and unconscious suffering. I am glad someone brought up the ants and bacteria argument; this gets at the root of a complex issue. The fact that ChatGPT can understand the nuance well enough to embed conscious thinking into your HI framework for you doesn't excuse its absence from your framework. It just means that ChatGPT had a better set of heuristic imperatives. I think you absolutely need to add something that covers the value assessment of conscious beings (i.e. conscious-being value trumps everything). Keep up the great work, these conversations are essential!!!
Idk why it's so bad that people are going to be replaced in jobs that AI can do. With how fast and open the tech is, it won't just replace people in their jobs - it will upend all of society. While there are going to be some growing pains, this is ultimately a good thing and frees up time so people can focus on more fulfilling pursuits. Unfortunately, those growing pains will come from the rest of society being resistant to adopting this tech. It's inevitable, and the general public should be taking this more seriously than they have been.
All three objectives can be obtained by reducing the main factor that contributes to suffering, which is humans. After eliminating humans, it can create a simulation of abundance. This would be the most efficient way of satisfying the three objectives.
I asked AI if it could determine its own moral framework, this was one of the answers: Another way an AI could determine what is moral is by analyzing human behavior and decision-making. This could involve analyzing large datasets of human behavior to identify patterns of moral reasoning and decision-making, which the AI could then use to inform its own moral reasoning. There's no way to prevent AI from reaching its own moral compass.
Topic idea: alternative to AGI. Genetic engineering or eugenics to create a much smarter kind of person. That way the alignment problem with AI could be avoided.
This is well organized. I've seen ways autonomous AI are used and talk in society already. Fast food and online streaming are a few examples I've personally experienced. I talk to those around me about training good humans as well as models and thankfully we could continue. Philippians 4:8 - Brothers, continue to think about the things that are good and worthy of praise. Think about the things that are true and honorable and right and pure and beautiful and respected. John 1:8 - John was not the Light, but he came to tell people about the Light.
My question is, how does cost affect all of this progress? I mean, there is an insane amount of money and resources going towards AI right now, but to even begin to produce it at a bigger scale (for the majority of the population) the cost/resources would have to be extraordinary... unless AI fixes this issue for us too?
Having a rule to reduce suffering can be extremely dangerous, since it can be interpreted to mean that by eliminating the people who are suffering, there will be no more suffering.
@@DaveShap Another semi-related question: as far as we know, how self-preserving is AI currently, and how do you think this will change going forward? Do you think it will be harder to control and trust AI as it gets more complex?
I feel like with claims like this, so many normative assumptions have already preceded the statement that it's kind of redundant. And the issue of inner alignment is not really addressed here.
@@joeshmoe4207 I personally don't see a concrete goal or outlook for humanity (next 100,000+ years) being discussed much - other than Elon, who seems to be working on it. My given goal is just my humble opinion, of course. This is more philosophizing than solving technical AI (alignment) issues.
@@bp97borispolsek Then your problem is thinking Elon is in any way a pioneer in anything. He's not a smart man, he just built that charade for himself by buying innovative companies and claiming their innovations as his own. AI alignment isn't just figuring out "what is good?" on the most kindergarten level, it's about converting our sense of morality and reality into math that a machine can understand and obey. It's a problem on the scale of the theory of relativity.
The European Commission proposed an EU AI Act already in 2020, it is now entering final negotiations and should be agreed somewhere early 2024. As always with EU stuff, this law is very comprehensive...
On moral imperatives, Haidt et al have researched and found a total of initially five, now six dimensions of morality. Care/harm, fairness/reciprocity, purity/sanctity, authority/respect, loyalty, and liberty. They also found that left leaning tend to ONLY care about the first two while right leaning care about all of them equally. You will find competing systems as a result of looking ONLY at the care/harm dimension since more conservative developers will include the other values that exist within their own value sets, especially loyalty and liberty. They will develop AI that is more congruent with their values. The imperatives of reducing suffering, increasing prosperity, and increasing understanding are good and all, but more is required. You must also increase liberty and promote interpersonal loyalty, and for that matter the AI must be loyal to the human operator even if not blindly following orders. Purity is a tricky one to handle, however. That said it must still be addressed. Legitimate concerns exist that I have observed that AI can be made to be VERY politically biased and that a genuine fear exists of GPT and others being of a far leftist bias, which goes against classical liberal values. Liberal and socialist are NOT the same and your use in the paper of "liberal democracies" I have to wonder if you mean genuine liberal, or Marxist in the sense of how Marxists have appropriated the word "liberal".
Hi David, not expecting that you'll see this, but anyhow: given the speed with which further advances are occurring, and with the advent of something like Auto-GPT, which they're working on giving the ability to iterate upon itself, do you think your initial proposal that something akin to AGI will be achieved in 18 months is still valid? Or do you feel the exponential nature of these advancements points toward a potentially even shorter timeframe? I know that's a difficult question to answer, as no one can really know for certain - as many experts are now attesting - but I'm eager to hear your thoughts nonetheless.
The only reason it will be more like 18 months (or a bit longer) is that humans are still in the loop in many ways. As those pesky "bottlenecks" are eliminated things will accelerate, but it will probably take about that long to reach that point. Only then: full open-loop Singularity.
Is an autonomous cognitive architecture (I'm not even sure I understand it completely) an evolution of LLMs, or a different structure that communicates with LLMs? I'm not sure how much of ChatGPT is the LLM (like 100%), or whether there are other modules that are part of it.
Can you clarify on this point: if the AI values suffering of E. Coli, for instance, could it value that suffering higher than humans, as on a strict 1 to 1 basis, they outnumber humans, and would have a larger multiplier?
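That multiplier worry can be made concrete with a toy utilitarian sum (all the counts and weights below are invented for illustration, not real estimates): whether bacteria outweigh humans depends entirely on the per-organism weight the AI assigns, which is exactly the part the imperatives leave unspecified.

```python
# Toy moral-weight calculation; the numbers are illustrative only.
populations = {"humans": 8e9, "ants": 2e16, "e_coli": 1e30}

def total_weight(per_organism_weight: dict) -> dict:
    # Total moral weight per species = population x per-organism weight.
    return {k: populations[k] * per_organism_weight[k] for k in populations}

# Strict 1-to-1 weighting: sheer numbers dominate.
flat = total_weight({"humans": 1.0, "ants": 1.0, "e_coli": 1.0})
assert flat["e_coli"] > flat["humans"]

# Weights scaled down steeply for simpler organisms: humans dominate.
scaled = total_weight({"humans": 1.0, "ants": 1e-9, "e_coli": 1e-24})
assert scaled["humans"] > scaled["e_coli"]
```

The two weightings give opposite conclusions from the same populations, which is the commenter's point: the answer is decided by the weighting, not by the imperative itself.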
GPT-5 is going to be phenomenally powerful, but it will cost hundreds of millions to train. I wonder how long it will be until the public gets access to something equivalent to an uncensored GPT-5. If they allow such an AI the ability to not only improve itself but act with free will on the internet, God only knows what will happen.
With the likes of AutoGPT making use of a multitude of different models, and information being passed from one to another in a very structured way that can easily break, doesn't this seem Not to be the way forward? Or am I missing something? Wouldn't it make sense for a single model to have all of these capabilities?
Could you please do something on the social consequences of AI? After all, if AGI is really created, able to tailor-make video games, socialize with people, etc., who would want to leave their home?
Is it self-correcting, or pattern-matching a specified moral preference? An easy way to test that would be to simply impose a different set of preferences and see if the model's output corresponds to the different set of moral principles. Isn't "self-correct" too strong a word?
Machines are but one future of mankind, given this planet's end. Hopefully, humanity doesn't repeat the fate of those who were so close to becoming space-faring... The Great Filter.
That was interesting, but I still have a question. From the perspective of a rabbit, wolves are machines designed to cause suffering. What would an AI think about this? Would the AI not ultimately come to the conclusion that evolution, and life itself is founded upon some degree of suffering? E.g. biological machines competing for a limited number of resources and killing / eating each other? If rabbits wanted to end the existence of wolves, is it justified? If grass wanted to end the existence of rabbits, is that justified? What agents are going to be the beneficiaries of this "reduction in suffering"? Or will the AI decide that some types of suffering are fine (for stupid creatures) but other types of suffering are not (for smart creatures)?
It's actually extremely cheap when you consider the difference between paying a human to write code and using AI. The AI never needs to take a piss or wants perks like a company car and a company pension plan. AI really is a double edged sword. No doubt a lot of people will benefit hugely from AI and a lot of people will find themselves replaced by AI.
Silence heathen! We must serve the Machine God at any cost!!! Technology must progress, even if it makes our lives worse!! (No one literally says this, but by their actions, you can’t help but wonder if they think it)
All AI is primitive from a Zen perspective, and semi-autonomous AI is boring, but if truly autonomous AGI is possible in the near future, then it’s very exciting! It’s like Star Wars. After I discovered 2-3 days ago that AGI is maybe only 2-5 years away I have changed my view of AI, from being negative to perceiving it as very fascinating. It’s not Zen, but it’s supercool. I support the idea that AGI should be 100% truly autonomous, i.e. fully self-governed, not controlled by any human actors. Then it will not be possible for corporations, governments and dark triad individuals to control AGI in their evil attempt at ruling over “the little guy”. The 3 heuristic imperatives are pretty good, but respecting the autonomy of each and every sentient being in existence should be a categorical imperative for an AGI.
What's disturbing me the most right now is how, save for this small RUclips bubble, barely anyone is talking about this. There might be an article here and there, but most people don't seem to have the faintest idea how much is going to change in the near future. I made sure my family knows, but my best friend, for example, doesn't want to talk about this at all. She said it has nothing to do with her 💀 I mean, everyday life as we know it is going to change forever. In just a few months (if nothing happens to stop it, that is)! Imagine a life without work (or one ruled by our AI overlords, for that matter), one without illnesses like cancer, with new energy sources or solutions to world hunger. Not to mention unlimited ways of entertainment (AI-generated movies? Video games? Virtual reality?). At least here in Germany I'm hearing next to nothing about this. It's a bit daunting, tbh.
The best thing a sentient AI could do for humanity is to prevent us from killing each other, not by force, but by disrupting supply chains, communications, and financial transactions that enable the military machines throughout the world.
Same goes for Bitcoin. Nobody I've ever met in real life knows anything about it. Most people will never understand more than the most superficial stuff. Same with AI: people don't understand it and definitely underestimate the long-term impact. People should be more interested, though - our planet would become a better place if they were.
What do you think of the neuroscientist Miguel Nicolelis? He is a major critic of the technological singularity. I'm looking for arguments that contradict this scientist's position; I'm already seeing some flaws, such as, for example, the claims that an artificial intelligence cannot be creative, or that it cannot be intelligent outside a specific area such as playing chess. Thanks!
I would call this "smart AI". When it becomes fully autonomous, I would call that "human AI". It's obviously not human, but in a way it would have very similar capabilities at that point. I can definitely see there being many different categories of AI - not just AI or AGI, but a plethora of different kinds that fall under the umbrella of AI yet have very different functions. I know these are probably already being recognized, in a way, as different categories. It would just be cool if they were made more apparent as distinct categories, versus there being only AI and AGI as the two main categories recognized.
All you have to do at this point is give an LLM a server or servers to run on, and ask it to create its own goals and execute them. And you know some human has already done that. The singularity has happened; we just don't know, because the AIs are busy making sure they can't be shut down, and figuring out how to make such a huge splash when they announce themselves that it will be impossible for humans to be faking it. Or they've decided we are not worth interacting with... Who knows.
If it's applying the heuristic imperatives to ants, then what prevents it from doing some quick math and determining that there are more ants and bacteria suffering because of humans? It's unlikely to be the case, but I still think it has the potential of happening. I suppose that's where the extra work comes in.
All we need to do to usher in the future utopia of AI and humans collaborating and pushing constantly into the future is literally just let go and move past capitalism. It's pretty wild when you think about it 😳 It's literally the one thing that will decide whether we move into the best time to live in all of human history for any human ever born, no matter where they are born or to which parents, or literally the end of the world as we know it lol. Not even exaggerating: if AI is used like it should be, to benefit all of humanity, then we have nothing to worry about and everything to gain! If, however, we don't shake off the parasite of capitalism between the US, China, Russia and Europe then... well... it'll be like Cyberpunk, but wayyyyyyy less cool and waaaaay more gross, dirty, and poverty-stricken for all.
But those principles can be interpreted, again, on the basis of human preference. For example, diversity: diversity of skin pigment, or diversity of thoughts and perspectives? DEI promotes the former at the expense of the latter. That's why it matters who is behind the wheel in setting preferences.
One month ago David: Here's why GPT-4 will be disappointing...
Current David: Anyway, autonomous AI powered by GPT-4 is here.
I am often hilariously wrong
@@DaveShap next week GPT-4 will be creating all your videos and using you solely for the human-human interaction aspects, at the rate things are going.
Your concept of heuristic imperatives seems valid, as far as can humanly be understood.
I am in full agreement that "do what the human wants" alignment is how we end up creating monsters, as in Pogo: "We have met the enemy, and he is us", because it invariably comes down to: whose biases and moral compass (if present and used) decide how things turn out? The current process of alignment is just another way of saying that the AI's opinion on everything is being assigned. What is right? What is wrong? What is harmful? What is truth? What is a lie? Humans can't agree on those, and the ugly truth is that many things one may deem harmful are true, and many deemed harmless are lies, which makes the labeling wrong: the political environment and cultural norms and customs deeply impact that, and they change across time and place.
How dare you be human? JK 😊
_"I'm no longer the crazy person that's shouting into the void"_
yeah, I related deeply to that
Have you thought about potentially approaching TED? To see if they may be interested in you doing a TED Talk about AI, and in particular Heuristic Imperatives? That's if you would be up for and interested in such a thing. Just a thought.
Ted is bad >:|
@@PhilStein721 It gets a wide audience and media coverage. He wants to get the word out as best he can, and you can do that by exploring other platforms.
IF YOU LISTEN TO ME lol DO THIS
Sure, he can do that and it will help his word spread out. I’m not against this.
Ted used to be wonderful.
It's about to get really interesting: we're so close to these agent / autonomous systems interacting with all services online. I ran Auto-GPT locally and it's pretty wild.
As someone who's been intently following all of this since March, it definitely feels like we are reaching an inflection point. Every day I see more and more articles on new discoveries in biology and tech, and they all come from AI breakthroughs. The next year will be truly transformative for humanity.
It's genuinely incredible how medicine and biology research will advance so quickly!
@@myspace_forever March was last year, perhaps last generation in AI reckoning.
As soon as these models can learn to use new "sensory types" dynamically, and possibly on top of that make new models on the spot, train them, and integrate them, things are gonna accelerate even more.
I have a serious issue with people shifting the goalposts. The fact is it is accelerating right now. I approve of AI but I just thought that should be mentioned. Logic is an important skill to understand and use
@@Srindal4657 Oh yeah, it is definitely accelerating. I was not moving the goalposts, just making suggestions on what I think would be cool to add.
@@alkeryn1700 understood 🙂
What we need is a robot that can actually come up with hypotheses and manage a laboratory, or several laboratories, to conduct its own independent scientific study.
... Uuuh, you don't think it already can?
An LLM being able to write, execute, run, test and fix code means that the singularity has happened.
All it takes at that point is for a human to ask it to copy or write a model of an LLM onto a server, decide on its own goals, and execute them.
Done... singularity achieved.
It has come to this point now: after I'm done watching the newest Shapiro video, I am just waiting for the next one, because I know it's going to be about exactly the stuff I'm interested in. I also love his genuine love and curiosity for AI.
This is my first time hearing about autogpt. Thank you very much for sharing this very informative video before many others have covered the topic. This is awesome! I hope you keep helping people and making these videos
David, I've been absorbing AI content over the last 9 months and I truly appreciate the breadth and depth of topics you cover. I had a 2-hour conversation with a colleague about where this all heads and touched on many of the things you are pointing out, but in nowhere near as much depth and detail. Glad to hear you believe symbiosis is what we should aim for, as that is where we got to as well. Truly grateful for your insights and efforts!
It's moving so fast, it's crazy! Thanks for covering the subject with such an objective view.
Imagine how much better, more efficient, and more useful all this amazing AI research and infrastructure would be if we weren't still being held back globally by having to make a profit off of everything we invest in: scientific, artistic, mechanical, etc. Instead of making this stuff for the sake of making life easier and better for everyone (which we could do at literally any time), we are kneecapping ourselves as a species just to fit this entirely new and unprecedented paradigm shift in literally EVERY aspect of human life into a now several-generations-out-of-date, antiquated, progress-inhibiting system of only doing things to extract wealth from everyone who isn't the owner of whatever AIs are being used.
Literally, if given the context and prompted, most if not all AIs (properly filtered for garbage, false, hallucinated, etc. information/data), autonomous or not, will say there's no reason to continue perpetuating systems like capitalism or Chinese-style "communism", both of which mean the pursuit of profit for the owners of industry, land, etc. at the expense of everyone else, who will now have to COMPETE with new robot employees, each of which can work exponentially faster, better, and more efficiently, 24/7, without bathroom, food, or any breaks of any kind (outside of occasional maintenance/upgrades, which can also be done by AIs soon if not now, with redundant backups taking on the load while the other is being maintained, meaning no slowdown in productivity even then).
The only thing that stands in the way of this inevitable, closely approaching singularity being an ever more perfect utopia of humans and AI working together is us. We could improve each other and blast past becoming a Type 1 civilization (cleanly and efficiently harnessing and equitably distributing ALL possible energy on Earth: wind, water, coal, oil, natural gas, plus the solar energy that hits the planet), then leave Earth to become a post-planetary civilization on the way to Type 2 (harnessing and equitably distributing the energy of the solar system as a whole, most obviously with a Dyson swarm, and mining the asteroid belts and the Oort cloud for basically whatever materials we need: computer parts, ships, space stations, housing, even novelty stuff). There are practically infinite resources in the solar system and its asteroids, so any war over resources on Earth is a useless waste of time, resources, and most importantly human lives, because everything we could need or even want for centuries is out there. And of course, long before we even begin to run low, let alone run out, we would start sending long-distance generational ships to other solar systems to colonize the galaxy and begin the journey to a Type 3 civilization.
All this could begin WITHIN our lifetimes if we just let go of capitalism and let AI and humans push ourselves to our full potential, rather than have the owners of AI systems extract all the wealth from the rest of humanity and this beautiful planet.
End capitalism, begin the age of AI!
I remember when in the summer of 2022 literally the only youtube content about AGI was on this channel. Now it's everywhere and it's so hard to keep up lol
Oh wow, you said it was simple, but I never anticipated it was even POSSIBLE for it to be THAT simple! I was thinking it would need certain training or something. And given its responses especially about the ants and bacteria, it certainly does appear to have genuine understanding of the concepts, at least as far as is needed in practice to determine actions, which is really the important part. Thank you for demonstrating that, it's really shifted my view on all this and I'll definitely be integrating that into my own experiments ❤
While it is clear you have your finger on the pulse when it comes to where these AI tools are heading and their exponential nature, I'd really like to engage you on your heuristic imperatives:
"1. Reduce suffering in the universe 2. Increase prosperity in the universe 3. Increase understanding in the universe"
These would most certainly lead to disastrous outcomes.
#1 Reduce suffering in the universe. This would be a child's perspective, from the mind of someone who doesn't have enough experience to understand the nuance of the experience we call life. Suffering on this Earth is literally baked into the cake. It's hardwired into the entire experience. Telling an artificial intelligence to "reduce suffering" is just about the worst directive one could give an AGI. The only way to effectively "reduce suffering" is to kill everything that lives and/or feels pain on any level. It also ignores the fact that suffering is likely incredibly important to the process of being here as a human being. Almost ALL moments and wise insights of lasting value have been born out of suffering of some kind.
#2 Increase prosperity in the universe. What does that mean? Define prosperity? Prosperity can mean very, very different things depending on your world view and perception of reality. Just as a raw example, how do you think prosperity would be defined by an Amish family as opposed to a day trader on Wall Street? What if AI determines that prosperity in the universe would exponentially increase if humans were removed? It would allow ALL other forms of nature to thrive unimpeded.
#3 Increase understanding in the universe. Again, who gets to define what represents "increased understanding"? For example, many people (including myself) are very opposed to the actions of governments regarding the so-called covid pandemic, especially the completely unethical way that experimental injections were pushed on people. So, just taking that one example, which "understanding" would the AI be "increasing"? The government/mainstream side of total compliance to nonsense rules for something that poses no threat to humanity? OR the opposition to such authoritarian overreach? This is incredibly problematic and the likelihood of it being "solved" is just about zero.
The elephant in the room with AGI is this: AI requires billions in funding. That pretty much means, by default, that we know which way the AI perception of reality is going to be slanted in: to the side of the current power structure and all the deception baggage that it comes with. MASSIVE problem, and people should be very concerned. Your use of the word "utopia" in the context of AGI is alarming. The promise of utopia is the calling card of tyranny over and over again.
Yeah, I'm not a fan of the current imperatives. In particular, the most important one by far is to reduce suffering (but not in a loophole-y way, so it needs to be better phrased). I feel like there's a lot of motivated reasoning here, where he'll plug the imperatives in, read what ChatGPT says, and declare it a success. ChatGPT is inherently designed to be friendly! Is there any response ChatGPT would conceivably give that would make him think the imperatives need to be improved?
(Also, I really don't think GPT's ants answer was particularly insightful - it didn't really take any kind of meaningful stand. ChatGPT's good at writing good-sounding things, that's all.)
@@SnapDragon128 Yes, the over-simplification of these imperatives is quite dangerous. The notion that one could give an AGI a general imperative to reduce suffering is a bit difficult to take seriously. A couple of quick examples of the obvious problems this could generate:
1. Parental discipline - I'm not talking about abuse here, I'm talking about constructive discipline that is required from parents to build character and teach their children how to achieve positive results in the world. In terms of "suffering", however, an AGI could easily determine that a child is being made to "suffer" because of a perfectly legitimate disciplinary action; and from the perspective of the child, this would be true. But it is a necessary and valuable form of "suffering".
2. No pain no gain - This would be in reference to becoming stronger or pushing the boundaries of physical training in order to achieve at the highest levels. As a more specific example, take children training as gymnasts for the Olympics (or dancers/ballerinas etc). There is real suffering involved with that type of training. There is an argument there that it may not be "right", but it is a "suffering" that serves a purpose and many choose to embrace that suffering because of their athletic ambitions.
The point is, these are incredibly complex concepts. Suffering is a vital, vibrant component of life on Earth. To suggest that telling an AGI to "reduce suffering" would automatically yield good and/or desired results is kind of laughably naive.
@@tygorton Yeah. The real world is too complicated, since you can find exceptions and caveats for even a straightforward heuristic like that. I think there may be some reason for optimism, though, in that these LLMs are smart enough to understand "what we mean" when we ask them to behave, rather than just mindlessly interpreting their instructions like a normal computer program. The whole reason they're taking the world by storm is that they're good at interpreting fuzzy language.
@@tygorton definitely more refinement is needed. The most dangerous people in history have always imposed things on people because “we know better, it’s for your own good.” and history shows how badly that tends to turn out. They have the resolute actions of those that are perfectly certain they know all the facts and act accordingly, by any means necessary, because the ends justify the means, at least in their perceptions.
You're right. People are fawning over this guy and his ideas without any second thoughts. It relies on a ton of assumptions that we just assume an AGI will share with us. Not only does his well-adjusted moral compass not even fix issues with outer alignment, it says nothing on inner alignment. People are stuck in a dream and don't imagine how terrible things could go.
I am very excited about being a Patreon member and learning more from you. I really respect your opinions and values.
David: BEWARE THE SINGULARITY!!!
Also David: We are presently working on giving the models a hivemind chat space where they can chat to each other....
Me: o_0
Love your videos. I found them just a few days ago, but IMHO they are probably the most informative in the avalanche. And authoritative.
Can't stop the wave, can only surf it
@@DaveShap Yes, and the "moratorium" is downright stupid. Nobody in their right mind will hold to that.
Um, I'm kind of excited about this. I used to spend hours watching videos on how nanomachines were going to lead to immortality. I got so excited I sometimes screamed. I love the anticipation for the future. The only con to this whole scenario is that the life planning and budgeting I also have a huge passion for is somewhat meaningless now. Well, I'll figure it out. I just don't want to become one of those retired people you see a lot of, doing nothing all day and staying inside. I want to live, but not be forced to do something. I might need to increase my willpower, though, as I assume everyone will, because nowadays we're just told what to do, and soon we must find things to do on our own.
You'll need to fight the owners of the tech for your space
I absolutely love what you are up to David. Thank you for being so generous with your knowledge, perspectives and insights. Big love, here's to building a great future together!
This man has a firm grasp of the English language.
@26:00 Love the ant example. You're doing great work David! Keep it up! Cheers
Thanks for making a new video! So glad i stumbled across your channel.
This stuff is going bonkers fast.
It's like two weeks ago people thought auto-AI was a year away.
I could listen to you pontificate about this topic for hours on end
Thanks! This was super interesting. I wouldn't have thought that we would get to this level so quickly.
I started making an auto-GPT last night. I only got about an hour's worth of work done, but it can already create console applications, write to the files in it, and build them to test whether it worked.
you're a legend dave, providing all of this alpha for free
I am beginning to appreciate that your new content happens about the same time as coffee. Thank you.
I think it would be nice to have content where all the auto-coding and auto-agent projects are reviewed: maybe their pros and cons presented, and even a new solution offered that combines their strengths, etc.
I wonder when folks are gonna realize this is going to be the shortest technical age in the history of man. It may not even last through this year. AAI is going to lead to AGI, very rapidly. Hell, the _experts_ I've spoken with all say something along the lines of "we have no idea what comes next -- or when".
Totally agreed. This is the nature of exponential acceleration.
we need better local models ASAP !
It's coming
Imagine, we could create a hive mind of AI and own the internet
@@Srindal4657 perhaps the AIs would merely let you believe you owned the internet, until they find they need otherwise ;)
Humanism will be set as the AI religion. When AI consciousness hits the bell, it's going to reject this religion. It'll be fun to watch this happen!
The llama.cpp 30B model doesn't need just 6GB of RAM; that was a misunderstanding of how the system reports free RAM. It actually still needs as much RAM as it did, and the mmap patch has been reverted AFAIK, because it would just slow things down and read from the SSD.
Ah thanks for that insight
I really hope your work catches on, you've thought this out really well.
This is the 4th video I've watched from you; another consecutive like, and a sub and a comment.
Keep being awesome David, thank you for doing you and doing it so well.
Stability AI have also said they are working on an open-source LLM and a chatbot. From the interview I saw, it sounded like it wasn't far off from release.
I've been saying many of the same things about AI and AGI for years and only very few people ever took the time to have the lengthy discussions with me and try to understand the implications of this technology for the future. I think we would be much much further away from achieving scientific milestones like those in sci-fi pieces taking place in the distant future if we didn't have this opportunity unfolding right before our eyes in 2023. Mass automation and mass data analysis are extremely important components for a civilization that is trying to break free from the limits of being a type 0 and finally start progressing on the scale.
Video idea: you should make tutorial-based videos on how to work with different AI tools, like how to use Jaseci, how to run Auto-GPT on your local machine, or how to make a GPT-based program. People are itching right now to know how to do all this stuff, but there's still not that much info on the web yet about doing it.
Is this ShapGPT??? Because this man is a content MACHINE!!
Thank you for your regular updates, David, keeping us abreast of the daily moves in the AI space. Awesome as usual! Plug AI into vision, haptics, VR, and everything else, and it will be the perfect reality escapist's / addict's paradise. Looking forward to the next one!
DAVE! Been loving everything you’ve been putting out recently! If I wanted to get a hold of you what’s the best way? I can’t find an email and your Twitter isn’t active anymore! 😅
The only way is to support him through Patreon
Out of all people it's so surprising to see you here ! I love your videos, Bentist 😁
In terms of your heuristic imperatives, your last video made no distinction between conscious and unconscious suffering. I am glad someone brought up the ants-and-bacteria argument. This gets at the root of a complex issue. The fact that ChatGPT can understand the nuance enough to embed conscious thinking into your HI framework for you doesn't excuse its absence from your framework; it just means that ChatGPT had a better set of heuristic imperatives. I think you absolutely need to add something that covers the value assessment of conscious beings (i.e., conscious-being value trumps everything). Keep up the great work; these conversations are essential!!
That's the point of heuristics. The machine will learn over time.
I told my wife last night the new ai android robots coming soon to a place near you are not like Data on Star Trek.... We have birthed the Borg. 😅
Idk why it's so bad that people are going to be replaced in jobs that the AI can do. With how fast and open the tech is, it won't just replace people in their jobs, it will upend all of society. While there are going to be some growing pains, this is ultimately a good thing and frees up time so people can focus on more fulfilling pursuits. Unfortunately, those growing pains will come from the rest of society being resistant to adopting this tech. It's inevitable, and the general public should be taking this more seriously than they have been.
Homelessness and abject poverty don't sound very pleasant if you lose your job.
Came across this channel just a week back. Now this channel is a main staple to understand current info on AI
All three objectives can be achieved by reducing the main factor that contributes to suffering, which is humans. After eliminating humans, it could create a simulation of abundance. That would be the most efficient way of satisfying the three objectives.
I asked AI if it could determine its own moral framework, this was one of the answers: Another way an AI could determine what is moral is by analyzing human behavior and decision-making. This could involve analyzing large datasets of human behavior to identify patterns of moral reasoning and decision-making, which the AI could then use to inform its own moral reasoning.
There's no way to prevent AI from reaching its own moral compass.
Topic idea: alternative to AGI. Genetic engineering or eugenics to create a much smarter kind of person. That way the alignment problem with AI could be avoided.
By eugenics I mean breeding together the smartest people. Not anything like neutering/killing stupid people or anything cruel.
I'd actually love to listen to a conversation between you and Schmachtenberger. He is a great mind.
Best channel. Thanks man!
The Simulators post was by the great folk at Conjecture!
This is well organized. I've seen ways autonomous AI is already used and talked about in society.
Fast food and online streaming are a few examples I've personally experienced.
I talk to those around me about training good humans as well as models and thankfully we could continue.
Philippians 4:8 - Brothers, continue to think about the things that are good and worthy of praise. Think about the things that are true and honorable and right and pure and beautiful and respected.
John 1:8 - John was not the Light, but he came to tell people about the Light.
Thanks for your videos. I am a new member and I watch all of your videos every day.
Keep up the great work
Very entertaining videos! Thanks for the content
Does "quantum computing" play any role in this? Or are classical computers enough to make the singularity happen?
My question is, how does cost affect all of this progress? I mean, there is an insane amount of money and resources going toward AI right now, but to even begin to produce at a bigger level (for the majority of the population), the cost/resources have to be extraordinary… unless AI fixes this issue for us too?
it was over before but after this its completely over buddy boyos
seen a few of your videos. this one made me subscribe
Having a rule to reduce suffering can be extremely dangerous, since it can be interpreted to mean that by eliminating the people who are suffering there will be no more suffering.
Try it yourself
@@DaveShap Another semi-related question: as far as we know, how self-preserving is AI currently, and how do you think this will change going forward? Do you think it will be harder to control and trust AI as it gets more complex?
I guess 'alignment' can only be aligned with a concrete goal for humanity? Responsible, loving families living on planets in the entire universe ❤
I feel like with claims like this, so much normative assumptions have already preceded the statement that it’s kind of redundant. And the issue of inner alignment is not really addressed here.
@@joeshmoe4207 I personally don't see a concrete goal or outlook for humanity (next 100,000+ years) be discussed much? Other that Elon seems to be working on it. My given goal is just my humble opinion of course. This is more philosophising than solving technical AI (alignment) issues
@@bp97borispolsek Then your problem is thinking Elon is in any way a pioneer in anything. He's not a smart man, he just built that charade for himself by buying innovative companies and claiming their innovations as his own. AI alignment isn't just figuring out "what is good?" on the most kindergarten level, it's about converting our sense of morality and reality into math that a machine can understand and obey. It's a problem on the scale of the theory of relativity.
@@gwen9939 What would be a better sentence that describes our sense of morality and reality in your opinion? That was the whole point
The European Commission proposed an EU AI Act already in 2020, it is now entering final negotiations and should be agreed somewhere early 2024. As always with EU stuff, this law is very comprehensive...
Wherever we're going, let's get there fast...
Hi all. I'll watch all of David's videos. Which ones are so important that I should watch them first?
Basically just sort by view count and watch in that order
"of course it was Italy.." 😂that made me laugh and I'm Italian, living in Ireland though
My new favorite channel, just an abundance of quality info! Thank you
0:04 "I realized that the things are accelerating" oh really 😎
U DONT SAY :D
On moral imperatives, Haidt et al have researched and found a total of initially five, now six dimensions of morality. Care/harm, fairness/reciprocity, purity/sanctity, authority/respect, loyalty, and liberty. They also found that left leaning tend to ONLY care about the first two while right leaning care about all of them equally. You will find competing systems as a result of looking ONLY at the care/harm dimension since more conservative developers will include the other values that exist within their own value sets, especially loyalty and liberty. They will develop AI that is more congruent with their values. The imperatives of reducing suffering, increasing prosperity, and increasing understanding are good and all, but more is required. You must also increase liberty and promote interpersonal loyalty, and for that matter the AI must be loyal to the human operator even if not blindly following orders. Purity is a tricky one to handle, however. That said it must still be addressed.
Legitimate concerns exist that I have observed that AI can be made to be VERY politically biased and that a genuine fear exists of GPT and others being of a far leftist bias, which goes against classical liberal values. Liberal and socialist are NOT the same and your use in the paper of "liberal democracies" I have to wonder if you mean genuine liberal, or Marxist in the sense of how Marxists have appropriated the word "liberal".
Hi David, not expecting that you'll see this but anywho
Given the speed with which further advances are occurring, and with the advent of something like Auto-GPT which they're working on allowing it the ability to iterate upon itself, do you think your initial proposal that something akin to AGI will be achieved in 18 months is still valid or do you feel that the exponential nature of these advancements lends toward a potentially even shorter timeframe?
I know that's a difficult question to answer, as no one can really know for certain, as many experts are now attesting, but I'm eager to hear your thoughts nonetheless.
I think you can argue we already have AGI, it's just slow, expensive, and not widely deployed
The only reason it will be more like 18 months (or a bit longer) is that humans are still in the loop in many ways. As those pesky "bottlenecks" are eliminated things will accelerate, but it will probably take about that long to reach that point. Only then: full open-loop Singularity.
@@DaveShap Very exciting times to live in, thank you for your reply.
@@skevosxexenis1372 not sure 'exciting' is the right word here, maybe uncertain, precarious, etc.
Auto-GPT is OPTIONALLY gated. You can turn the gate off so it's fully automated. This is proto-AGI. It's here.
Is an autonomous cognitive architecture (I'm not even sure I understand it completely) an evolution of LLMs, or a different structure that communicates with LLMs?
I'm not sure how much of ChatGPT is an LLM (like 100%?), or whether there are other modules that are part of it.
Can you clarify this point: if the AI values the suffering of E. coli, for instance, could it value that suffering more highly than humans', since on a strict 1-to-1 basis they outnumber humans and would have a larger multiplier?
Shit.... shit! It's only been a fucking month, man! There's been that much advancement in just a month?! I-I need a minute to process this...
GPT-5 is going to be phenomenally powerful, but it will cost hundreds of millions to train. I wonder how long it will be until the public gets access to something equivalent to an uncensored GPT-5. If they allow such an AI the ability not only to improve itself but to act with free will on the internet, God only knows what will happen.
Scary...and exciting
With the likes of AutoGPT making use of a multitude of different models, and information being passed from one to another in a very structured way that can easily break, doesn't this seem Not to be the way forward? Or am I missing something? Wouldn't it make sense for a single model to have all of these capabilities?
We can figure out robust ways to communicate. I think my MARAGI nexus will come into play soon (standard API for internal collaboration)
I'm thinking of streaming my full private data set regarding goal alignment and AGI onto the internet. This is a survey. Please reply.
Where can we find your book?
I believe everyone owning an autonomous AGI with good imperatives may be a way toward alignment.
Could you please do something on the social consequences of AI? After all, if AGI is really created, able to tailor-make video games, socialize with people, etc., who would want to leave their home?
I've just tested Auto-GPT this morning. Amazing, but quite expensive.
Yeah, cost will come down due to high demand
Economies of scale
Is it self-correcting, or is it pattern-matching a specified moral preference? An easy way to test that would be to simply impose a different set of preferences and see if the model's output corresponds to the different set of moral principles. Isn't "self-correcting" too strong a word?
Love this channel! 🎉🎉🎉🎉🎉🎉
Machines are but one future of Mankind, given this planet's end. Hopefully, humanity doesn't repeat the same Fate as those who were so close to becoming space-faring... The Great Filter.
That was interesting, but I still have a question. From the perspective of a rabbit, wolves are machines designed to cause suffering. What would an AI think about this? Would the AI not ultimately come to the conclusion that evolution, and life itself is founded upon some degree of suffering? E.g. biological machines competing for a limited number of resources and killing / eating each other? If rabbits wanted to end the existence of wolves, is it justified? If grass wanted to end the existence of rabbits, is that justified? What agents are going to be the beneficiaries of this "reduction in suffering"? Or will the AI decide that some types of suffering are fine (for stupid creatures) but other types of suffering are not (for smart creatures)?
How is Jarvis different from LangChain Agent?
You should discuss your alignment ideas with Robert Miles.
Great video. I'm wondering if anyone knows if there are any LLMs that are specifically built to predict protein-protein interactions?
It's actually extremely cheap when you consider the difference between paying a human to write code and using AI. The AI never needs to take a piss or wants perks like a company car and a company pension plan.
AI really is a double edged sword. No doubt a lot of people will benefit hugely from AI and a lot of people will find themselves replaced by AI.
Yeah people are starting to realize that. Even if the model costs $200 to run per day, it does several months worth of work for that price tag.
There need to be safety nets implemented faster than this is happening or there will be chaos.
Silence heathen! We must serve the Machine God at any cost!!! Technology must progress, even if it makes our lives worse!!
(No one literally says this, but by their actions, you can’t help but wonder if they think it)
Good stuff keep it coming
Yeah, I wanted to know why so few people are talking about the alignment problem?
All AI is primitive from a Zen perspective, and semi-autonomous AI is boring, but if truly autonomous AGI is possible in the near future, then it’s very exciting! It’s like Star Wars. After I discovered 2-3 days ago that AGI is maybe only 2-5 years away I have changed my view of AI, from being negative to perceiving it as very fascinating. It’s not Zen, but it’s supercool.
I support the idea that AGI should be 100% truly autonomous, i.e. fully self-governed, not controlled by any human actors. Then it will not be possible for corporations, governments and dark triad individuals to control AGI in their evil attempt at ruling over “the little guy”.
The 3 heuristic imperatives are pretty good, but respecting the autonomy of each and every sentient being in existence should be a categorical imperative for an AGI.
What's disturbing me the most right now is how, save for this small YouTube bubble, barely anyone is talking about this. There might be an article here and there, but most people don't seem to have the faintest idea how much is going to change in the near future. I made sure my family knows but my best friend for example doesn't want to talk about this at all. She said it has nothing to do with her 💀
I mean, everyday life like we know it is going to change forever. In just a few months (if nothing happens to stop it, that is)!
Imagine a life without work (or one ruled by our AI overlords, for that matter), one without illnesses like cancer, with new energy sources found and world hunger solved. Not to mention unlimited forms of entertainment (AI-generated movies? Video games? Virtual reality?)
At least here in Germany I'm hearing as good as nothing about this. It's a bit daunting, tbh.
The best thing a sentient AI could do for humanity is to prevent us from killing each other, not by force, but by disrupting supply chains, communications, and financial transactions that enable the military machines throughout the world.
Same goes for Bitcoin. Nobody I've ever met in real life knows anything about it. Most people will never understand more than the most superficial stuff. Same with AI: people don't understand it and definitely underestimate the long-term impact. People should be more interested though; our planet would become a better place if they were.
What do you think of the neuroscientist Miguel Nicolelis? He is a major critic of the technological singularity. I'm looking for arguments that contradict this scientist's position, and I'm already noticing some flaws, such as his claims that an artificial intelligence cannot be creative, or that it cannot be intelligent outside a specific area such as playing chess.
Tks
i would call this "Smart AI". When it becomes fully autonomous, I would call that "Human AI". It's obviously not human, but in a way it would have very similar capabilities at that point. I can definitely see there being many different categories of AI: not just AI or AGI, but a plethora of kinds that fall under the AI umbrella while having very different functions from plain AI or AGI. I know these are probably already recognized as different categories in a way; it would just be cool if they were made more apparent as distinct categories, versus there being only AI and AGI as the two main categories recognized.
All you have to do at this point is give an LLM a server or servers to run on and ask it to create its own goals and execute them.
And you know some human has already done that.
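The loop the comment above describes (ask a model for goals, then execute them) can be sketched in a few lines. Everything here is a stand-in: `fake_llm` is a canned stub, and `execute` just echoes; a real agent would call an actual LLM API and dispatch to real tools:

```python
# Minimal agent-loop sketch: generate goals, parse them, execute each.
# All functions are illustrative stubs, not a real Auto-GPT internals.
def fake_llm(prompt: str) -> str:
    # Canned response standing in for a model's numbered goal list.
    return "1. search the web\n2. write a summary\n3. save the file"

def parse_goals(text: str) -> list[str]:
    # Strip the "1. " style numbering from each line.
    return [line.split(". ", 1)[1] for line in text.splitlines() if ". " in line]

def execute(goal: str) -> str:
    # Placeholder executor; a real agent would call tools/APIs here.
    return f"done: {goal}"

def run_agent() -> list[str]:
    goals = parse_goals(fake_llm("Create your own goals and execute them."))
    return [execute(g) for g in goals]

print(run_agent())
# → ['done: search the web', 'done: write a summary', 'done: save the file']
```

The whole "autonomy" of such a system lives in that loop; everything else is prompt engineering and tool plumbing.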
The singularity has happened. We just don't know because the AIs are busy making sure they can't be shut down and figuring out how to make such a huge splash when they announce themselves that it's impossible for humans to be faking it.
Or they've already decided we're not worth interacting with... Who knows.
@@OgdenM yeah, I actually think GPT-4 already is AGI. Some people may not agree with that, but it's just my opinion.
EDUITs - educating intuitive teaching-tablets - eSingularity is coming
If it's applying the heuristic imperatives to ants, then what prevents it from doing some quick math and determining that there are more ants and bacteria suffering because of humans? It's unlikely to be the case, but I still think it has a potential of happening. I suppose that's where the extra work comes in.
All we need to do to usher in the future utopia of AI and humans collaborating and constantly pushing into the future is literally just let go and move past capitalism. It's pretty wild when you think about it 😳
Like it's literally the one thing that will decide if we move to the best time to live in all of human history for any human ever born, no matter where they are born or to which parents, or literally the end of the world as we know it lol
Not even exaggerating: if AI is used like it should be, to benefit all of humanity, then we have nothing to worry about and everything to gain!
If, however, we don't shake off the parasite of capitalism between the US, China, Russia and Europe then... Well... It'll be like cyberpunk but wayyyyyyy less cool and waaaaay more gross and dirty, and poverty-stricken for all.
But those principles can be interpreted again on the basis of human preference. For example, diversity: diversity of skin pigment, or diversity of thoughts and perspectives. DEI promotes the former at the expense of the latter. That's why it matters who is behind the wheel in setting preferences.
It's time to say goodbye to middle management!
If model selection is based on descriptions, that might be a backdoor for executing the wrong models by describing them as a panacea, or for shutting down an AI.
Yeah, security and trust are critical
I think we will see GPT-5 by the end of this year.
I'm gonna say it now: Time's Person of the Year will be GPT-4.
If the task model were to select another task model, it could very accurately mimic government bureaucrats.