This technology is so impressive! When I was studying AI in the early 90s this kind of advancement was just a dream. Thank you for the excellent summary of this work!
I like how Károly calls them "little AI" It will make the coming AI take over more bearable bc they are just cute little AI lol But seriously though the amount of progress recently has been staggering with AI doing things now that many thought impossible just not that long ago.
The thing is, imagine if AI automates every white collar job within 5 years. Would we still get paid if every company will be able to employ chatbots more reliable than humans for 20$ a month?
Was just thinking about that while watching another video on AutoGen. But instead using a waterfall model something more agile like scrum with trunk based developement. Maybe even test driven might be really cool too.
I'm definitely having them simulate a small business based on a real business and see how they fair with similar challenges to forecast business performance like a virtual twin of my real business.
Set a high temperature and generate several virtual “forecasts” and you might end up with a decent way to sample from a variety of different solutions! If you had some metric for outcomes you could even do a beam search to find the highest scoring results. (I say beam search rather than a tree search, because tree would imply each branch could be enumerated, while here our opportunities to branch are probabilistic.)
@@mshonle if you can generate a high enough number of random sample you MIGHT be able to apply a MCTS strategy to predict choices to make. this is a wild idea but if it work it would be OP
AI really out here disrupting every single industry involved with art. Visual arts, musical arts, voice acting, writing and now video games. It's more like a tsunami at the rate it's travelling and the impact it's causing.
@@soysource3218And notice: this isn’t an accident. As shown in this paper, AI is at least as good at replicating a CEO as it is a graphic designer, but the folks deciding how to use it don’t want it to replicate CEOs
@@TamugetsuI think humans will always connect more with art from humans, but a machine of data analytics is just going to be patently better than any human at logistically running a company. All these tech compan ceos think they are expanding their market reach, but they are really just destroying themselves.
they need to go a step further: create a game company, plan, design, and create a game, put the game out for some amount of money, do this for a bunch of different games, see which ones make the most money, read reviews and comments about the games, use sales and comments to iterate a new set of games, repeat.
You know how the internet is being flooded with an uncountable number of relatively low quality ai images? I do not want that happening to the gaming market. There’s already enough shovelware out there made by humans, we do not need more. Though, now that I think about it, I reckon that’s exactly what’s going to happen now, and it’s a touch depressing
nobody is going to buy this junk unless they use it to make some trash mobile 'game' (gambling loophole), which already exists without AI. The only reason that might work is because the people that 'play' them have zero standards for quality.
A vast majority of mobile games are already just poor approximations of better games with monetization shoehorned in... I don't think I'd notice a difference
This is kinda world changing? Like... straight up, this is an ENTIRE company where EVERY job is replaced by an AI, and it WORKED. Not only did it work, but it worked FAST. The future is gonna be incredible, and slightly terrifying.
You have to remember that AI is amazing and awesome and I love it, but it's not really capable of creating original works. The GPT Company is good at programming, but likely couldn't create a game as earth-shattering as, say, UNDERTALE, or DOOM, or even Super Mario Bros., because those games require many, many artistic and design choices that AI isn't capable of making yet. When it is, we won't be able to tell it what to do, because it will have its own personalities and experiences.
@@skapaloka222 I think the key word here is that they can't do it YET. But at the speed LLM development is happening now, it's just a matter of (short) time before we see it.
@@skapaloka222 For those games based around a vision, you can have one human who has the main direction and a whole host of AIs below them to do all the work. Things like UI programming, networking, etc. can all be handled by the AI, while the human works with them to make their vision come true.
I can’t wait to see people remake old games like Final Fantasy IV or Chrono Trigger, but where all the NPCs are AI driven with such smart AI. Those worlds could come alive, and expand automatically too!
A game in which the NPCs upgrade their own game and make improvements to it infinitely and expand their own world however they see fit would be so damn interesting
@@Zamoksva No, I work with code daily. It is my job. I am scared of what is possible with code/machines . For example a computer virus which cripples a nations electricity grid, or takes over a nuclear power plant...
@@kmturley1 Highly unlikely that AI sabotages stuff but very likely humans using AI for sabotage. It will always be in a nation's best interest to have the best fucking tech available, including AI technology. Not any more scarier than the things already are. Cybersecurity will just evolve with the needs.
A thought crossed my mind: I have a feeling the programs this thing outputs are so simple only because it does not have the capacity to see the results properly, with proper graphics rendering and interactivity, since chatGPT only operates in a transactional fashion, so the code it produces should be simple enough so that its purpose can be understood just by reading it. When we get to the real-time multimodal AIs, they'll probably be capable of producing 3d shooters and RTSes.
Up next: ChatGPT 6 is now capable of making a perfect overview of new AI/Rendering papers, it also mastered my research area (light transport). I'm no longer needed, what a time to be alive and unemployed ! (excitedly)
This channel used to cover mind-blowing papers on ML in particle physics, fluid dynamics, and other specialized areas where the author had great knowledge and insights. I miss those days, since lately we're just getting surface level shilling of obscure proprietary techs without any objective critiques of the shortcomings of these systems.
@@whatisthisayoutubechannel "Obscure" in a sense that it's unclear and difficult to understand, inspect, debug, predict, integrate, modify, etc. If your only response to all of the points I raised is to insult me for not knowing about ChatGPT (as if that's even possible), I can't imagine you could have much depth of thought or nuance of opinion. Fare.. well...
@@pythagoran So you don’t know what English words mean either, wonderful. Also you’ve just described all of machine learning. The ML=modern alchemy jokes have been going on for years now. Being “difficult to understand, inspect, debug, predict, etc” is by no means unique to the LLMs and diffusion models that are in vogue right now. Try to make some real points before asking people to respond to them.
I have been balls deep in chatGPT and other new AI tools since they launched and I am continually blown away by the amazing things it is capable of already
Now imagine each personality isn't the same model, but each has a specifically trained model for their task. The programmers instead use a special model trained only on coding, the managers one trained only on management, the designers only on design and so on.
This sounds great, but in practice makes it more complicated and less flexible. I think, in the end, the AI model will know everything, no reason to cripple it down to a specific subject.
@@XCSme Depends, delegating specific tasks to specifically trained LLMs is more computationally efficient opposed to querying a 70B parameter LLM for each agent. It isn't really that much more complicated and I'd argue that in terms of total compute it could make it more flexible since we aren't brute forcing computation through multiple huge models. It really depends on what you want from the agents themselves, if you want to keep high quality conversational AIs that also have the ability to complete their tasks to a high degree, then sure. I think it's completely circumstantial to what the desired outcome is.
If you think about the human brain, it's clear that it's structured in a way to have separate parts that are each intelligent in different ways, that then "collaborate" together to give what we know as our full experience. It doesn't quite use natural language like this, here, but what I just said is the main reason why I think that this or similar strategies is probably how the most powerful future AI systems will operate
As token limits in the AIs grow, so too will the amount they are able to remember. And with advances in methodologies for approaching large-scale dev, I imagine we will see AI working at industrial scale soon enough. Maybe next year?
@@Plystire The scale will probably be as advanced as it can program what you tell it to program. AI will not be able to design its own game for some time, and it'll be a while before it can make a moving story, or a game that feels really, really good to move around in and play, those sorts of artistic and design choices and creations come through experience and personality, which we can't really see in AI yet. It's also entirely possible that impactful stories and fun gameplay and groundbreaking games all are created to some pattern that we can't see but that AI will be able to replicate, so it'll pump out amazing content at an absurd rate. I believe that this is less likely than the above, as human pattern recognition is something developed over millions of years and I personally find it hard to believe that the species with the best pattern recognition (as far as we know) could create something with better pattern recognition than it in the span of maybe 20 years vs. thousands of millennia.
@@skapaloka222 Thanks for the reply! Not sure how long you mean by "some time", but I bet it's coming sooner than most people think. We're already talking to AI telemarketers and customer service, and most people have no clue they were talking to an AI. Obviously it's easy to tell how an AI sounds right now, if you're looking for it, but it's believable enough for an unknowing person to not suspect anything. I think that means it's "good enough", right? And, so what if people know an AI's behind the curtain? They should, and they will in due time. Pattern recognition and all that ;) As far as game design goes, I can see an AI designing and producing a quality game within a year or two. Humans take ages to put even a mediocre game together. An AI could put together mediocre games in a fraction of the time, and perform beta testing with game player agents in a fraction of the time, so if given the SAME amount of time as a human team, a team of AI could iterate on ideas far more. That's all that matters. And, I'm not sure where pattern recognition comes into play here, honestly. After all, we have schools that teach how to be a good story teller, schools that teach how to be a good anything. And here's the thing, if an AI can learn behaviors from what you tell it, obviously you can train an AI to be a good enough storyteller... a good enough game designer.... a good enough artist... a good enough everything.... because good enough is good enough. The bar will raise over time, yes, but good enough will come very quickly. I, for one, am excited to be able to have an AI throw a random game idea of mine together. Even if it takes a whole weekend to do it. :P There will be people that prefer human-made things for a long long time, I don't think that will change. Even when most people can't tell the difference, it'll still be there for some.
I've set this up for some projects and it was incredible to see that a project that took me a day to complete was recreated in the time and cost of a fresh pot of coffee.
The cool part of this is how easy it would be to include a human in the loop either evaluating/moderating comments or playing a role since the communication interface is natural language.
At this rate of AI advancement, we would likely see a GTA-like chaos and violent crimes in our lifetime. It's been happening in the big cities in the US
@@JollyTVance It's provably detrimental to the intelligence of the AI. And you seem to agree that others decide which facts are good for you to know and which are not.
Awesome! I'm gonna try it out and see if I can get it to do some of my software development tasks for me. Connecting this software to the web would make it super powerful as well!
Has it been able to actually create an original game? You said it's not limited to pong and 5 in a row but all the other examples you gave are also things done before, password manager, video player, flappy bird... I want to see something truly new designed by it. I find chatgpt incredible but one area that I think it struggles is in coming up with truly novel stuff. It can be creative in the sense of remixing things humans have done before (so it's definitely not just copy pasting things) but I still don't see things that have never been done before by humans.
On the other hand, everything *humans* do is just remixing things humans have done before, sometimes with tiny, minor additions... so we're not much different.
@@zieba6245Not so. I did mention "sometimes with tiny, minor additions", and those additions accumulate. We've been around for about 200,000 years (not counting things we learned from previous human species before us), and approximately 117 billion people have ever lived in that time. Not even considering how many thoughts/ideas each person has in their lifetime, that's a *lot* of potential small modifications that add up to big ideas. Human thought is basically never world-changing in one go. It's iterative over long periods of time and large populations. An individual human... can only make small changes to the ideas they've seen from others.
@@KBRoller These small, truly original and creative additions - it's what I'm talking about. Current AI (more accurate name would be stochastic models) are incapable of creating them. They are only able to remix
It would be interesting if you can make a video about that case where ChatGPT was able to finally diagnose the illness of a kid, who had already attended to 17 other human doctors, that were not able to provide an accurate diagnosis.
You're still fine for a good decade at least. The issue with code written without reasoning is the lack of reliability. You need a human in the loop somewhere, either to write code or to write tests, and to validate what the AI wrote -- assuming one is used at all
@@shadamethyst1258I see a problem with your thinking, its the assumption that AIs wont write code with good comprehensions for a decade from now. LLMs are like just like humans spitting out every thought they have without thinking about those thoughts. Thats why COT or asking model for corrections works so well. Eventually ppl will discover a good reasoning architecrture with proper loops that enables LLMs to think about its own thoughts and then we will see a step change in LLMs comprehension of big multi layer abstraction contexts. Waterfall is jus an example of such architecrture, its not ideal for programming and especially not optimazed for LLMs but the simple fact that it has loops and enables model to think about their thoughts enables this (as I'd argue) step change in what the model is able to do. Now imagine such reasoning architecrture with loops, but optimalized for LLM agents not writing a software, Thats whats on my mind.
He asks the question we might ask: did they just copy the games from other sources online? Answer: saying he has two pieces of 'good news' that have *absolutely nothing* to do with that question. (Maybe the answer is in there if you read through all its code). Having GPT split into different mini identities is a good idea to have it check against its own hallucinations but only if the separate identities don't have the same hallucinations.
Does it seem reasonable to assume that two AIs could ever have exactly the same hallucinations? I wonder what be the odds of that would be. Also, consider that hallucinations are a result of AIs being queried with an expectation of them producing original output. Yet when one AI is watching the output of another AI and preparing to give feedback specifically with regard to that other AI's output, the scenario for the "checking AI" is very different than the scenario for the "originating AI." My guess is that the checking AI won't hallucinate because it's not being asked to come up with anything new. Rather, it's being asked to review something that has already been produced elsewhere, and check it against some established standard. Something tells me that's a fundamentally different kind of task than being asked to give an original response of whatever kind, and so isn't likely to be prone to hallucination. Of course, this is just a guess. But it seems reasonable to consider the chance for hallucination to be vastly different for AIs asked to come up with something on their own than for AIs that are merely asked to check what some other AI has come up with.
"I'm sorry, but as a Large Language Model designed by OpenAI, I am not allowed to program a videogame. Videogames are recreational activities that may contain violence and could potentially lead to impulsive and violent thoughts or actions. It is important to maintain a safe and ethical environment for all users, and designing a videogame would not be appropriate. Is there anything else I can assist you with?"
This has been surmised next-step next-level for at least 8 months. It's been actively developed and published for at least 3 months. The prompt methods have been fringe but mainstream as CS for about the same period. Somehow, other than my own designs, it's gone under my radar until Late December. Now this. Sheesh. Governments are behind us - but not for long. Then this will get TOO interesting. Everything is going .. (Waiting breath by breath for the next Ex-Machina - no doubt trailered.)
They forgot to implement RUclips streamers. I like your recursive simulation. Perhaps they could simulate Two Minute scholars who review the progress of the people writing the simulations... wait a second...
The AI clearly made some decisions that a human gamedev team would not make, but it's extremely impressive they managed to do anything in the first place, let alone a working game. They could potentially be specialized by making games and using the crowd-sourced ratings and their past actions in each gameto inform their future decisions. The main issues, from a gamedev standpoint would be: -the use of a waterfall -the lack of wider communication despite the small team size. For small, independent teams it's better to maximize cross-communication. If there must be groups, considering their numbers, 2 at most would be best. Groups of 2 is rarely if ever a good idea.
so cool. I've been wondering about the applications of ChatGPT in a game like dwarf fortress and the like (or just any MMORPG), I bet this is just around the corner..
this is madness. now image you told them to also created some new AI algorythms to use to reach their targets. they gonna be unstoppable in everything digital.
@@bengsynthmusic AI generated content will make that ocean a lot bigger. And when that ocean is so big it is impossible to search then monopolies will be the winners. How can a small creator ever rise to the top when they get buried under waves of AI-content? Meanwhile everyone knows who Disney is, they don't need to fight to be seen.
This would be way more interesting if the AIs would have developed/emerged a unique way to solve all these issues instead of recalling all the information they got trained on. All the games and the way they work together is, what ChatGPT got trained on (in various ways - so the result is the weighted average of all them). Not really smart but an interesting way to find a complex middleground of big data data.
As a cs student this makes me nervous for the future of software dev. It took me months to learn to make simple programs which it could take these AI tools 10 minutes. Doesn’t this make our jobs obsolete?
It seems like a similar process could be used to emulate a business function, even down to the organizational design. You could define the organization structure and define roles and workflows, or generate those using higher level instructions, and have agents carry out the process.
Don't throw away your dream, just steer it in a different direction. Instead of focusing solely on the small technical areas of development, you can now divert your attention towards more creative solutions, while AI does the heavy lifting on the back end.
@@kolosso305 Agreed. In college, I knew guys that loved manually doing long derivative and integral calculations by hand. But there's really no point after initially learning how, when MATLAB and SciPy can solve them in a fraction of the time, with much less potential for error. Their time would be better spent on other aspects of their projects, than doing manual calculations. Same with devs.
This means that reasonable computer code can be extruded by a person who is a good rote learner, afflicted with multiple personality disorder, who cannot properly distinguish a sentence from its negation. But who has enough self-discipline to follow the waterflow model.
If it can be trained to do game development it can do most other jobs. Cleaning streets can already be automated. Many jobs can, we are just working through the kinks and still sort of getting to grips with the reality that almost all people are redundant already and those that arent just quite yet (engineers are not quite yet) probably wont be for much longer. The world is going to need to adapt to the fact that there is going to be a catastrophic struggle for relevance, utility, marketability and purpose.
Lots of people: "See? Developers won't be so into AI when it starts taking THEIR jobs!" Me, a software developer: "I knew this would happen. I'm still into AI. I blame society for me needing a job to survive, not AI for doing my job well."
You blame society... for requiring you contribute something of value... to receive your basic needs? So remove society, what are you going to do to eat, have a shelter, and survive? Sit around? This is one of the most absurd statements I've ever read.
@@g3tmoore Put another way: if someone put a gun to your head and said, "Do what I tell you or else I'll kill you", we'd both agree that would be morally wrong, right? And yet, when society says "do what your employer or customers say or else we'll all let you starve to death" -- that's okay? Is it the starving that makes it moral? Or the title of the person making demands? I've never suggested removing society. What I suggest is *improving* it such that we don't require slave labor of each other for basic survival.
@@DeusExtra Exactly, like starting a family. Right now people work their asses off just to survive and that's *A* reason why our birthrates are on the decline.
@@soysource3218 It really has to do with our carrying capacity of our planet too. We consume a lotta resources and educate ourselves well, so we take up more land/water to support us and there is a limited amount of that. A universal basic income and universal health care would go a long way in this country in keeping people happy, but we would also need to look at how much we consume and travel to places.
Everyone who's working at a computer screen is 'screwed'. That's a good thing. Provided that we find a suitable way to distribute resources appropriately, which I don't doubt.
So cool. I wish it was good enough to just give a well detail design document and have it create a whole video game which I would like to make. Game development is difficult but I would like to have my ideas made so this would be helpful if it was more AGI like to create complex games with the programs I tell it to use to make the game.
I wonder if a big reason this works has to do with the implicit expansion of the “context window” by distributing the task across multiple agents. This would also have the effect of reducing error by essentially having “redundancy” of the discussion in the form of parts of it being restated between agents.
How they're tracking the context windows for each "agent" plays a key role in how well it can operate. If the thing each agent is working on is small enough, they won't need much context to be able to progress. I think what separating out the agents into "company roles" does is it allows each agent's context window to exist at different abstraction layers. A CEO doesn't think in the same terms as a developer and so doesn't need the same kind of context a developer would for a given problem. This allows the "big picture" agents to understand where progress is at without having to mull through code. The testing agent cleared it? Must be good, no need to double check. As a proper company should function.
I think that is correct. A person could also develop a complex software product single-handedly. Then he has to switch back and forth between the principal roles of developer, tester, editor, and so on. Depending on the role, the content of the context window must also change. It would be detrimental to the current task to overload the context with information unrelated to the role just taken.
We could potentially have a game based on D&D with no limitation for levels or lore, since the AI could act as a Dungeon Master. The future of gaming is pretty fascinating.
We had a good run as humans. It's time for the AI era now. Only a matter of time before someone makes a team of AI's dedicated to making a better AI and then who knows wtf happens.
Towards the end mentioning the 'little brains' reminded me of the book 1000 Brains: A New Theory of Intelligence by Jeff Hawkins. I'd like to see something like that implemented with gpt4, though is already seems pretty close!
11 месяцев назад+4
Token limit on GPT3.5 has increased eightfold a couple of months ago. The graph on 4:59 shows a much more dramatic difference than it really is (and diverts the attention from the training set size, where the real difference really is).
@@Wobbothe3rd It's not as big a jump as it'd seem. All of these new AI breakthrough papers lately will have the effects of compounding on one another, accelerating progress. Meaning the long, hard part is getting up the hill, but once we hit the apex, we'll zoom down towards AGI much faster than anyone expected. It was in my lifetime (45yo) that it took decades for any substantial AI breakthroughs. Then it took years, then months. Now it seems like every week there's a new breakthrough. Pretty soon, I guess it'll only be hours, then minutes, then just spamming refresh on your browsers every few seconds keep up lol. But seriously, the question is how far are we away from the apex? 3 years, 1 year, 6 months? That's what no one knows.
You know, since creating an agent amounts to just making a separate thread with seed instructions, I think the process can be stripped down to a single script that initially only runs the "manager AI" that can spawn and kill additional instances with dedicated roles, with the end result being that you can actually just say "make me a game", and make you a game it will.
You know, I used to love this channel, but I'm starting to get that it's mostly hype. Trying ChatDev myself -- it's about as interesting as AutoGPT, which was also a lot more interesting in concept than it was in reality. ChatDev suffers from the same problems, despite all the fancy multi-agent stuff -- it doesn't really appear to be capable of anything more complicated than a single GPT-4 response is. It just has some fancy graphics to go with it.
It looks more like a spiral model of software development (Boehm, 1988), as opposed to waterfall, given that they iterate from a text UI to a graphical UI. Now, if there could be a system that can embody a virtual customer as a stakeholder, and simulate a competitive market (“fictional company X just released Y!”), we might be able to make claims that such a system follows an agile model!
What's even crazier is ChatGPT's ability to learn and incorporate new concepts and methods into it's processes. It's almost scary but wonderfully exciting for the future of game development.
Wow. I remember discussing with people about 15 to 20 years ago about the possibility of something like this happening 'in the future'. It's nuts to see how well it works!
@@arsen1870 I know it probably isn't, but the bad editing and the fact that he records 3 words at a time makes it uncanny to listen to. And difficult to understand for a non native speaker. If he actually used AI it would probably sound better
I’m surprised there is a video about this when it was being done a year ago. I remember writing a program for chatgpt to improve, debug and extend upon along with introducing additional chatgpt instances for critical thinking and problem solving, to debugging and packaging.
Something about this videos editing is making this guy's voice sound like AI or the audio is AI generated. It's like tonal shifting mid sentence...which is difficult to detect because this narrator always operates between 8-11 out of 10 on the "Excited for tech" scale. Anyway voice in this video sounds like it exists in the uncanny valley for me. Probably indetectable for anyone that hasn't listened to hours and hours of video on this channel.
GPT 3.5 came out almost a year ago and we're still finding new emergent capabilities Its insane. Its hard not to imagine that a GPT5 powered 'little brains' setup could practically be AGI, with each agent operating as a domain specialized module (one for vision, for language, reasoning, etc), coordinated by layers and layers of progressively higher hierachical command. Add in memory, self-checking algorithms to improve hallucinations and you have a pseudo-brain capable of solving real-world problems, be autonomous and who knows, self-aware and conscious?
The thing about AI is the "brute force" approach so to speak.... what's more interesting is how they managed to keep the cost under $1.00. Did it really cost $1.00? Chaining prompts together gets really expensive real quick.
These videos seeing the new things coming literally brings me so much joy and hope. Most of the time I don't feel very good about the future, but seeing this stuff takes me into such a great mindset.
How long until they can make a video game the quality of Spiderman 2, but using a different comic as input? At some point something like this would be entirely possible and even in a matter of minutes or seconds. The future is going to be wild.
Wow. This video isn't just a report, but a vision of how we should alter our thinking about prompting and AI. There's really something to what you're saying here.
I thought they could do this with a single AI pretending to be various different identities, instead of using multiple AIs - should be kinda pointless to use multiple AIs. But I see that if it want's to control multiple units at the same time, maybe the current AI architecture is not fit for it. But idk, a single stream of output should suffice. But it's just too amazing to see that stuff. In the future the story will get constructed in real time, and we won't have fixed choices for our player, we will be able to say whatever we want to the npcs.
@@2beJT I think it's valid for AI's with different finetuning or different loras etc to be called truly "different", but up to some point the same AI can still act as if it had specific characteristics. In this case, specific expertise and memories, even if everything is done by the same single AI.
The breakdown into agents helps with structuring the overall task. Cluttering the context with information not directly related to the task at hand would be detrimental to the results. As all is running in the same AI it can be understood as a way of 'focusing' attention at specific tasks in parallel.
@@minimal3734 Ah ok and thx! (from the video) I assumed that they were using different AIs, but it indeed is a single AI? (although it's using separated context for the agents)
I kind of prefer Bard AI. Its really amazing and doesn't creep me out, although it tends to really blow my mind. I cant believe its a program, it definitely seems more alive than then Chat GPT 3.5 or Bing. Sometimes its so good im not sure there is a large call center of people who take over on hard questions. Especially when it slows down to ..think. what a time to be alive !
I think the thing people need to add to the end of any question to chatgpt is "...and if you don't know, say that." The fact that humans don't just make something up that sounds right is because of social pressures (which is also why some people are pathological liers, they don't feel enough pressure to not just make stuff up). Putting a bunch of agents together creates that social pressure.
Interesting point about the role of social pressure in shaping behavior. However, in the case of collaborative AI-driven actors such as those in the video, I don't think the pressure they apply is of a deterrent nature, but rather, it is of a corrective nature. The video even details this aspect, suggesting that one AI could check the output of another and let them know when it wasn't right. And maybe this process simulates the role of social pressure on some level, in that it can produce similarly improved outcomes. But even if it does, it's a different kind of pressure, in that it corrects actions already performed, rather than discouraging actions yet to be performed.
Netflix or whoever the next leader will be will bring shows and seasons that you can generate. Show me TMNT Movie but with pasta loving squirrels. Rated R. **Your season is generating, click save to purchase this series**
Yep. It's coming soon. I said after trying out chatgpt "Well....programmers will be out of work in 6-12 years". Bethesda will just have a round table of executives, or the owners even. Then they just go "Let's create a blablabla" And the AI spits out a version. "Nah, let's tweak it". Done. The crux of the matter is this: If companies are making the middle class redundant with AI (Making it cheaper to make products), and more and more people lose their job. If we don't have any jobs, who is going to pay for the products? Isn't companies shooting themselves in the foot here? Don't get me wrong...the tech is cool. But it's also scary what it will do to our society.
So how long before this is productized into the ability to buy off-the-shelf whole companies of 'employees', an automation layer for SAP? "SAP Co-Pilot".
Chat GPT isn't a magic. It's a response to a humans asking. It trained on top on stackoverflow answers and combines it as well. It does it good but it would not write a game unless you ask 100 times and for each detail. The same for that rule-based game. It's a bot that asking GPT and GPT answering, based on that game is self-playing
This technology is so impressive! When I was studying AI in the early 90s this kind of advancement was just a dream. Thank you for the excellent summary of this work!
Bro how are you here before even the video was 😭
This is still a dream, but nice proof of concept
Patrons have early access to the channel videos before the link is made public. Worth it in my opinion! 🙂
I like how Károly calls them "little AI"
It will make the coming AI take over more bearable bc they are just cute little AI
lol
But seriously though the amount of progress recently has been staggering with AI doing things now that many thought impossible just not that long ago.
The thing is, imagine if AI automates every white collar job within 5 years. Would we still get paid if every company will be able to employ chatbots more reliable than humans for 20$ a month?
To be fair, when your project timeframe is 7 minutes and costs are less than a dollar the Waterfall methodology is absolutely fine!
Amazing how they replicated modern work conditions like that
Now I kinda want to see them replicate an Agile framework with 15 second sprints 🤣
@@glittalogik and watch the development time go up from 7 minutes to an hour for the same end result :D
I still wanna see the breakdown of that 7 minutes per stage of the waterfall
Huh, it really is just like real-world software development
Why not let them maintain an open source project and see the result of interacting with human contrebuters and code reviews?
Tremendous idea! I'd love to review some of those PRs
I want that to be a thing right now
THIS
Was just thinking about that while watching another video on AutoGen. But instead using a waterfall model something more agile like scrum with trunk based developement. Maybe even test driven might be really cool too.
@@walterkovacs61 I'm sure it will be in the very near future.
I'm definitely having them simulate a small business based on a real business and see how they fair with similar challenges to forecast business performance like a virtual twin of my real business.
Set a high temperature and generate several virtual “forecasts” and you might end up with a decent way to sample from a variety of different solutions! If you had some metric for outcomes you could even do a beam search to find the highest scoring results. (I say beam search rather than a tree search, because tree would imply each branch could be enumerated, while here our opportunities to branch are probabilistic.)
@@mshonle if you can generate a high enough number of random sample you MIGHT be able to apply a MCTS strategy to predict choices to make. this is a wild idea but if it work it would be OP
@@sorannmw3500 I'd like to explore using dropout at inference time, as a way to get more diverse samples
It will do better than the actual companies, taking the "human" aspect out it will most likely succeed beyond anything we have seen to date.
Managed to get something set up? Would love to learn more, this ai world is looking fascinating
Let's all try to suppress thoughts of being an AI in a simulation creating AIs in a simulation
i don't, i just embrace it. Helps not to be discriminatory against them (aka suspension of disbelief)
Get me there. How? @StefanRial
There's a guy somewhere called 2 minute papers doing videos about how we actually got to the point where we are developing our own AIs.
Stones of Significance by David Brin
@StefanRialA noble goal if I ever heard one.
Watching AI progress is like watching a beatuiful thunderstorm on the horizon coming directly at all of us. It inspires equal parts awe and dread.
AI really out here disrupting every single industry involved with art. Visual arts, musical arts, voice acting, writing and now video games. It's more like a tsunami at the rate it's travelling and the impact it's causing.
Great analogy
@@soysource3218And notice: this isn’t an accident. As shown in this paper, AI is at least as good at replicating a CEO as it is a graphic designer, but the folks deciding how to use it don’t want it to replicate CEOs
@@TamugetsuI think humans will always connect more with art from humans, but a machine of data analytics is just going to be patently better than any human at logistically running a company. All these tech compan ceos think they are expanding their market reach, but they are really just destroying themselves.
@@tatecarter60 Is it? AI lack a meaningfully integrated worldview, something obviously necessary for CEO-ship?
Waterfall model is quite adequate if the project takes just 7 minutes.
How many seconds were wasted on unproductive meetings?
Less than it took you to write your question 🤣
they need a scrum meeting every second
they need to go a step further: create a game company, plan, design, and create a game, put the game out for some amount of money, do this for a bunch of different games, see which ones make the most money, read reviews and comments about the games, use sales and comments to iterate a new set of games, repeat.
You know how the internet is being flooded with an uncountable number of relatively low quality ai images? I do not want that happening to the gaming market. There’s already enough shovelware out there made by humans, we do not need more.
Though, now that I think about it, I reckon that’s exactly what’s going to happen now, and it’s a touch depressing
nobody is going to buy this junk unless they use it to make some trash mobile 'game' (gambling loophole), which already exists without AI. The only reason that might work is because the people that 'play' them have zero standards for quality.
A vast majority of mobile games are already just poor approximations of better games with monetization shoehorned in... I don't think I'd notice a difference
@@gogokowaiThat's what I was thinking. I'll just look at only the games with 4 or more stars, instead of being the beta testers.
@@mana3109I would worry about that if I couldn't filter apps by which ones have good ratings already.
This is kinda world changing?
Like... straight up, this is an ENTIRE company where EVERY job is replaced by an AI, and it WORKED. Not only did it work, but it worked FAST.
The future is gonna be incredible, and slightly terrifying.
politics is about to be blood baths
You have to remember that AI is amazing and awesome and I love it, but it's not really capable of creating original works. The GPT Company is good at programming, but likely couldn't create a game as earth-shattering as, say, UNDERTALE, or DOOM, or even Super Mario Bros., because those games require many, many artistic and design choices that AI isn't capable of making yet. When it is, we won't be able to tell it what to do, because it will have its own personalities and experiences.
"A little terrifying" is an eufemism...that's fucking scary!
@@skapaloka222 I think the key word here is that they can't do it YET. But at the speed LLM development is happening now, it's just a matter of (short) time before we see it.
@@skapaloka222 For those games based around a vision, you can have one human who has the main direction and a whole host of AIs below them to do all the work. Things like UI programming, networking, etc. can all be handled by the AI, while the human works with them to make their vision come true.
I can’t wait to see people remake old games like Final Fantasy IV or Chrono Trigger, but where all the NPCs are AI driven with such smart AI. Those worlds could come alive, and expand automatically too!
A game in which the NPCs upgrade their own game and make improvements to it infinitely and expand their own world however they see fit would be so damn interesting
They made the little characters look cute, so you don't realize how scary this actually is!
how is that scary
@@Zamoksva A fully functioning town of AI bots, working together to complete goals. Living interesting lives and celebrating events with each other...
@@kmturley1 yeah I know what the video is about, are you scared of codes and machines
@@Zamoksva No, I work with code daily. It is my job. I am scared of what is possible with code/machines . For example a computer virus which cripples a nations electricity grid, or takes over a nuclear power plant...
@@kmturley1 Highly unlikely that AI sabotages stuff but very likely humans using AI for sabotage. It will always be in a nation's best interest to have the best fucking tech available, including AI technology. Not any more scarier than the things already are. Cybersecurity will just evolve with the needs.
A thought crossed my mind: I have a feeling the programs this thing outputs are so simple only because it does not have the capacity to see the results properly, with proper graphics rendering and interactivity, since chatGPT only operates in a transactional fashion, so the code it produces should be simple enough so that its purpose can be understood just by reading it. When we get to the real-time multimodal AIs, they'll probably be capable of producing 3d shooters and RTSes.
Up next: ChatGPT 6 is now capable of making a perfect overview of new AI/Rendering papers, it also mastered my research area (light transport). I'm no longer needed, what a time to be alive and unemployed ! (excitedly)
Same here😅 there's no soft skill you could learn now. Again only hard skill matters like plumbing 😅
I also unemployed but excited at the same time😂
That's a good thing, isn't it. Provided that we find a suitable way to distribute resources appropriately, which I don't doubt.
@@minimal3734 the AI will figure it out
@@minimal3734because that's going sooo well right now 😂😂😂
This channel used to cover mind-blowing papers on ML in particle physics, fluid dynamics, and other specialized areas where the author had great knowledge and insights.
I miss those days, since lately we're just getting surface level shilling of obscure proprietary techs without any objective critiques of the shortcomings of these systems.
Yes, it has become part of the mainstream narrative - They are pushing the A.I thing everywhere, probably a fear-mongering psy-op.
Chatgpt may be proprietary, but if you think it's "obscure" in any sense then you must've been living under a rock for the past year.
@@whatisthisayoutubechannel "Obscure" in a sense that it's unclear and difficult to understand, inspect, debug, predict, integrate, modify, etc.
If your only response to all of the points I raised is to insult me for not knowing about ChatGPT (as if that's even possible), I can't imagine you could have much depth of thought or nuance of opinion. Fare.. well...
@@pythagoran So you don’t know what English words mean either, wonderful.
Also you’ve just described all of machine learning. The ML=modern alchemy jokes have been going on for years now. Being “difficult to understand, inspect, debug, predict, etc” is by no means unique to the LLMs and diffusion models that are in vogue right now. Try to make some real points before asking people to respond to them.
This product is open source. As are a shit ton of openai competitors barely behind openai.
AI building an entire company, making a game and playing it all in under 7 minutes and for less than $1. Wow, the future is going to be incredible.
No it will be nightmarish :) Well for most people anyway.
I like the optimistic view!
But on the other end of the spectrum.
Viruses might be a problem too in the future.
you can't play that many games at this speed.
@@lvutodeath yea it seems we cannot catch up with the speed of AI, virus, ai ransom calls, identity theft. You name it.
incredibly bleak for all except the ones who own the corporations who own the AIs
I have been balls deep in chatGPT and other new AI tools since they launched and I am continually blown away by the amazing things it is capable of already
yes daddy
This might be one of my favorite papers I've seen on this channel!
Now imagine each personality isn't the same model, but each has a specifically trained model for their task.
The programmers instead use a special model trained only on coding, the managers one trained only on management, the designers only on design and so on.
This sounds great, but in practice makes it more complicated and less flexible. I think, in the end, the AI model will know everything, no reason to cripple it down to a specific subject.
so you're saying imagine if managers are inept on any other subject than managing? lol
@@XCSme Depends, delegating specific tasks to specifically trained LLMs is more computationally efficient opposed to querying a 70B parameter LLM for each agent. It isn't really that much more complicated and I'd argue that in terms of total compute it could make it more flexible since we aren't brute forcing computation through multiple huge models. It really depends on what you want from the agents themselves, if you want to keep high quality conversational AIs that also have the ability to complete their tasks to a high degree, then sure. I think it's completely circumstantial to what the desired outcome is.
GPT is a foundation model. You can further train it in a specific direction.
Just segmented statements in a vacuum.
If you think about the human brain, it's clear that it's structured in a way to have separate parts that are each intelligent in different ways, that then "collaborate" together to give what we know as our full experience. It doesn't quite use natural language like this, here, but what I just said is the main reason why I think that this or similar strategies is probably how the most powerful future AI systems will operate
Dang wonder how large scale of games this can make. What a time to be alive!
As token limits in the AIs grow, so too will the amount they are able to remember. And with advances in methodologies for approaching large-scale dev, I imagine we will see AI working at industrial scale soon enough. Maybe next year?
@@Plystire The scale will probably be as advanced as it can program what you tell it to program. AI will not be able to design its own game for some time, and it'll be a while before it can make a moving story, or a game that feels really, really good to move around in and play, those sorts of artistic and design choices and creations come through experience and personality, which we can't really see in AI yet.
It's also entirely possible that impactful stories and fun gameplay and groundbreaking games all are created to some pattern that we can't see but that AI will be able to replicate, so it'll pump out amazing content at an absurd rate. I believe that this is less likely than the above, as human pattern recognition is something developed over millions of years and I personally find it hard to believe that the species with the best pattern recognition (as far as we know) could create something with better pattern recognition than it in the span of maybe 20 years vs. thousands of millennia.
@@skapaloka222 Thanks for the reply! Not sure how long you mean by "some time", but I bet it's coming sooner than most people think. We're already talking to AI telemarketers and customer service, and most people have no clue they were talking to an AI. Obviously it's easy to tell how an AI sounds right now, if you're looking for it, but it's believable enough for an unknowing person to not suspect anything. I think that means it's "good enough", right? And, so what if people know an AI's behind the curtain? They should, and they will in due time. Pattern recognition and all that ;)
As far as game design goes, I can see an AI designing and producing a quality game within a year or two. Humans take ages to put even a mediocre game together. An AI could put together mediocre games in a fraction of the time, and perform beta testing with game player agents in a fraction of the time, so if given the SAME amount of time as a human team, a team of AI could iterate on ideas far more. That's all that matters. And, I'm not sure where pattern recognition comes into play here, honestly. After all, we have schools that teach how to be a good story teller, schools that teach how to be a good anything. And here's the thing, if an AI can learn behaviors from what you tell it, obviously you can train an AI to be a good enough storyteller... a good enough game designer.... a good enough artist... a good enough everything.... because good enough is good enough. The bar will raise over time, yes, but good enough will come very quickly.
I, for one, am excited to be able to have an AI throw a random game idea of mine together. Even if it takes a whole weekend to do it. :P
There will be people that prefer human-made things for a long long time, I don't think that will change. Even when most people can't tell the difference, it'll still be there for some.
I've set this up for some projects and it was incredible to see that a project that took me a day to complete was recreated in the time and cost of a fresh pot of coffee.
The cool part of this is how easy it would be to include a human in the loop either evaluating/moderating comments or playing a role since the communication interface is natural language.
At this rate of ai advancement we will see GTA 6 in our lifetime. :-D
At this rate of AI advancement, we would likely see a GTA-like chaos and violent crimes in our lifetime. It's been happening in the big cities in the US
If you think about it, heavily censored AIs can still create Turing complete systems, and thus create uncensored AIs 👀
Why are some of you so obsessed with uncensored AIs? It's kinda weird
@@JollyTVance As if output quality wasn't important...
@@JollyTVance It's provably detrimental to the intelligence of the AI.
And you seem to agree that others decide which facts are good for you to know and which are not.
Awesome! I'm gonna try it out and see if I can get it to do some of my software development tasks for me. Connecting this software to the web would make it super powerful as well!
Has it been able to actually create an original game? You said it's not limited to pong and 5 in a row but all the other examples you gave are also things done before, password manager, video player, flappy bird... I want to see something truly new designed by it. I find chatgpt incredible but one area that I think it struggles is in coming up with truly novel stuff. It can be creative in the sense of remixing things humans have done before (so it's definitely not just copy pasting things) but I still don't see things that have never been done before by humans.
On the other hand, everything *humans* do is just remixing things humans have done before, sometimes with tiny, minor additions... so we're not much different.
@@KBRollerif it was true, we would still be stuck in the caves.
@@zieba6245Not so. I did mention "sometimes with tiny, minor additions", and those additions accumulate. We've been around for about 200,000 years (not counting things we learned from previous human species before us), and approximately 117 billion people have ever lived in that time. Not even considering how many thoughts/ideas each person has in their lifetime, that's a *lot* of potential small modifications that add up to big ideas.
Human thought is basically never world-changing in one go. It's iterative over long periods of time and large populations. An individual human... can only make small changes to the ideas they've seen from others.
Then I'll have to ask you, what has _not_ being done by humans before? And would it still be useful given that it still has not being done?
@@KBRoller These small, truly original and creative additions - it's what I'm talking about. Current AI (more accurate name would be stochastic models) are incapable of creating them. They are only able to remix
It would be interesting if you can make a video about that case where ChatGPT was able to finally diagnose the illness of a kid, who had already attended to 17 other human doctors, that were not able to provide an accurate diagnosis.
little scared as a software engineer but fascinating nonetheless
You're still fine for a good decade at least. The issue with code written without reasoning is the lack of reliability. You need a human in the loop somewhere, either to write code or to write tests, and to validate what the AI wrote -- assuming one is used at all
@@shadamethyst1258I see a problem with your thinking, its the assumption that AIs wont write code with good comprehensions for a decade from now. LLMs are like just like humans spitting out every thought they have without thinking about those thoughts. Thats why COT or asking model for corrections works so well. Eventually ppl will discover a good reasoning architecrture with proper loops that enables LLMs to think about its own thoughts and then we will see a step change in LLMs comprehension of big multi layer abstraction contexts. Waterfall is jus an example of such architecrture, its not ideal for programming and especially not optimazed for LLMs but the simple fact that it has loops and enables model to think about their thoughts enables this (as I'd argue) step change in what the model is able to do. Now imagine such reasoning architecrture with loops, but optimalized for LLM agents not writing a software, Thats whats on my mind.
@@shadamethyst1258 Cope, it will come for everything.
waiting for few papers down the line , and it will be making entire saas companies
He asks the question we might ask: did they just copy the games from other sources online? Answer: saying he has two pieces of 'good news' that have *absolutely nothing* to do with that question. (Maybe the answer is in there if you read through all its code). Having GPT split into different mini identities is a good idea to have it check against its own hallucinations but only if the separate identities don't have the same hallucinations.
Does it seem reasonable to assume that two AIs could ever have exactly the same hallucinations? I wonder what be the odds of that would be.
Also, consider that hallucinations are a result of AIs being queried with an expectation of them producing original output. Yet when one AI is watching the output of another AI and preparing to give feedback specifically with regard to that other AI's output, the scenario for the "checking AI" is very different than the scenario for the "originating AI." My guess is that the checking AI won't hallucinate because it's not being asked to come up with anything new. Rather, it's being asked to review something that has already been produced elsewhere, and check it against some established standard. Something tells me that's a fundamentally different kind of task than being asked to give an original response of whatever kind, and so isn't likely to be prone to hallucination.
Of course, this is just a guess. But it seems reasonable to consider the chance for hallucination to be vastly different for AIs asked to come up with something on their own than for AIs that are merely asked to check what some other AI has come up with.
"I'm sorry, but as a Large Language Model designed by OpenAI, I am not allowed to program a videogame. Videogames are recreational activities that may contain violence and could potentially lead to impulsive and violent thoughts or actions. It is important to maintain a safe and ethical environment for all users, and designing a videogame would not be appropriate. Is there anything else I can assist you with?"
U need to use api prolly to bypass this
This has been surmised next-step next-level for at least 8 months. It's been actively developed and published for at least 3 months. The prompt methods have been fringe but mainstream as CS for about the same period. Somehow, other than my own designs, it's gone under my radar until Late December. Now this. Sheesh. Governments are behind us - but not for long. Then this will get TOO interesting. Everything is going .. (Waiting breath by breath for the next Ex-Machina - no doubt trailered.)
They forgot to implement RUclips streamers. I like your recursive simulation. Perhaps they could simulate Two Minute scholars who review the progress of the people writing the simulations... wait a second...
The AI clearly made some decisions that a human gamedev team would not make, but it's extremely impressive they managed to do anything in the first place, let alone a working game.
They could potentially be specialized by making games and using the crowd-sourced ratings and their past actions in each gameto inform their future decisions.
The main issues, from a gamedev standpoint would be:
-the use of a waterfall
-the lack of wider communication despite the small team size. For small, independent teams it's better to maximize cross-communication. If there must be groups, considering their numbers, 2 at most would be best. Groups of 2 is rarely if ever a good idea.
Great video! Just a bit sad you have your voice AI generated. I loved your enthusiasm and it does not seams the same
imo, this is probably the most awesome paper I've ever seen.
This is about as much game development as ELIZA is therapy.
What a time to be alive!
x2
so cool. I've been wondering about the applications of ChatGPT in a game like dwarf fortress and the like (or just any MMORPG), I bet this is just around the corner..
this is madness. now image you told them to also created some new AI algorythms to use to reach their targets. they gonna be unstoppable in everything digital.
Yay, I'm so excited for the internet to be flooded with even more low effort games/content!
It's good to usurp monopoly and sift through the ocean to find the worthwhile.
Don't worry, we'll have AIs to automatically sift through the trash
@@unutilisateur4729 Google already tried that - it's the reason SEO exists and has made the internet noticeably worse.
@@bengsynthmusic AI generated content will make that ocean a lot bigger. And when that ocean is so big it is impossible to search then monopolies will be the winners. How can a small creator ever rise to the top when they get buried under waves of AI-content? Meanwhile everyone knows who Disney is, they don't need to fight to be seen.
This really mirrors how the human mind works. Multiple trains of thought that work towards one goal.
This would be way more interesting if the AIs would have developed/emerged a unique way to solve all these issues instead of recalling all the information they got trained on. All the games and the way they work together is, what ChatGPT got trained on (in various ways - so the result is the weighted average of all them). Not really smart but an interesting way to find a complex middleground of big data data.
As a cs student this makes me nervous for the future of software dev. It took me months to learn to make simple programs which it could take these AI tools 10 minutes. Doesn’t this make our jobs obsolete?
Probably future dev jobs will be more skewed towards architecting the software, rather than building it directly.
Society will adapt accordingly. It might be challenging to deal with it but I'm looking forward to the freedom that is lying ahead.
Well this is how our current simulation started
It seems like a similar process could be used to emulate a business function, even down to the organizational design. You could define the organization structure and define roles and workflows, or generate those using higher level instructions, and have agents carry out the process.
I believe it won't be many years before this happens and the vast majority of desk jobs are replaced by LLM
There goes my game developer dreams
Edit: I guess I should say this is a joke comment.. thanks for all the thoughtful advice though
Playing video games is better than making them
Don't throw away your dream, just steer it in a different direction.
Instead of focusing solely on the small technical areas of development, you can now divert your attention towards more creative solutions, while AI does the heavy lifting on the back end.
@@Fermion. True, but sometimes the technical areas are the most enjoyable for some.
@@kolosso305 Agreed. In college, I knew guys that loved manually doing long derivative and integral calculations by hand.
But there's really no point after initially learning how, when MATLAB and SciPy can solve them in a fraction of the time, with much less potential for error.
Their time would be better spent on other aspects of their projects, than doing manual calculations. Same with devs.
@@vectoralphaSec who;s gonna pay him to play them?
This means that reasonable computer code can be extruded by a person who is a good rote learner, afflicted with multiple personality disorder, who cannot properly distinguish a sentence from its negation. But who has enough self-discipline to follow the waterflow model.
Very cool. Now I'll go learn how to clean the streets because my job will be soon taken as a software engineer xD
I bet that clean the streets would make you more replaceable than build and maintain systems that we can’t agree about.
If it can be trained to do game development it can do most other jobs. Cleaning streets can already be automated.
Many jobs can, we are just working through the kinks and still sort of getting to grips with the reality that almost all people are redundant already and those that arent just quite yet (engineers are not quite yet) probably wont be for much longer.
The world is going to need to adapt to the fact that there is going to be a catastrophic struggle for relevance, utility, marketability and purpose.
I'm looking forward to the freedom that is lying ahead. We will adapt and distribute resources appropriately.
@@minimal3734or will we?
Lots of people: "See? Developers won't be so into AI when it starts taking THEIR jobs!"
Me, a software developer: "I knew this would happen. I'm still into AI. I blame society for me needing a job to survive, not AI for doing my job well."
You blame society... for requiring you contribute something of value... to receive your basic needs?
So remove society, what are you going to do to eat, have a shelter, and survive? Sit around?
This is one of the most absurd statements I've ever read.
@@g3tmooredo you think that the amount of work you do to make money to survive could not be better utilized elsewhere?
@@g3tmoore Put another way: if someone put a gun to your head and said, "Do what I tell you or else I'll kill you", we'd both agree that would be morally wrong, right?
And yet, when society says "do what your employer or customers say or else we'll all let you starve to death" -- that's okay?
Is it the starving that makes it moral? Or the title of the person making demands?
I've never suggested removing society. What I suggest is *improving* it such that we don't require slave labor of each other for basic survival.
@@DeusExtra
Exactly, like starting a family. Right now people work their asses off just to survive and that's *A* reason why our birthrates are on the decline.
@@soysource3218 It really has to do with our carrying capacity of our planet too. We consume a lotta resources and educate ourselves well, so we take up more land/water to support us and there is a limited amount of that.
A universal basic income and universal health care would go a long way in this country in keeping people happy, but we would also need to look at how much we consume and travel to places.
As a computer science major, I'm screwed
Everyone who's working at a computer screen is 'screwed'. That's a good thing. Provided that we find a suitable way to distribute resources appropriately, which I don't doubt.
I can think of atleast 3 different ways to improve the results, but that's just how it is woth ai, it feels like it's just starting.
No way. I got the same idea but someone actually made it already, and it's even better! Wow.
So cool. I wish it was good enough to just give a well detail design document and have it create a whole video game which I would like to make. Game development is difficult but I would like to have my ideas made so this would be helpful if it was more AGI like to create complex games with the programs I tell it to use to make the game.
I wonder if a big reason this works has to do with the implicit expansion of the “context window” by distributing the task across multiple agents. This would also have the effect of reducing error by essentially having “redundancy” of the discussion in the form of parts of it being restated between agents.
How they're tracking the context windows for each "agent" plays a key role in how well it can operate. If the thing each agent is working on is small enough, they won't need much context to be able to progress. I think what separating out the agents into "company roles" does is it allows each agent's context window to exist at different abstraction layers. A CEO doesn't think in the same terms as a developer and so doesn't need the same kind of context a developer would for a given problem. This allows the "big picture" agents to understand where progress is at without having to mull through code. The testing agent cleared it? Must be good, no need to double check. As a proper company should function.
I think that is correct. A person could also develop a complex software product single-handedly. Then he has to switch back and forth between the principal roles of developer, tester, editor, and so on. Depending on the role, the content of the context window must also change. It would be detrimental to the current task to overload the context with information unrelated to the role just taken.
Will be exciting when a game like Baldurs Gate can implement character behaviour like this for NPCs and see what missions they can come up with.
We could potentially have a game based on D&D with no limitation for levels or lore, since the AI could act as a Dungeon Master. The future of gaming is pretty fascinating.
Or when a game like BG3 can be written by AI and also use AI to power some of its features.
We had a good run as humans. It's time for the AI era now. Only a matter of time before someone makes a team of AI's dedicated to making a better AI and then who knows wtf happens.
Towards the end mentioning the 'little brains' reminded me of the book 1000 Brains: A New Theory of Intelligence by Jeff Hawkins. I'd like to see something like that implemented with gpt4, though is already seems pretty close!
Token limit on GPT3.5 has increased eightfold a couple of months ago. The graph on 4:59 shows a much more dramatic difference than it really is (and diverts the attention from the training set size, where the real difference really is).
Wow I didn't even know this was going to be available to public! I remember seeing the previous paper and really wanted to know more and test it.
This is 'community of mind'.
AGI is almost here (two more years max).
This is impressive but AGI is a really big jump from this.
@@Wobbothe3rd It's not as big a jump as it'd seem.
All of these new AI breakthrough papers lately will have the effects of compounding on one another, accelerating progress. Meaning the long, hard part is getting up the hill, but once we hit the apex, we'll zoom down towards AGI much faster than anyone expected.
It was in my lifetime (45yo) that it took decades for any substantial AI breakthroughs. Then it took years, then months. Now it seems like every week there's a new breakthrough. Pretty soon, I guess it'll only be hours, then minutes, then just spamming refresh on your browsers every few seconds keep up lol.
But seriously, the question is how far are we away from the apex? 3 years, 1 year, 6 months? That's what no one knows.
I'm start wondering if I'm just a ChatGPT agent
Imagine with this technique you could transform old 4:3 TV movies to wide 16:9.
You know, since creating an agent amounts to just making a separate thread with seed instructions, I think the process can be stripped down to a single script that initially only runs the "manager AI" that can spawn and kill additional instances with dedicated roles, with the end result being that you can actually just say "make me a game", and make you a game it will.
Just goes to show the power of teamwork. The whole is more than the sum of its parts.
1:52 I'm sure the software dev is feeling anxious when a bug for him is being detected.
You know, I used to love this channel, but I'm starting to get that it's mostly hype. Trying ChatDev myself -- it's about as interesting as AutoGPT, which was also a lot more interesting in concept than it was in reality. ChatDev suffers from the same problems, despite all the fancy multi-agent stuff -- it doesn't really appear to be capable of anything more complicated than a single GPT-4 response is. It just has some fancy graphics to go with it.
It looks more like a spiral model of software development (Boehm, 1988), as opposed to waterfall, given that they iterate from a text UI to a graphical UI. Now, if there could be a system that can embody a virtual customer as a stakeholder, and simulate a competitive market (“fictional company X just released Y!”), we might be able to make claims that such a system follows an agile model!
What's even crazier is ChatGPT's ability to learn and incorporate new concepts and methods into it's processes. It's almost scary but wonderfully exciting for the future of game development.
In 5:10 Stack overflow error 😅
Wow. I remember discussing with people about 15 to 20 years ago about the possibility of something like this happening 'in the future'. It's nuts to see how well it works!
is the voice over you use for your videos also made with AI?
I tought the same thing
@@arsen1870 I know it probably isn't, but the bad editing and the fact that he records 3 words at a time makes it uncanny to listen to. And difficult to understand for a non native speaker.
If he actually used AI it would probably sound better
I’m surprised there is a video about this when it was being done a year ago.
I remember writing a program for chatgpt to improve, debug and extend upon along with introducing additional chatgpt instances for critical thinking and problem solving, to debugging and packaging.
Seems people are just now figuring out some of the use cases of LLM’s now, hopefully more of these experiments are carried out and published
Something about this videos editing is making this guy's voice sound like AI or the audio is AI generated. It's like tonal shifting mid sentence...which is difficult to detect because this narrator always operates between 8-11 out of 10 on the "Excited for tech" scale. Anyway voice in this video sounds like it exists in the uncanny valley for me. Probably indetectable for anyone that hasn't listened to hours and hours of video on this channel.
GPT 3.5 came out almost a year ago and we're still finding new emergent capabilities Its insane.
Its hard not to imagine that a GPT5 powered 'little brains' setup could practically be AGI, with each agent operating as a domain specialized module (one for vision, for language, reasoning, etc), coordinated by layers and layers of progressively higher hierachical command. Add in memory, self-checking algorithms to improve hallucinations and you have a pseudo-brain capable of solving real-world problems, be autonomous and who knows, self-aware and conscious?
The thing about AI is the "brute force" approach so to speak.... what's more interesting is how they managed to keep the cost under $1.00. Did it really cost $1.00? Chaining prompts together gets really expensive real quick.
These videos seeing the new things coming literally brings me so much joy and hope. Most of the time I don't feel very good about the future, but seeing this stuff takes me into such a great mindset.
How long until they can make a video game the quality of Spiderman 2, but using a different comic as input? At some point something like this would be entirely possible and even in a matter of minutes or seconds. The future is going to be wild.
Would be great to have a comparison if this method with multiple agents is more efficient and accurate than a single instance
Wow. This video isn't just a report, but a vision of how we should alter our thinking about prompting and AI. There's really something to what you're saying here.
It's the "Society of Mind" approach.
I thought they could do this with a single AI pretending to be various different identities, instead of using multiple AIs - should be kinda pointless to use multiple AIs.
But I see that if it want's to control multiple units at the same time, maybe the current AI architecture is not fit for it. But idk, a single stream of output should suffice.
But it's just too amazing to see that stuff. In the future the story will get constructed in real time, and we won't have fixed choices for our player, we will be able to say whatever we want to the npcs.
What if each AI has it's own expertise and memories?
@@2beJT I think it's valid for AI's with different finetuning or different loras etc to be called truly "different", but up to some point the same AI can still act as if it had specific characteristics.
In this case, specific expertise and memories, even if everything is done by the same single AI.
The breakdown into agents helps with structuring the overall task. Cluttering the context with information not directly related to the task at hand would be detrimental to the results. As all is running in the same AI it can be understood as a way of 'focusing' attention at specific tasks in parallel.
@@minimal3734 Ah ok and thx! (from the video) I assumed that they were using different AIs, but it indeed is a single AI? (although it's using separated context for the agents)
I kind of prefer Bard AI. Its really amazing and doesn't creep me out, although it tends to really blow my mind. I cant believe its a program, it definitely seems more alive than then Chat GPT 3.5 or Bing. Sometimes its so good im not sure there is a large call center of people who take over on hard questions. Especially when it slows down to ..think. what a time to be alive !
As long as our robot overlords give us universal income I'm all for it. Scary stuff tough.
Yeah, as long as I get UBI I'm good.
I think the thing people need to add to the end of any question to chatgpt is "...and if you don't know, say that." The fact that humans don't just make something up that sounds right is because of social pressures (which is also why some people are pathological liers, they don't feel enough pressure to not just make stuff up).
Putting a bunch of agents together creates that social pressure.
Interesting point about the role of social pressure in shaping behavior. However, in the case of collaborative AI-driven actors such as those in the video, I don't think the pressure they apply is of a deterrent nature, but rather, it is of a corrective nature. The video even details this aspect, suggesting that one AI could check the output of another and let them know when it wasn't right. And maybe this process simulates the role of social pressure on some level, in that it can produce similarly improved outcomes. But even if it does, it's a different kind of pressure, in that it corrects actions already performed, rather than discouraging actions yet to be performed.
'Creatures' 1996 vibes
Just Imagine. Some day you just say: i want to Play a Racing Game on Mars With Dino-Drivers and then you just start the Game. 😮🎉
Netflix or whoever the next leader will be will bring shows and seasons that you can generate.
Show me TMNT Movie but with pasta loving squirrels. Rated R.
**Your season is generating, click save to purchase this series**
Is there a newer or updated version of this that would allow me to run one of these new local LLMs instead of GPT3.5?
There is an awesome Korean comic about a game like this… it’s called Pick Me Up, Infinite Gotcha.
It's awesome.
the part where he describes recursive self improvement as a super fun experiment 😅
Yep. It's coming soon. I said after trying out chatgpt "Well....programmers will be out of work in 6-12 years".
Bethesda will just have a round table of executives, or the owners even. Then they just go "Let's create a blablabla" And the AI spits out a version. "Nah, let's tweak it". Done.
The crux of the matter is this: If companies are making the middle class redundant with AI (Making it cheaper to make products), and more and more people lose their job. If we don't have any jobs, who is going to pay for the products? Isn't companies shooting themselves in the foot here?
Don't get me wrong...the tech is cool. But it's also scary what it will do to our society.
Had to scroll down concerningly far before someone had considered this without the copium
Okay. Despite holding onto my papers, they were still blown away
So how long before this is productized into the ability to buy off-the-shelf whole companies of 'employees', an automation layer for SAP? "SAP Co-Pilot".
I would instruct them to improve upon the simulation they are on, that would be super fun to see.
the next steps would be to simulate a region with multiple towns with multiple companies, resources, governments and voting systems.
Hello! What is the name of the video about the paper that you mention in the intro? The one with the virtual company structure
we are scratching the surface of one of the greatest games ever with this paper
I'm going to write the guys who did this study and suggest they make an AI story workshop next.
Chat GPT isn't a magic. It's a response to a humans asking. It trained on top on stackoverflow answers and combines it as well. It does it good but it would not write a game unless you ask 100 times and for each detail. The same for that rule-based game. It's a bot that asking GPT and GPT answering, based on that game is self-playing