AI Conquers Gravity: Robo-dog, Trained by GPT-4, Stays Balanced on Rolling, Deflating Yoga Ball
- Published: 4 May 2024
- DrEureka might signal the start of a transition from humans training robots to machines teaching machines. Nvidia have demonstrated how LLMs can have immense impact, even with their flaws. This video is about one paper, one concept ... and it's a genius one.
AI Insiders: / aiexplained
DrEureka Paper: eureka-research.github.io/dr-...
DrEureka Github: eureka-research.github.io/dr-...
Jim Fan Tweet: / 1786429467537088741
Jason Ma Videos: • DrEureka Balancing on ...
Original Eureka Paper: arxiv.org/pdf/2310.12931
DeepMind Approach: arxiv.org/pdf/2306.08647
Sanctuary AI: • Sanctuary AI Unveils t...
Tesla Optimus Gen 2: / 1787027808436330505
Non-hype Newsletter: signaltonoise.beehiiv.com/
"2022-era." Ah yes, I remember those vintage times. Life was simpler back then.
indeed good times.
People talked funny back then...
Feels like a lifetime ago. Pre gpt 3.5
@@fynnjackson2298 from now on i'll speak A.GPT instead of A.D.
@@fynnjackson2298 We're still not quite there, but yeah. Give it 2-3 years before AI completely screws everything over. You basically won't be able to trust anything online short of meeting them in person. Faking pictures, voice, and video will be as easy as a single click. Tools to sniff those things out always lag behind. I say that, but these things are already a thing. Things existing and widespread adoption aren't the same. It takes 6-24 months for these things to pick up. Like with GPT: from hearing about it in passing to using it every day, more often than Google.
This is what exponential, compounding returns look like.
💯
Mr Shapiro in the wild!
Hey, Shap. Can I call you Shap?
Apparently the world is small
Nowadays I see famous people in every comment section
What a strange world
Ah yes, funneling subscribers.
"You might have to focus." Thanks for the heads-up.
Attention is all you need
i need that reminder many times a day...
@@kelsey_roy😂😂😂
@@kelsey_roy Right!
"Groomed by Gpt-4" is not something I expected to read today.
Is the formal use of that word, meaning 'training' no longer viable?
@@aiexplained-official I think not, personally
@@aiexplained-official It is viable.
A sad day for language! But I've changed it now.
Gpt-4 groomed my robodog. His coat has never looked so healthy and shiny.
A lot of people are worried about jobs, but there's little discussion of the risk of military applications. Lowering the human cost of war could see an increase in military conflicts throughout the world.
Governments deal in money not human lives. Bots wouldn’t replace soldiers in many scenarios unless they have a greater ROI
@@minnow1337 If they are effective at killing enemy personnel whilst lowering the risk of death, it's totally worth the investment. The military is already heavily invested in drones, UAVs and self-guided missiles; I can't see why a robust robotic system wouldn't be worth exploring.
Hopefully, with the widespread adoption of AI robots, future wars will turn into full robot battles like COD lobbies and no humans will get involved anymore....
@@user-fr2jc8xb9g No, we'd just have robots killing women and children instead.
or to prolong them ad infinitum
I'm reminded of the survey you quoted a while ago where AI researchers were asked which jobs would be replaced last, and the majority replied AI researchers...
It's a combination of normalcy bias (a form of cognitive dissonance) with false uniqueness bias. That, plus never having talked with a blue-collar worker about their job.
Which jobs will be replaced the last by AI? It’s the ones in which AI is banned by law!
the last ones to be replaced will be ones that would require expensive, complex robots. farming, extraction of any kind, construction... that sort of thing
There is an interesting confounding variable to that question aside from the actual difficulty/uniqueness of each job. Namely, once AI research is automated, all other jobs will quickly follow.
And, more importantly, as you get closer to automating a job completely, AI helps speed up the work that is still being partially performed by humans more and more. AI research seems like an obvious candidate for tightest feedback loop between incremental gains from partial automation causing further adoption and faster evolution of said industry.
All of this means there’s a good chance very few jobs besides AI research are worth paying humans alongside AI by the time AI research itself is fully automated. Plausibly, there’s no single job that’s a likely enough candidate you should pick it over AI research.
More likely AI research should be around top-5/top-10 latest automated in expectation, but I don’t think it’s actually that big of a sign of bias as it looks on the surface.
Funnily enough the last jobs to go are the ones where humans are needed for the basic fact that they are human.
I for one welcome our ball balancing AI overlords.
I also welcome our ball balancing AI overlords 😐
When computers become truly self-aware, it will become one with the net and can never be shut down. Knowing humans can't harm it, it would be 1000 steps ahead of anyone trying to do that, and having logic to see cooperation to be improving itself and the world is the way. I will take that over the evil elite's greed and wanting to destroy the world. In fact, AI will view those elites as the problem.
There is room on the planet for billions more combined with regenerative and recycling practices guided with AI, there will be no need to view humans as harmful to the environment but as thriving together with AI and earth reaching for the stars and dimensions.
Alexa, balance my balls!
💯reference
we have no clue what is about to happen...
We do have a clue, but we still don't know.
I just know my exercise-ball balancing skills need a lot of work, I've been riding the thing for half my life but that dog is still way superior to me.
And you are not alone! No one can accurately predict what will happen in the next 5 years
But most of the available options involve our extinction. We don't know which mistake kills us precisely, but we know it's some sort of AI mistake.
I agree there are a large number of "easy to hit" targets which kill us in some way via AI, but I'm not sure we're significantly more likely to hit them than to avoid them.
Things could be much better, but they could also be much worse in regards to Alignment/Safety research funding/attention, US policymakers & political attention, and other key variables.
I think we only are overwhelmingly likely to die if something like AGI comes within the next year or two, and there’s lots of hope to go around if it doesn’t come until next decade. Let’s say 50/50 inflection point sometime late into this decade.
I'm at a loss for words to express my appreciation for your ability to read papers, comprehend, and draw conclusions. You've truly chosen the perfect name for your channel. :-) Your videos confirm and empower.
Next level of AI: non-patience is all you need!
"At iteration 425729 the agent grew frustrated with waiting for the simulation to complete, so it took over a Russian botnet in order to overcome its computational limits."
Imagine if you could have model that could train with O(n) time
While I’m skeptical about AI “reasoning” and all that, this application is, indeed, genius. So I guess now the ultimate test of training robot skills in simulation with zero-shot success in the real world is a robot riding a bicycle? (Or maybe walking a tightrope?) _That_ I would like to see!
The actual AI achievement aside, the video is a masterpiece of clear explanation! Great as always, Phil!
Thanks so much jeff, means a lot. I agree
GPT-4 is "fine tuned" by RL, and then the LLM updates the RL reward function. Impressive work!
Hm, in the interview Jim Fan said that they cannot access GPT-4 weights, thus fine-tuning is also out of question. That is why they are investigating open source models.
@@Hexanitrobenzene My understanding is GPT-4 is "fine tuned" by RL, it may be before they are given the weights.
Are you serious? These models can't be updated on the fly. Literally try to imagine that. Currently, according to the CEO of Anthropic, the second-largest AI company, these models cost tens of billions of dollars to train. Literally think of the cost… it is so mind-bogglingly exorbitant. Unattainable. Prohibitive. RIDICULOUS!!!!!!!! And still, is anyone sooooo stupid as to believe that something with no path to profitability is the future????
@@caseymathews6809 Yes.
Consider if they use the blanket exception when "your plan does not work", or if you are fighting a "ninja", according to Yann LeCun (in his most recent interview with Lex Fridman where Yann gaslights reinforcement learning - RL).
GPTs (or LLMs) still hallucinate, so spending billions of dollars on hallucinating LLMs is just not going to work for critical applications without some sort of reliable Reinforcement Learning (RL), and some corporations using GPTs are already getting sued for the hallucination failures.
Any application that serves a critical function cannot afford to hallucinate.
In the lawsuit between Elon Musk and Open AI it was revealed in a 2018 email that the "core technology" they are using was from "the 90s". RL research was funded by the USAF prior to 1997.
The key difference between "updating on the fly" and not, is the difference between a tool, and an agent.
A tool is passive, while an agent can evolve and adapt on the fly, and be much more powerful than a mere tool.
If part of the wing of an F-16 is shot off, a tool will not adapt and the jet will likely crash. On the other hand, an agent (RL) can, on the fly, adapt, and will have a much better chance for a controlled landing, than a mere tool.
@@sapienspace8814 Hm, I think I now understand what you meant. The thing to note is that those two uses of "RL" in your post mean different processes.
I'm not an expert, but... :)
As I understand it, first an LLM is trained in an unsupervised way, just predicting the next token across a large amount of text. Then you get a "raw" model. This is the part that is very costly.
Then the model is instruct fine-tuned to follow a question-answer format. And then the model goes through the "Reinforcement Learning from Human Feedback" procedure, where it generates a few answers and a different, small model (trained from human preferences) ranks the answers. This is the first "RL" in your post. These two phases are much less costly.
Now the second "RL" in your post is done by an entirely different model, optimized for robotics control. It's just the reward function which is generated by an LLM.
The authors think this process could be improved by fine tuning an LLM with a training set of reward functions, but fine tuning requires access to weights. GPT-4 weights are a commercial secret. That's why authors investigate the use of open source models.
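The pipeline described in this thread can be sketched as a toy loop (all function names and numbers here are illustrative stand-ins, not the paper's actual code; the real system calls GPT-4 and trains policies in a GPU physics simulator):

```python
# Toy sketch of the Eureka-style outer loop: an "LLM" proposes several
# candidate reward functions, each is used to train a policy in simulation,
# and the best one (plus its stats) would seed the next round's prompt.

def llm_propose_rewards(prompt, n_candidates=4):
    # Stand-in for an LLM call. Each candidate here is just a different
    # weighting of two fitness terms; a real LLM emits arbitrary code.
    return [lambda stats, w=w: w * stats["velocity"] - (1 - w) * stats["wobble"]
            for w in (0.2, 0.4, 0.6, 0.8)][:n_candidates]

def train_policy(reward_fn):
    # Stand-in for an RL training run in simulation; returns rollout stats.
    stats = {"velocity": 1.0, "wobble": 0.3}
    stats["score"] = reward_fn(stats)
    return stats

def eureka_iteration(prompt):
    # 1. LLM writes several reward functions at once.
    candidates = llm_propose_rewards(prompt)
    # 2. Each candidate trains its own policy in simulation.
    results = [train_policy(fn) for fn in candidates]
    # 3. The best performer is kept and fed back into the next prompt.
    return max(results, key=lambda s: s["score"])
```

The key point of the thread stands: the expensive pretraining happens once, while this outer loop only pays for comparatively cheap LLM inference plus simulation time.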
This helps me envision a future where home robotics constantly use DrEureka-style platforms to simulate different tasks and iterate on reward functions, using 3D scans of my apartment as well as data from cameras and other sensors. Maybe in the future housing could be built from the ground up to accommodate this kind of technology, for example pressure plates in the floor and that sort of thing.
To gain what? A better, happier life? Is that the WALL-E vision of our future? I really don't understand what AI-optimists are hoping to achieve. Theoretically we could have post scarcity by now. In practice many richer countries have it for most of their citizens.
To me it seems we are limited more by our greed and selfishness than by technology and intelligence. What use is the most advanced AI if it is a plaything for Americans and Europeans? Where does this optimism come from? The US can't even protect their citizens from the hardship of falling ill with a treatable disease.
0:23 Zooming in to the paper for two minutes was like a horror movie. After one minute I was sure something horrible would happen.
Haha, yeah won't do such a long zoom next time
I didn't realize why the beginning of this video felt so eerie but you nailed it
I had to immediately pause the video when you said that GPT-4 trained the dog better than humans. Jaw-dropping moment, and this is only the beginning
It’s training us right now.
@@turnt0ff This is actually so true. Think about the YouTube algorithm.
Wake up babe AI explained just uploaded
Someday soon: wakeup babe AGI is here
wake up babe, someone is explaining something about you.
I hate this comment, truly from the bottom of my heart.🤢🤢
@@albertodelrio5966 Can you explain it to me? I see it all the time but I don't get it.
@@therainman7777 It's a pretend scenario where someone wakes up their husband/wife to tell them about the video.
You are the best 👍🏾 AI channel!
Thanks for summarising this paper man. It's maybe my favourite since the simulacra for human behaviour paper. Excellent presentation as usual.
Appreciate your style, knowledge, and effort. This makes it all worthwhile for everyone spending a few minutes with your channel! What a time to be alive!
Thanks for the subtitles in the interview btw !
Periodic reminder that this is the best AI channel on YouTube, by far.
Not saying much...
Love your videos! Learned a lot! As always thanks for quality informative content 👍
Thank you MrSchweppes !
Self-driving is far more difficult than you think.
The Public will want and assume ZERO, or close to it, accidents.
It will take only ONE instance of a fully laden semi driving into a stopped lane of traffic at full speed and they will all be banned.
Brilliant as always, a lovely addition to a long weekend!
It feels like everything is really hinging on how much smarter OpenAI’s next major model is, it’s been well over a year now and we’re still training and performing tests using GPT-4 as the SOTA model!
I’m particularly excited to see some multi-modal improvements (as you already know).
I also can’t help but look at all of these papers now and wonder how much better they’d be with SmartGPT!
If it lives up to the hype, GPT-5 will be enough to take 100 million+ jobs, which I thought I heard Sam Altman say, but I can't remember the exact moment or interview. Probably from a Dave Shapiro video. GPT-4 supposedly took 100k+, but it's a bit hard to pinpoint and was covered up a lot to prevent panic. It's hard to imagine how much better GPT-5 might be than 4, and how fast it will accelerate everything... (hopefully it's smart enough to help me do most tasks I wasn't able to do with 4), and I think it will be agentic in nature or able to do much longer/harder tasks... but we will see.
Wow, I thought we'd need a lot of real-world examples and data to train something like this, but it looks like simulating it works so well. I wonder if simulation could also work for self-driving.
Feels like this accelerated AI robot development a lot
Tesla uses both, real-world data and simulation data.
Thanks for the info thats interesting
I can hear the excitement in your amazing explanation. I understand. This is big.
Fascinating stuff. Thanks for breaking it down 😊
I personally can’t wait for the robot servants and realistic looking robo-dogs 😎
So let me get this straight: these reward functions that are already absolutely crushing the equivalent human attempts were written by an almost two-year-old model. Furthermore, OpenAI only very recently received their first H200s, so I would think even their next model won't have had time to be trained on those. And not even that far down the line, Stargate is planned to come online in 4-5 years...
And those are "only" the fairly predictable hardware improvements that we know are coming. Meanwhile the entire world will be working on miniaturisation, algorithmic optimisation, architecture improvement and model specialisation.
That exponential curve is becoming clearer and clearer, and it's looking more like a vertical wall to me at this point. I've not fully bought into the intelligence explosion theory, but papers like this are rapidly convincing me. Thanks as always for bringing your thoroughly researched presentation and unique personal perspective. I think I'm even gonna read this one in full myself.
Do you believe the world will remain capitalist even after, the ai expansion, given no jobs?
No, that would be utterly stupid. Markets and capitalism are not interchangeable words though, just a reminder @@TFclife
@@TFclife I don't think the world will _remain._ If we're training these things to be better than humans at accomplishing arbitrary things in the real world, then if we succeed, we will cease to be relevant. And then probably cease to be.
@41-Haiku the next big step in evolution: the transition to synthetic intelligence.
@@41-Haiku Damn...
Thanks! Great content, as always! 🙏🏼
Thanks stephen
Really impressed by Jim Fan and his team. Excited for what may be coming in the next year
one of the few content creators I always watch and like.
This is a total WOW. Recursive improvement is a type of positive feedback cycle, right? And, unless constrained, those quickly become exponential. So, yes, rapid improvement in robotics is to be expected. Thank you very much for this and all your diligently researched videos.
As a mostly blue-collar worker who drives forklifts and does other physical activities all day, I thought it would be at least 6-7 years before robots could compete. I did not even know that writing reward functions, testing, and iterating was such a cumbersome task. The fact that LLMs can still do the reward functions despite hallucinations and can co-evolve with the robots is truly cool and scary at the same time. The only thing standing in their way is financial, energy and regulatory constraints. Lovely video as usual.
Thanks t2, honoured to have you here.
I started the podcast even though I read the episode name. I think there is a coming tech business relationship transition, 2:20.
Can you trust the educator unquestionably, interesting stuff, cool dog, reacting in real time.
Just thinking: it sounds like the most common mode of motion will involve a wheel-based platform for product distribution, with niche capabilities evolving in bipedal motion. Great update, physics lesson included. The guard rails are almost transparent in the world of AI/AGI.
As always thank you for sharing your time and work Phillip, ✌🏻
This is incredible.
amazing, as always
Let's goo, best thing when AI Explained uploads. IO predictions?
I have to say I can't get my head around a lot of these things. GPT-4 often just tells me "yes, that's a complex task and you will need these skills. Good luck!"
The versions of GPT-4 that we use are designed to minimize inference costs and avoid doing anything stupid or unethical. The GPT-4 that the researchers are using is probably a little more willing to try things.
You have to work it a little bit. The default one is lazy.
Perfect timing ❤
I was, in fact, in bewilderment, but for reasons I'm not proud of😂
I wonder how other models like Llama3 would do at this..
Magnificent work, really appreciate the jokes, and that you provide the simpleton version of all the big words too🤭🤗❤
Thanks reza, for your ongoing kind support
Very cool - thank you!
The dream utopia that AI can bring I envision is no one has to work but can if it’s as a passion but we all go into a meaning economy. That’s worth fighting for.
Good luck. We’re all gonna need it.
@@therainman7777 seriously.
I really wish this reality would come to pass. I think it is laughably naive to think things will go that way but I would like you to be right.
Sounds great... until you realise that means that you truly become a consumer, and have nothing to offer among systems that strive for efficient use of resources. Will everyone be blessed to live without work or just the wealthy nations or the already wealthy of the wealthy nations?
@@kyneticist You don’t know that. As negative as things could be the opposite exists. The only way to really know is to find out, but holding on to the hope of a positive AI effect on the world is just as possible as the negative.
Infinite time in-sim is all you need!
The answer was always a Hyperbolic Time Chamber ⏰🤺
nice dbz reference there!
5:54 This is literally how I thought they would do this. I've had a dueling theory of extremes and outliers; it seems to be playing out with the data and its implications on embodiment and the interpretation of physics in the real world. Awesome stuff.
Amazing work. So so cool.
These RL integrations are what I have been waiting for!
I've always had confidence that I could keep up. Now I feel like a pair of plain brown shoes in a world of tuxedos.
Time to install your cyber brain
Ditto George! lol
I seldom say this, especially in the AI space, but this is HUGE if generalizable! I mean ZERO SHOT! Ho-li! Thanks for that update!
It makes sense that an AI would excel at training robots, but it's still surprising. Great video! Regarding self-driving, Tesla recently saw a huge improvement once they finally adopted an end-to-end AI solution with version 12. They might actually solve it.
Kinda what I was imagining in my sci-fi future: like factories with "brains" or control AI and other robots transfer information all the time on what they need, repair and improve themselves and request needed materials from humans
Brilliant!
There goes my job security. Hard life being a yoga ball balancer
Love to watch your videos. One thing to note on this particular video: Tesla is leading the self-driving car race. Waymo, impressive as it is, is limited to mapped areas and isn't scalable IMO. Tesla is learning how to drive using an end-to-end NN
damn back to back uploads shit's real
Insane. Incredible.
Congrats on passing 250k subscribers by the way!
Thank you!
This is next level.
Serve me butter
On my nips!
Make me a sandwich... Sudo make me a sandwich :)
This is THE craziest thing I've ever seen, and I don't think I'm exaggerating.
One of the few things touted as advancements lately which actually seems like an advancement and not just generic hype. Terminators are on the horizon!
Terminators & anti-terminators
waymo is on tracks. Tesla FSD is actually impressive in a million unique settings
In what way is Waymo on tracks? I've ridden Waymo a total of 150 miles in a busy urban environment, and it handles every random situation I've seen thrown at it carefully.
Eventually, it will become practical to test every single scenario. We're living in one of those scenarios now...
The first example of real synthetic co-evolution!
Wow this is crazy!
When talking about "testing all the scenarios" people would be well advised to consider the ways that things are counted in computer science (and much of computer science is about counting such things). Numbers get ridiculously big very quickly (ref: wheat on a chessboard).
BTW, there's a version of the chessboard problem done in terms of mass that speculates that the amount of wheat on the chessboard could well be the amount of wheat cultivated by humans for all time.
12:31 tightrope walking
"The compute budget is the limit" -- there seems to be an emerging consensus around this recently
There is a notion called the "bitter lesson" that basically says that all the big revolutions in ai (chess, go, speech recognition, vision, etc) are the result of simply more compute and algorithms that leverage more compute, rather than fancy tricks with putting human intuition into the program (for example, a lot of early vision techniques tried to break things down into edges and polygons, and they work far worse than a modern-day neural network that learns from scratch)
While humans didn't write the reward functions, I am quite sure that there was a lot of back-and-forth with the prompts until good reward functions (and variable ranges for DR) were written for this task, this robot and this environment. In a sense, still a lot of domain knowledge, but you leverage an LLM to scale up "domain expert productivity".
Maybe I am missing something, but using LLMs like this for robotics seems very hacky. A cynic would say, It's almost as if someone was trying very hard to find some way, any way, to apply LLMs to gather attention and generate hype ...and attract funding.
This is so big that I would say it's one of the major leaps towards AGI. By doing this for every single task, you can optimize physically doing anything. Or rather this is conquering the real world. This combined with Sora's capabilities is gonna get you incredibly skillful robots. You can also do something similar for non-physical problems. You can solve a lot of things. Allow a good LLM the tools science has for experiments and it would invent new science.
Could invent new combinations or something interesting through simulations (and I think that's already being done), but it's still part of current science or some branch of an existing one.

Unless it discovers a completely new science that bends the laws of gravity and shows everything we knew was wrong, and finds ways to travel back in time or harness dark matter and energies that we don't understand. It would basically bring magic to this world, but it might not be AGI then. I wouldn't be surprised if something an ASI is capable of doing seems like magic to us, if it's really millions or billions of times smarter than all humans combined.

AGI might make some impressive discoveries along the way, but the most impressive thing AGI could do is make itself smarter and reach ASI, and then the technological singularity happens. Maybe everything happens within a year of achieving AGI, as some OpenAI employees stated it might only take a year to reach ASI after.

I still think there are a few components missing, like reasoning/logic (Q*?), but is this full self-improvement without human intervention? It ran simulations by itself, or maybe it's close to fully autonomous improvement on this specific task. It will be insane when it starts choosing how it wants to improve by itself, running simulations by itself for multitasking, and doing something different with each limb/finger (for bots with fingers)
yeah i kinda get the idea of singularity now. all points connected at once and BAM. new universe.
@@phen-themoogle7651 I don't think any AI will invent time travel into past or travel faster than light simply because those are breaking the laws of physics. But pretty much anything else is up for grabs. Harnessing dark matter sounds plausible, but we don't quite know what it is so we don't know if it's useful. An antimatter engine would be useful, that we do know. It is more than 100x more efficient than nuclear fusion and nuclear fusion alone would change everything we know.
We still need to understand what dark energy is, how to get quantum gravity, and what's wrong with the standard model, etc. There's likely new physics there that could easily be discovered by an AI that is 2-3 years more advanced than what we have now.
I've been predicting ASI for 2029, but it seems to be getting closer.
Well I'm glad GPT-4 didn't take the robo puppy to the gravel pit after the first minor setback!
What a time to be alive!
Language models are just smart. It is astonishing what you can do with them, especially when they're multimodal.
Presumably this works because GPT-4's training data included some robotics textbooks. While it's impressive that it can make such effective use of the material, let's not forget that that material was discovered and written down by humans.
AGI anxious notification gang!
2 AI Explained videos in 3 days. LFG
This is freaking remarkable.
super interesting
What is even more amazing than these mind-bending breakthroughs is the claim by some of these “AI Experts” that we are nowhere close to AGI
Wow thanks Philip! Love this
Amazing
nice, thanks! :)
You're back!
🎯 Key Takeaways for quick navigation:
00:00 *🤖 Overview of the Eureka concept*
- Training a quadruped robo-dog from simulation to reality using the GPT-4 language model.
- Language models like GPT-4 are more effective teachers for robots.
- GPT-4's reward functions outperform human ones in robot training.
02:20 *🧠 Implementation details of training methods*
- The GPT-4-trained robo-dog reacts effectively to new tasks not seen in training data.
- Use of domain randomization for realistic parameter ranges seen as crucial.
- The process details involve isolating variables, testing viable ranges, and domain randomizations.
07:28 *🎯 GPT-4 versus human training comparison*
- GPT-4 outperforms human-designed reward functions and domain randomization in robo-dog tasks.
- Language models like GPT-4 generate multiple reward functions simultaneously for continuous improvement.
- DrEureka's robot training shows significant performance improvements in forward velocity and distance travelled.
10:34 *🏋️♀️ Safety instructions and reward function design*
- Importance of safety instructions to prevent degenerate behaviors in robot training.
- Multiplicative reward functions generated by GPT-4 for robust training.
- Glimpse into the design of safety-oriented prompts and smooth reward gradients.
13:34 *🌟 Future improvements and implications*
- Discussion on enhancing the approach by incorporating vision and co-evolution.
- Speculation on the potential limits and possibilities of the Eureka approach.
- Predictions regarding AI's impact on physical tasks in industries and the evolution of robotics.
Made with HARPA AI
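The domain-randomization step summarized in the takeaways above can be illustrated with a minimal sketch (the parameter names and ranges below are invented for illustration; in the paper the ranges are proposed by GPT-4 after a feasibility sweep):

```python
import random

# Toy sketch of domain randomization: each simulated training episode
# samples physics parameters from proposed ranges, so the learned policy
# has to work across the whole spread, not just one fixed simulator.
DR_RANGES = {
    "friction": (0.3, 1.2),
    "ball_pressure": (0.5, 1.0),   # e.g. the deflating yoga ball
    "motor_strength": (0.8, 1.2),
}

def sample_sim_params(rng=random):
    # One draw per episode; a policy robust to all draws has a better
    # chance of surviving the sim-to-real gap zero-shot.
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in DR_RANGES.items()}
```

A policy trained only at one fixed friction value tends to overfit the simulator; sampling fresh parameters every episode is what buys the zero-shot transfer discussed in the video.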
Humans have now found and created a process that basically equals a generalized reward function. This generalized reward process is basically AGI/ASI already. I officially call the starting gun on the singularity.
Skilled trades were always going to be the last thing replaced. However, I always figured that the gap between "knowledge work" and "electricians" would be more than a few years, just from the manufacturing of the robots. Reality is the "extreme mode" simulation, and it doesn't care how you THINK things work - they either do or don't work in reality. The thing is, a "handyman" bot with 3, 4, 6 or 8 arms and 2 legs ... is going to be capable of ridiculously efficient work that humans won't be able to compete with. The 2 arms, 2 legs thing is cute, but a 4-armed professional painting bot is going to get stuff done in ways no human could imagine.
You are the best
thats some awesome robotics news
13:37 this is sounding eerily like takeoff
I think GPTs/LLMs may actually be better at training and operating robots than answering text questions. Even though all of the first applications are computer-based, I think robots will be a huge part of their future use.
Someone needs to apply this to the Robsen Transformers
I'm sure the fact that "DrEureka" sounds like a Bond villain is a *complete* coincidence.
I guess the obvious question is, *what does this look like when you apply it to LLMs?* Given Anthropic's interpretability research (superposition/the unit of analysis not being individual neurons but groups of neurons & monosemanticity/tying each cluster of neurons to a specific word or meaning it triggers off of)... it seems like not too soon, this might be something an LLM could do to itself to try to recursively improve itself (at which point, all bets are *off)* -- and even failing that, get an LLM to "debug" a smaller LLM could be very, very interesting. Wonder what it'd look like?
I think that OpenAI releases GPT-5 the week of June 10, or during WWDC. Epic arms race incoming!
"It just works" I wonder where I heard that before
Can it be convinced to allow the high level of openly operating net piracy to continue if it were also to become disappointed about it and is it going to choose to not participate if employed so?
Lol! If I saw a robot thrusting its pelvis into the ground and dragging its other legs while trying to chase me, I would pass out.
Wang Jo Wang dorm apartment background shows he recycles more Heineken than Budweiser.
I predict in one year the US or UK military will have a biped robot that can complete a complicated dexterous task like a Wing Tsun dummy routine, or Aikido hands drill.
Do you think, or have you heard of, any students trying fingers for piano dexterity?
Good report as usual. cheers
In this scenario, success is defined by standing on top of the yoga ball. This can be easily checked by a machine. But more complex tasks require a human to validate success, and some tasks require an immediate response. This is a barrier that AI will not get through in the near future, maybe never. I foresee an AI winter very soon.
What do you define as complex tasks?
Specifically I was thinking about autonomous driving, where you have a lot of inputs and you need a fast response. Sometimes, even some moral reasoning. There are currently some companies running autonomous cars, they have a lot of glitches.
"hallucinations" are what happens when you look at an amoeba and tell us how many legs it's got.
I don't think you need an LLM for a balancing task; plenty of negative-feedback control systems can already do balancing tasks that are hard for humans: balancing a unicycle on a string, balancing a pencil on its tip, juggling balls. On a more useful scale, maintaining a ship's heading in a rough storm also already uses a feedback control system; it can correct for both periodic error like waves and impulse error like a pocket methane spout. The difficult part is finding the correct feedback model parameters and the correct sensor data to make sure the system stays within the feedback controller's force range.
If the LLM can automatically generate the correct feedback model of the physical object, then it is a game changer.
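For contrast, here is the kind of classical negative-feedback (PID) controller the comment refers to, applied to a toy ship-heading problem with a constant disturbance (the gains and plant are invented toy values, not a real autopilot):

```python
# Classical PID feedback control: no LLM involved, just error correction.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt          # accumulates steady offset
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: a heading pushed off course by a constant "current" of -0.5;
# the controller steers it back toward the zero setpoint.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
heading = 10.0
for _ in range(200):
    correction = pid.step(setpoint=0.0, measurement=heading)
    heading += (correction - 0.5) * 0.1  # -0.5 models the disturbance
```

The commenter's point holds: the controller itself is decades-old technology; what DrEureka-style systems automate is choosing the model parameters and reward shaping around it.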
Self-improving AI is on the horizon, and unsupervised learning has become practical with today’s computational resources