Hands down the best AI history channel in the world
@@rickandelon9374 THANK YOU … no "top ten amazing things this week" here :)
Agreed.
Seeing this video at 466 views currently and shocked it doesn’t have hundreds of thousands if not millions. Awesome video
Second that, this vibe of food for thought is awesome to me!! Keep going!!
I messed up something with my upload, algo did not share it :( ... yet
I hope you enjoy this video, please let me know what you think below. 👇
STAY TUNED & SUBSCRIBE: Next video on REASONING
FULL AI series: ruclips.net/p/PLbg3ZX2pWlgKV8K6bFJr5dhM7oOClExUJ
Thanks Jane Street for sponsoring. They are hiring people interested in ML: jane-st.co/ml
SUPPORT AOP: www.patreon.com/artoftheproblem
the way you introduce the REAL AI to the world, nice job
finally, it's about time
Seems like reinforcement learning's been on a wild trip since forever, but the way Brit breaks it down? It's like he's got a secret map of the RL universe. He makes the crazy journey from old-school ideas to today's stuff actually make sense. It's like watching history unfold, but you know, without falling asleep!
@@belibem :) it was indeed a huge mess to untangle … notice I cut all the model-free detours
I've only watched a few of your videos so far, but I've fallen in love with your in-depth yet easily understandable explanations of how things work, which discoveries led to which innovations and how, and the way you avoid both the unreasonable hype of techbros and PR departments and the equally unreasonable pessimism and negativity of people like Yann LeCun and Noam Chomsky.
this comment means a lot, thank you. I try to stick to my lane and provide value where I can
THANK YOU, this means a lot to me. If you can help share my new video around any of your networks today it might catch fire and would help me support the channel: ruclips.net/video/PvDaPeQjxOE/видео.html
@@ArtOfTheProblem I don't really do the whole "social media" thing, sorry.
@ leaving a comment and like is more than enough ! ( me neither :)
HALF an HOUR?! Why was this not shown in my subscriptions feed?! Most captivating content on the tube! Thank you
Thank you! please help me share it, I don't know why the algo ignored it this time. Perhaps the click-through rate or something.
I can't believe I just rewatched all your videos and then you've just released another one.
what a treasure
Another great video. It's super interesting to see that DeepMind is attempting to figure out how much real-world learning vs. simulated learning is optimal while LLM researchers are simultaneously asking questions about the use of "synthetic data". Naively (if the "synthetic data" approach proves successful at scale), it seems to vaguely point towards a further generalization in the machine learning field. I think a great follow-up video to this one would be about multimodal models, and maybe at the end discuss the idea of synthesizing this robotic action model with something like ChatGPT. Or maybe not, just spitballing.
EDIT: just read your pinned comment, seems like you're already a few steps ahead of me on this, not surprised
love this.... thanks for sharing your thinking, it helps
this is an awesome overview! loved every second of it. Would have expected this to be at 1M+ views
@@quirinschweigert7794 thanks I worked super hard on this one, please help me share :)
The music that starts @ 25:40 provides a nice transition and nicely conveys the future potential of the technology.
thanks! I tried hard to make sure the music didn't 'get in the way' of the content
I really liked the historical perspective on how RL started. It helps stair-step my way up to modern day concepts :)
All your videos are excellent. Congratulations.
Another amazingly lucid video. Thank you! By the end it feels like we're just getting started.
:) Yes definitely, I actually had a whole other part on world models (model-based) I had to cut, so I'm setting up the next video for that
@@ArtOfTheProblem love it! Can't wait.
Among the best content out there, thank you 🙏
Thanks! did you check out the latest video? I just posted a follow-up
You have a magical ability to explain with such eloquence and clarity that you make me feel intelligent. All of that (and your previous videos too) led up to the moment at 19:45 when you explain domain randomization: "you actually need less precise simulation". That realization felt like an explosion in my mind. Thanks for your channel, man
@@kingdodongo4126 woo! So glad that moment worked, I remember when I first figured that out too
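A minimal sketch of the domain-randomization idea from this exchange: draw new physics for every episode, so the only controllers that score well on average are the robust ones. The toy task, parameter ranges, and random-search training below are all invented for illustration; real systems use far richer simulators and actual RL algorithms.

```python
import random

def episode_return(k):
    """One episode of a toy 1-D 'reach the target' task with randomized physics."""
    gain = random.uniform(0.5, 1.5)      # actuator strength differs every episode
    friction = random.uniform(0.0, 0.3)  # so does friction
    x, target = 0.0, 1.0
    for _ in range(20):
        u = k * (target - x)             # proportional controller with gain k
        x += gain * u - friction * x     # deliberately imprecise, randomized dynamics
    return -abs(target - x)              # reward: negative final distance

# Random-search training: keep the controller that does best ON AVERAGE
# across many randomized worlds, not in any single precise simulation.
best_k, best_score = 0.0, float("-inf")
for _ in range(200):
    k = random.uniform(0.0, 2.0)
    score = sum(episode_return(k) for _ in range(30)) / 30
    if score > best_score:
        best_k, best_score = k, score
print(f"robust gain k={best_k:.2f}, avg final error={-best_score:.3f}")
```

Overly aggressive gains diverge in some sampled worlds, so averaging over randomized physics automatically selects a conservative, transferable controller; that is the sense in which less precise simulation can help.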
Really great video! Awesome summary of the history of RL… very clear. Nice job.
thanks Jim!
General intelligence that doesn't need to be trained offline in a simulation or on a static dataset to improve entails an algorithm that learns from experience, in real time. With the exception of maybe functioning as a perception module, backprop-training basically has no use for such a thing - and even as a perception module to interpret vision/audition it will be limited to seeing and hearing only stuff that it has actually been trained on. What we need is a whole new paradigm, a whole new learning algorithm, that learns from scratch, from experience, how to do everything. It's right around the corner, we're right there, and it seems like everyone is still distracted by what will one day be the "compute-expensive antique brute-force method of making a computer learn", which we know today as backpropagation.

Predictive learning algorithms are what my money is on. It's just a matter of working in the ability for them to learn behaviors, and relying on compact, compute-efficient sparse distributed representations. Sparse Predictive Hierarchies are the closest thing I've seen so far, but their fixed log2 prediction interval at each successive level of the hierarchy means they learn the same patterns over and over, when you want something with overlap, so that it learns a temporal pattern is the same pattern no matter what time it started at. I also think that instead of having a fixed scaffolding which knowledge forms over, the scaffold itself should be built as a product of experience. Something more like MONA. The problem MONA has is that its perceptual inputs are limited to clustering entire sense vectors, so that if it sees a ball in a room during the day it won't think it's in the same place when it sees the same ball in the same room at night. Individual portions of senses must be treated as equally important, not the sense as a whole generating a single input signal. MONA's method of input leaves no room for generalizing perception, only volition.

People have been experimenting with promising novel algorithms, getting us closer, but a lot of people/corporations nowadays are just looking to get hired or make a quick buck and so are only dealing in backpropagation, when that's not the way forward. It's a lateral move that will invariably result in a dead end, eventually. It will always have its place, and its uses, but it's not going to result in autonomous sentient robotic beings that do everything that only humans can do. A real-time learning algorithm whose learning and abstraction capacity is limited only by the hardware it's running on is coming. Nobody knows how to build it yet, but I estimate less than 5 years until it comes to be, and it's going to blow everyone's mind.
Yes. We need a neural network architecture that enables learning on the fly. Biological brains adjust parameters in real time. Instead of pretraining, the training is done continuously in response to new inputs. I imagine this would be computationally expensive though and possibly impractical.
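A minimal sketch of this "no separate training phase" idea: a one-parameter model that updates on every single observation, so it can track a world that drifts underneath it. The drifting process, learning rate, and update rule are all toy assumptions for illustration, not a proposal for the architecture the thread is asking for.

```python
import random

# A one-weight linear model with no pretraining phase: it nudges its
# parameter after every observation it receives, online.
w = 0.0     # the model's only parameter
lr = 0.05   # learning rate

def true_process(x, t):
    # The world itself drifts over time, which a frozen pretrained model
    # could not track but an online learner can.
    slope = 2.0 if t < 500 else -1.0
    return slope * x

for t in range(1000):
    x = random.uniform(-1, 1)    # a new input arrives
    y = true_process(x, t)       # the world provides feedback
    pred = w * x
    w += lr * (y - pred) * x     # online update: adapt immediately
    if t % 250 == 0:
        print(f"t={t:4d}  w={w:+.2f}")
```

The printout shows w converging toward 2.0, then re-converging toward -1.0 after the world changes; the cost the comment anticipates is real, since every inference now also pays for an update.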
Thank you for your videos, I love the way you carefully present historical information to build up to modern ideas, it is better than any other channel out there. Keep up the great work!
thank you, it's a ton of work and I appreciate this
new video is out, would love it if you could help me share it around; I only have 24 hours left for the algo to catch it: ruclips.net/video/PvDaPeQjxOE/видео.html
Thank you for the credit at the end. You compressed the data well and thus, the info regarding the value function was more easily understood, in my opinion.
@@AdamJeffries-r4f thank you Adam !
My absolute favorite channel!
thank you! I have a whole new topic coming next week. did you just find me or this video specifically?
11:00 when you hear this music, magic is about to happen
:)
I love this channel so much. I only wish you made videos faster, but it's always such engaging content that I can see why it takes a while
appreciate this
AI changes on a weekly basis and it's hard to keep up, but you ground it all to its roots. Thanks for your videos and history.
@@NanoAGI yes I know the feeling , I don’t see others doing this so happy it helps
Brilliant work!
thanks mom
His videos have helped my students get interested in science, AI and computation!
@@Lightconelabs thrilled to hear it, what ages?
Awesome stuff!! I just love the way you explain things 🙏💕
I feel like I'm closer than ever to actually understanding AI 😅😅
thrilled to hear it, thanks Kaleb
Really enjoy watching computer science content, particularly in the subfield of AI. Please don’t ever stop ❤️
Appreciate the support. Consider supporting AOP: www.patreon.com/artoftheproblem
Fantastic and engaging as always.
@@__m__e__ glad you enjoyed
great video as always!
One of the greatest inspirations for me is you, sir, thanks a lot ❤. Love you from India
thank you! glad you found this
I agree :) also, if you can help share my new video around any of your networks today it might catch fire and would help me support the channel. I appreciate your help! ruclips.net/video/PvDaPeQjxOE/видео.html
Awesome work my friend! It's hard to wait for the next video to come out. ❤
Fantastic Video mate, keep up the great work!
lucky me for this video today!
After a long time ❤️
hope you enjoy
I'm an inventor who has started to work on industrial robots (mostly for warehouses). This was excellent. I'd like to suggest a video that formally treats Moravec's paradox. Specifically, why the computational methods that have been used on language and games, such as Monte Carlo tree search and autoregressive generative methods, don't work in physical space. If you have time, you could also explore why geometric approaches to this problem, such as CAT(k) spaces, work in two dimensions but fail in three. I'd love to see a historical treatment of why we seem so far from, say, a robot that could do your dishes and so close to, say, a computer that could be your child's math tutor.
@@posthocprior thank you ! I will indeed follow this thread
new video is out, would love it if you could help me share it around; I only have 24 hours left for the algo to catch it: ruclips.net/video/PvDaPeQjxOE/видео.html
Such a good video man, really enjoyed it and subbed ✌🏼
thank you for the comment, the algo seems to not like this video!
@@ArtOfTheProblem It happens
Again the top ❤!
This was an incredible video. Thank you!
thank you for sharing, glad people are finding this
FYI consider supporting future content via www.patreon.com/artoftheproblem - thanks again
It's amazing how reinforcement learning works. If you're interested in learning more about AI, other tools may provide you with more features and insights.
FYI consider supporting future content via www.patreon.com/artoftheproblem - thanks again
All time great video, thanks so much
@@ronak14p thanks!
The substantial but systematically overlooked problem with the method in this video remains that solving real-life problems autonomously requires a system built from identical fundamental code/data sets with the inherent capability to figure out what to do from just a few guidelines (potentially even a single one), reaching everything through a uniform rating/feedback system that is rewarded (similar to putting an extra bead into the box of the most successful move). That, however, requires us, creators of potential future AI systems, to identify a general marker of success - potentially a method to measure the actual complexity of a set of code/data, giving a "bead" to the most complex fundamental code/data set. That's what I've been working on for some 15 years, by the way.
cool do you have anything i can read?
@@ArtOfTheProblem Some of my verbal notes (partly in Hungarian :) are uploaded to my channel, but nothing tangible yet. Understandably, they are not watched at all. As I once said: it doesn't matter how quickly you can run, how skillfully you climb, or how strong, resilient and determined you are if you don't turn precisely in the correct direction before taking even the very first step. I really doubt that anyone currently has a clear and valid idea about the definition of life (even matter, actually), intelligence and consciousness, or the relations between these assumed categories. We want to solve a huge crossword in an ancient language that nobody speaks anymore, where every word crosses every other and the definitions are merely moods, dreams and birdsong. As for myself, working in this field most of my time, I've been trying just to turn in the right direction for the last 15+ years without taking any actual steps (e.g. writing a line of code), letting others climb endless walls and run sideways vehemently - with pathetic results like generating mindless eye candy or imitating a (pretty limited) human's responses. I assume that I will KNOW when I'm ready to take a step… not even further… just to take an ACTUAL step at all. Once it actually happens (and it is coming closer this year) and it's the right direction, I will upload videos with some tangible results.
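For readers who missed the reference: the "extra bead into the box" image in the comment above comes from matchbox-style learners like Michie's MENACE, covered in the video. A minimal toy sketch of that reward scheme, with every specific here (one state, two actions, the success probabilities) invented for illustration:

```python
import random

def choose(box):
    # Sample an action with probability proportional to its bead count.
    actions, counts = zip(*box.items())
    return random.choices(actions, weights=counts)[0]

# One state, two candidate moves; move 'b' secretly succeeds more often.
box = {"a": 10, "b": 10}
success_prob = {"a": 0.2, "b": 0.8}

for _ in range(500):
    action = choose(box)
    if random.random() < success_prob[action]:
        box[action] += 1   # reward: drop an extra bead into the box
    elif box[action] > 1:
        box[action] -= 1   # punishment: remove a bead (never empty the box)

print(box)  # 'b' accumulates far more beads, so it gets chosen far more often
```

The open problem the comment points at is not this update rule but the reward signal itself: MENACE gets a crisp win/lose outcome, whereas a general "marker of success" for real-life problems remains unsolved.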
Thanks for the excellent video... I started watching your Info theory & machine learning videos as a wide-eyed kid who loved the idea of AGI. After reading Asimov (and other, more controversial authors) and living through GPT/DALLE, I became convinced that AI is not the way forward. Nonetheless, I remain a huge fan of your videos. They are by far the most informative videos on the topic I have ever seen.
@@lejb8962 thanks for sharing, what path ahead are you excited about?
@@ArtOfTheProblem Oh, me? I'm into homesteading now; I've become a luddite fundamentalist type. 😅
@@lejb8962 love it!
@@lejb8962 i haven't owned a phone since 2009
new video is out, would love it if you could help me share it around; I only have 24 hours left for the algo to catch it: ruclips.net/video/PvDaPeQjxOE/видео.html
I'm reinforced to hit the Like button on all your videos
The RUclips algorithm is not doing this video the justice it deserves. Maybe Google needs to up their reinforcement learning game for RUclips recommendations.
thanks :) I was frustrated with this video not getting shared at all by the algo. My only guess is I was messing with thumbnail ideas when I first published. Part of me thinks I should republish it, but I never do that....
Wow!! I sent this to my kids.
thanks for sharing
18:16 damn he's bussin it down
Very cool
whenever I see your video, I just click it
This is a great essay. Thanks! BTW, I think of RL as the opposite approach to gradient descent. With gradient descent we look at the error and use the chain rule to update weights. With RL we add noise to the weights, then test for error. BTW, you can use action tokens in transformers. In other words, the token is the representation for an action. We can collect actions as tokens from a customer service representative's actions, placing them within a transcript, for example.
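A toy contrast of the two update styles described above, minimizing f(w) = (w - 3)^2. One hedge: "add noise to the weights then test" really describes perturb-and-test methods (closer to evolution strategies or hill climbing) than RL in general, but it illustrates the distinction the comment is drawing.

```python
import random

def loss(w):
    return (w - 3.0) ** 2

# 1) Gradient descent: use the derivative (chain rule) of the error directly.
w = 0.0
for _ in range(100):
    grad = 2.0 * (w - 3.0)   # d/dw of (w - 3)^2
    w -= 0.1 * grad
print(f"gradient descent: w = {w:.3f}")

# 2) Perturb-and-test: add noise to the weight, keep it only if error drops.
w = 0.0
for _ in range(1000):
    candidate = w + random.gauss(0.0, 0.1)
    if loss(candidate) < loss(w):
        w = candidate
print(f"perturb-and-test: w = {w:.3f}")
```

Both land near w = 3; the difference is that the first needs an analytic error gradient, while the second only needs the ability to evaluate the error, which is why noise-based search fits settings where rewards are all you have.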
❤❤
Instead of training RL models from scratch, we appear to be pivoting to combining LLM knowledge with action space choices to form pseudo RL models. Is this the best way forward?
It seems like LLMs have pulled attention away from traditional RL techniques for improving general systems, right as we were developing better and better pure RL systems
@@whatarewaves this seems to be the case and I’m tracking this as we speak
We have come a long way. But we have miles to go before we sleep.
I watched all of your RUclips videos and yet RUclips didn’t recommend this to me? Weird
right! I have no clue why
glad you found it
wow these robots playing soccer are beyond cool...and cute
This video was too short bro I need to watch it twice
Thrilled to hear it.... the script was so, so long but I had to streamline it (removed all model-based methods)
FYI consider supporting future content via www.patreon.com/artoftheproblem - thanks again
How long until we have robots that look like Haley Joel Osment walking around?
To be clear there's no actual reward or punishment occurring, even simulated, it's just selecting for or against a particular response/state. These are just the silly words used for this approach.
define actual reward
Where is your degree in machine learning?
For the algo
Amazing work 👏 Unfortunately we already dug out complexity in these areas in the 1300-1500s, then in eccentric movements of thought creating English, English law, steam engine archaeology, etc. etc. etc.
This is why the great debate was warned ⚠️ No one would have a religious vs. science issue when Darwin & evolution anthropomorphized grand unified theory is ancient old-world beliefs lol
Thermodynamical systems, like say the electronic plasticity of brain organoids injected dead with rabies still playing ping pong, have nothing to do with learning or feelings, any more than dogs/valves mechanically switched on and off, or a Polaroid image flashed onto a canvas picture.
This analytical y-axis dualistic brain + primordial self-soul agency energy density within humanity triangulates thermodynamical systems similarly, but it's not the same; it's just 1 part of many.
Things like curses and blessings, standardized weights and measures, addition and subtraction, emerging energetic properties, E=mc², idealistic forces, faith and physical lawisms, works we plagiarize and correlate effortlessly, prescribed upon the world until the 1500s, when we learned how effortlessly we were overcoming horizon paradoxes.
Shocked to learn how things like the Mosaic commandments, English law, and moral realism were really in the thermodynamical systems in the world around us.
1900s structuralism, platonic wartime posterity, everything-physicalism, everything-starts-in-Greece revisionist history curriculum; the great debate anthroposophy was one last time exhausting old-world beliefs' math mapping.
Pre-1500s name & order, face-value dualistic form & shape, 1890s-2010. Living in whataboutism nihilisms, as if math were foundational; a judge-a-book-by-its-cover era movement-of-thought exercise.
Right before, it was all about building the library & museum with a singularity fetish because we obviously use a letter. The old Assyrian-Babylonian-Greek three-body problem in space; woo-woo uncertainty is not here-on-earth realism, it's a pretentious clocklike broken tune of weights and measures.
Since the mass displacement of Europe, Asia, Africa into America and the UK, it made sense to help these immigrants in school and succeed as new borders drew new nations.
So now we have a very pretentiously informed perception in our society when we need the best long-term decision-making skills
It's unfortunate that separatist puritan pilgrim classical Americans were pushed out into hardware that knows the key to the cosmos, esoterica, America, longitude and latitude better than what colleges incentivized and draw from, which is very anthropomorphized
I still think these systems could be better.
definitely, but I think we are at the cusp of a big leap
Lol. Yes.
That’s what motivates people to learn them.
Wow... that background noise!!! Really?
Still haven't solved the long-term memory problem, I see 😂
Yrlui
learned to feel? hmm
@@ginogarcia8730 thoughts on a different title? I've been experimenting, though I like the analogy of the value function
@@ArtOfTheProblem nah I thought it was bad clickbait but you know what you're doing fasho, so it's fine to keep it. It does catch the eye. And I think somehow with LLMs they can 'feel' some thing when hinged on emotional words and the connotation of some words.
@@ginogarcia8730 if you have another non-clickbait title let me know, as I'm still not seeing good click-through on this title. "THE ROBOTS ARE COMING" :)
I remember trying to run character recognition software back in 1992 or 1993
Love it , how did your experiments progress ?
@@ArtOfTheProblem My brother's best friend's dad happened to travel to the US a lot, and on one trip they bought one of those scanners with wheels (similar to a Geniscan 4000) which came with a 3 1/2" floppy disk with this character recognition software. IIRC it was made to only recognize printed letters, not handwritten ones. It was a standalone program, separate from the one used to scan pictures, with a DOS-shell-like UI where you had to load the jpg image and it would spit out a txt file as a result. I managed to make it work, but I remember my 386 PC constantly crashing when running it.
No mention of AI anywhere in sight; it was just cool software back then