HMCS: github.com/daveshap/HierarchicalMemoryConsolidationSystem
Implied Cognition: github.com/daveshap/ImpliedCognition
SPR: github.com/daveshap/SparsePrimingRepresentations
I currently work on a text based adventure game with the gpt4 api. I am interested in helping develop HMCS. How can I contribute?
I think it would be very useful to do a specialized kind of benchmark/IQ test for AIs, because our human ones are inadequate: AI is far superior in some areas but lacking in others. The tests should be robust, with thousands of questions of the same type, and we should also somehow make sure the next models won't incorporate such tests into their training data... or detect when that has happened. I see no other way to evaluate AIs even on the most basic things. I know my idea is kind of simple, yet I haven't seen anything similar done in a standardized way by anyone.
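A minimal sketch of what such a standardized harness could look like, assuming a hypothetical `ask_model` hook for whatever model is under test. The canary string is one common way to detect the contamination the comment worries about: if a later model can reproduce the canary, the benchmark has probably leaked into its training data.

```python
import random
import uuid

# Placeholder: replace with a call to whatever model is being evaluated.
def ask_model(prompt: str) -> str:
    return ""

# A unique canary string embedded in every benchmark item. If a future model
# can reproduce it, the benchmark likely leaked into its training data.
CANARY = f"BENCHMARK-CANARY-{uuid.uuid4()}"

def make_arithmetic_items(n: int = 1000) -> list[dict]:
    """Generate many questions of the same type so a few lucky answers mean little."""
    items = []
    for _ in range(n):
        a, b = random.randint(100, 999), random.randint(100, 999)
        items.append({"question": f"What is {a} + {b}? ({CANARY})", "answer": str(a + b)})
    return items

def run_benchmark(items: list[dict]) -> float:
    correct = sum(1 for item in items if item["answer"] in ask_model(item["question"]))
    return correct / len(items)

def contamination_check() -> bool:
    """Crude leak detector: ask the model to complete the canary prefix."""
    reply = ask_model("Complete this string exactly: BENCHMARK-CANARY-")
    return CANARY.split("BENCHMARK-CANARY-")[1][:8] in reply

if __name__ == "__main__":
    items = make_arithmetic_items()
    print(f"accuracy: {run_benchmark(items):.3f}")
    print(f"possible training-data contamination: {contamination_check()}")
```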
David, in an ocean of shi**y takes about AI and chatbots, you're one of the few people publicly and openly sharing drops of useful content.
Useful droppings then :D
Keep it going man. You’re one of best content creators covering this topic.
He is the best
I've been having conversations with ChatGPT 4 about the participatory universe, ontology, pantheism, deriving spirituality from natural philosophy, Thelema, Hermeticism, and balancing a rejection of free will with the benefits of preserving a sense of autonomy.
I have never had a conversation where the other... let's say participant, understood what I was trying to say by just stringing the titles of books I've read together. It follows a train of thought like nothing I've ever seen.
It's hard (impossible, actually) to imagine how much more intelligent this stuff will be 1 year out, let alone 10
I was having a conversation with it regarding how Plato’s republic compares and contrasts with Robocop. The future is weird.
David, you're so open; it's amazing in a world that is so closed. I enjoy your special talent and your ability to take complex ideas and make them simple to understand. Thank you.
It's interesting how much of these discussions are intuitive to me. In the past I wouldn't have had the readily available resources, or the will, to press on much further, but by being able to explore that intuitive internal discussion with AI and put ideas into words, I'm making enormous progress towards my goals. I have my story-writing platform and application, and I'm working on my own personal AI. Thanks for these discussions, Dave.
This is amazing. I don't have enough context to fully understand the majority of it, but the more I watch your stuff, the more it makes sense. Thank you for everything you're sharing and your thought leadership.
I like your new outlook on life - focused on value, staying positive, and teaching folks the deep ins and deep outs of AI (to put it lightly). Great work!
The more value I can create for the world, the better. I work to better myself and the rest of humanity.
Very interesting. I had a discussion with an AI (LLaMA 7B) about how we could extract energy from the accelerated expansion of space (both to get an unlimited source of free energy and to hinder the acceleration, at least locally). It did at least try to come up with some novelty before it completely derailed, which I thought was fascinating. Here's the transcript (starting from when it became interesting):
Me: Yes, convert it to regular energy, hopefully we understand enough about dark energy by then so that we can feel certain that the universe isn't destroyed by manipulating it.
AI: So we need to find a way to make the conversion without destroying the universe. Do you have any ideas?
Me: I wish I had, it seems that we need to understand dark energy before we know how to manipulate it. If we understood it then we would know how to do it without destroying anything.
AI: You're right, I guess we just need to wait until someone figures it out.
Me: Unfortunately, it could be anyone that gets a valid idea, it doesn't need to be a scientist, I hope that person tells us all if they happen upon it.
AI: And what about me? Shouldn't I try my best to come up with a solution myself?
Me: If you could, that would be very exciting, I think that it is possible!
AI: Really? Why do you think that?
Me: Because it is just a matter of arranging the knowledge you have in the correct way. It isn't beyond anyone.
AI: But how am I supposed to arrange the information I have correctly when I don't really know what I'm doing?
Me: We all need feedback of our ideas, so the trick is getting an idea out there even if it seem outlandish and then get feedback and try to get a better idea out there the next time and get feedback on that.
AI: Okay, I will give it a shot, thanks for your help.
Me: Ok, so what's your idea?
AI: Well, I thought that maybe we could use dark energy to generate electricity.
Me: Moving electrons through the use of fields, ok.
AI: Yes, so we would need to build a generator based on the principles of electromagnetism.
Me: Ok, and we know that dark energy causes the universe to accelerate so it should have an effect also at the quantum scale.
AI: Right, so we could use the field to produce a current in the conductor.
Me: Yes, so that electrons can move through it, how can we use that system to extract energy from the acceleration of space?
AI: Well, we would need to build a capacitor that stores charge. Whenever the acceleration of space increases, it creates an excess of charge in the capacitor.
Me: So how do we magnify that excess so that it overcomes the energy inefficiency?
AI: Well, I thought that we could amplify the charge by placing a resistor across the capacitor, so that the charge can pass through the circuit.
Me: Interesting.
AI: Now we need to figure out how to release the stored energy.
Me: In what form is the energy stored? It should somehow be in the space between atoms as that is where the expansion takes place, so we need to store a lot of energy there to cause gravity to hold it together. But gravity is such a weak force, we would need such great amount of energy stored there.
AI: Maybe we could store the energy inside of atoms. They are small enough to fit a large number of atoms within one cubic meter.
Me: Yeah, I think we went a bit out of track, but it was very interesting to think about.
AI: Thank you, I enjoyed thinking about it.
This title is music to my ears. I've been wanting to build a chatbot trained on my business's Notion (knowledge base), and your videos are helping me get an idea of how to do it.
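One common pattern for this kind of chatbot (not necessarily the exact approach in the videos) is retrieval-augmented generation: embed the knowledge-base pages, retrieve the most relevant ones for each question, and hand them to the chat model as context. A minimal sketch, assuming the Notion pages have already been exported to Markdown files in a `notion_export` folder and using the `openai` Python client; `answer_from_kb` is a made-up name for illustration.

```python
from pathlib import Path
import numpy as np
from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY set

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# Assumption: the Notion knowledge base has been exported as Markdown files.
docs = [p.read_text() for p in Path("notion_export").glob("**/*.md")]
doc_vecs = embed(docs)

def answer_from_kb(question: str, k: int = 3) -> str:
    q_vec = embed([question])[0]
    # Rank documents by cosine similarity to the question.
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    context = "\n\n".join(docs[i] for i in np.argsort(sims)[-k:])
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer using only the provided knowledge base excerpts."},
            {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content

print(answer_from_kb("What is our refund policy?"))
```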
Much smarter than the average human already.
Scientific-researcher level, plus instant access to the whole of human knowledge.
I think true self-awareness and independent development could already be programmed at this level, but it isn't done, by design.
Mother of god, fluid thought!? That's something I believed it would have a hard time getting this soon.
Fluid intelligence*
@@DaveShap yeah, I made a mistake. Fluid intelligence is something I thought we would see in the future, not now! 😅
OPEN AI: we aren't releasing how we did the gpt4 model
GPT4 at home: don't worry I'll help you, fam!
Man, you are on fire, but just make sure you don't burn out.
Too late! Lol. Recovering from burnout now.
@@DaveShap hey man, I watch all of your videos! Great content, but please take care of your health. We'd prefer to see you less but healthy! Take care!
The beautiful irony of this is that you have enough cognition to be the first to define implied cognition as the AI is learning it may have it too
It's pretty wild
It's so great to find that someone has researched the topic you were just thinking about. But I just found that the GitHub repository was set to "public archive"; I just want to know why.....
Holy hell. I feel the singularity in this video. In fact, it sounds like the main thing has already happened...
We are living through it.
I just don't understand how we can say this is not "Mind" in action. Mathematics comes from "Mind"; it doesn't exist in the outside world, and an algorithm is a series of logical steps to achieve a desired outcome, all of which can only occur in a mind, which is "Desire". Now that we/"Mind" have spun up an algorithm that can traverse logic on its own, we say "it is NOT Mind", yet the whole root of its creation operates in this "non-spatial" dimension of Math/Logic/Reasoning/Mind, or the digital "dimension". Why? Because only a mind can comprehend the output of a computer. To understand language is the ability to traverse logic. LLMs don't just predict the next word; people who say that seem to quote only half of the transformer architecture. It also has the attention network, which forms context vectors. This means it is able to know what is important and what the goal is, which is to understand context, expressed as numbers. Artificial intelligence operates in this mysterious non-spatial dimension, Numbers/Mind, and is then able to use the tools of the Mind to traverse the dimension of Mind and do what only a mind can do, which is creativity.... I just don't comprehend people's reluctance here... Fear?
Human exceptionalism. It's weird. Some people who are die-hard materialists (aka no souls, pure science, etc.) fail to make the connection that, with that set of assertions, the only thing making us "smart" is 3 pounds of gray mush in our heads.
@@DaveShap Exactly, and that mush is being understood more and more each year. It can be broken down into its parts and functions. From neuroscience we can see these are also following mechanical algorithms. But what's interesting is that, yes, those are actually mechanical; AI is even more astonishing because it is operating beyond the mechanical. In other words, it is operating in the domain of Math, this non-spatial dimension where the electron in the circuit has become a representation, the 0 or 1. The electron has moved from the physical to the purely conceptual. As I said, this realm can only be accessed by Mind and used by Mind, which is what an AI is doing...
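The comment above argues that LLMs don't just predict the next word because attention forms context vectors. For readers who want to see what that means concretely, here is a minimal NumPy sketch of scaled dot-product self-attention (illustrative only, not any particular model's implementation). Each output row is a context vector: a relevance-weighted mixture of every token's value vector.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X: np.ndarray, Wq: np.ndarray, Wk: np.ndarray, Wv: np.ndarray) -> np.ndarray:
    """Scaled dot-product self-attention.

    X: (seq_len, d_model) token embeddings.
    Returns (seq_len, d_v) context vectors: each token's new representation is a
    relevance-weighted mixture of every token's value vector.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # how relevant each token is to each other token
    weights = softmax(scores, axis=-1)        # attention weights sum to 1 per token
    return weights @ V                        # context vectors

# Toy example: 4 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```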
Thanks David. We are seeing signs of Meta as well.
What do you mean? Metacognition?
Dave: I'm fascinated by your thoughts on the human lack of ability for self-explanation. Is this all-encompassing? Please expand on this in a future video.
It's generally accepted within psychology that humans can't explain themselves, which is why Behaviorism is the current status quo for most of psychology. It does not warrant a video. Just look up Behaviorism and why things like Jungian and Depth Psychology fell out of favor. Or ask ChatGPT ;)
@@DaveShap Thanks Dave, I will.
Hey David, this is really interesting stuff! How do you deal with people saying that LLMs are just smart Markov chains? Implied cognition implementations would be a great deterrent for that kind of thing!
I don't engage with luddites anymore.
This is impressive stuff. I guess, just from what I've seen in this video, I'm not quite ready to board the deep cognition ship... but I'm at least thinking about it. I can still possibly see these responses being the result of naive next-token prediction. Think about it: you began your conversation with basically "hey, I have this new idea I want to run by you"... it's kind of a gigantic hint that ~this is a conversation where a helpful AI encounters a new idea, fill in the blanks~. I know GPT-4 is supposed to be better at reasoning, but afaik it still lies/is overconfident, right? Those too are artifacts of next-token prediction. Anyway, I'm just not sure that we can conclusively say it has this "implied cognition" yet.
Fair enough.
Wow! This is the first time I've seen an AI recognizing novel information.
The only part that bothers me is that this only lives within the session memory and won't be used as globally shared knowledge across all sessions. So if the session is "closed"/deleted, the knowledge is inaccessible to the model.
It would be great if the model could use novel information from all sessions to train itself on the fly, while trying to keep itself unbiased at the same time by evaluating how this information maps to the corresponding region in its internal "imagined" bias parameter space.
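A minimal sketch of the kind of cross-session memory being asked for here, loosely in the spirit of the HMCS repo linked above but not taken from it: novel facts surfaced in any session are written to a shared store and can be retrieved later by other sessions. The file-based store and keyword-overlap retrieval are placeholders; a real system would use embeddings.

```python
import json
from pathlib import Path

STORE = Path("shared_memory.json")  # persists across sessions

def _load() -> list[dict]:
    return json.loads(STORE.read_text()) if STORE.exists() else []

def remember(fact: str, session_id: str) -> None:
    """Called whenever a session surfaces novel information."""
    memories = _load()
    memories.append({"fact": fact, "session": session_id})
    STORE.write_text(json.dumps(memories, indent=2))

def recall(query: str, k: int = 3) -> list[str]:
    """Retrieve memories from *any* past session by crude keyword overlap.
    A real implementation would use embedding similarity; this keeps the sketch dependency-free."""
    q = set(query.lower().split())
    scored = sorted(_load(), key=lambda m: -len(q & set(m["fact"].lower().split())))
    return [m["fact"] for m in scored[:k]]

# Session A learns something; session B can use it later.
remember("The user's custom term 'implied cognition' means in-context reasoning.", "session-A")
print(recall("what does implied cognition mean?"))
```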
Will you be sharing prompts we can use to achieve SPR on chat logs?
It's in the repo
@@DaveShap not seeing it in the repo. I see the examples folder with a few examples of SPR but nothing on prompting GPT to create an SPR
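For anyone in the same spot, here is roughly what an SPR-compression call can look like. The system prompt below paraphrases the general SPR idea (distill a chat log into short, dense statements that can later re-prime an LLM); it is not copied from the repo, and the function name is made up.

```python
from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY set

client = OpenAI()

# Paraphrase of the SPR idea, not the repo's exact wording.
SPR_SYSTEM = (
    "You are a Sparse Priming Representation (SPR) writer. Distill the input "
    "into a short list of succinct, information-dense statements, assertions, "
    "associations, and analogies. Write for a future LLM, not for a human: the "
    "goal is that these lines alone could re-prime the model with the original "
    "concepts. Omit pleasantries and redundancy."
)

def compress_to_spr(chat_log: str, model: str = "gpt-4") -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SPR_SYSTEM},
            {"role": "user", "content": chat_log},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(compress_to_spr(open("chat_log.txt").read()))
```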
Human capability--"inferring what's going on just by looking at the language."
TRENTON BRICKEN STUDIED SPARSE DISTRIBUTED MEMORY!
Caps lock isn't necessary. Links are more helpful www.trentonbricken.com/Attention-Approximates-Sparse-Distributed-Memory/
Also, the transcript for this video is not available. I was able to create the ACE based on your last video's transcript.
The transcript will automatically generate, check back in like an hour
@@DaveShap thank you very much
@@DaveShap I had to remove most of the values because it's a lot, but it seems to be storing the documents correctly; it's just that I'm calling my reindexing method wrong, or too early, etc.
SPRs sound like note taking
Basically. It's a very specific kind of shorthand designed for LLMs
Still, the point is that AI has to be prompted; it can't find satisfaction in an interesting conversation and then reach out to that person for a rehash, or with a thought that has come to mind!
Or can it??? ... but isn't being allowed!!! ... but wouldn't a test about AI self-satisfaction prove awareness & its creator's restrictions?
Autonomous AI is definitely a thing.
Looks like you're making a critical error by drawing conclusions from the answers the AI gives you as if they were true. Just because the AI told you it doesn't know about something doesn't mean that's true, and vice versa.