Thanks for having me Mahon! I really enjoyed our conversation.
So, I would argue NN training can be qualitative. The GAN paradigm, for instance, uses examples to coax the discriminator into defining the generator's objective function.
Humans aren't good at multi-parameter optimization either!
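The point above, that the discriminator learns the objective from examples instead of a human writing it down, can be sketched on a toy 1-D problem. This is a minimal illustration with invented parameters (a real-data mean of 3.0, a shift-only generator, logistic-regression discriminator), not any particular framework's implementation:

```python
import math
import random

# Toy 1-D GAN sketch: the discriminator learns the loss signal from examples,
# so no human hand-writes the generator's objective function.
random.seed(0)

w, b = 0.0, 0.0   # discriminator: D(x) = sigmoid(w*x + b), real -> 1, fake -> 0
theta = 0.0       # generator: g(z) = z + theta, with z ~ N(0, 1)
LR_D, LR_G, BATCH = 0.2, 0.02, 64

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-x))

for _ in range(3000):
    real = [random.gauss(3.0, 1.0) for _ in range(BATCH)]        # target examples
    fake = [random.gauss(0.0, 1.0) + theta for _ in range(BATCH)]

    # Discriminator step: one logistic-regression gradient update.
    grad_w = grad_b = 0.0
    for x, y in [(x, 1.0) for x in real] + [(x, 0.0) for x in fake]:
        p = sigmoid(w * x + b)
        grad_w += (p - y) * x
        grad_b += (p - y)
    w -= LR_D * grad_w / (2 * BATCH)
    b -= LR_D * grad_b / (2 * BATCH)

    # Generator step: climb the gradient of log D(g(z));
    # analytically, d/dtheta log D(z + theta) = (1 - D) * w.
    g = sum(1.0 - sigmoid(w * x + b) for x in fake) / BATCH
    theta += LR_G * w * g

print(theta)  # drifts toward the real-data mean of 3.0
```

The generator never sees a hand-written loss; its only training signal is the discriminator's current opinion, which was itself fitted to examples.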
Ones and zeroes are all that digital can give you. They have no inherent meaning. All the meaning is the human input and output. Even then, garbage in vs garbage out.
When you cry watching a movie, that's ones and zeroes. When a chess bot beats the best human players, that's ones and zeroes. Calling that "garbing out" is just being in denial.
@@olemew 'garbing' 😂
@@gauravtejpal8901 "it's garbing time" Jared Leto, probably
Always pays to think the worst with AI. You know, because of existential threats and all that.
Sam Altman vs Sam Tideman
😂😂😂 I'll take the Tideman
@@MahonMcCann plot twist, both are AI generated ;)
We will launch an SAI spaceship.
It will have all our knowledge and tools. Its mission will be: Explore, learn, innovate, design, protect, keep in touch!
The utility function has been clear all along... they want to build a machine that builds itself. The utility of the AI they are trying to build is one that can read and write its own code, the idea being to bootstrap a 100x Einstein mind that can solve all physics and invent new mathematics... the imagery stuff is just a showcase; they aren't buying 350,000 H100 chips from NVIDIA to make art.
You are talking about something we already have. My agent writes its own code, updates its own training data, manages its reward function and goals, and retrains itself as needed. Most of it runs on my local GPUs for just the cost of the wattage.
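The loop described above (logging its own training data, tracking a reward, re-tuning itself as needed) can be sketched in miniature. Everything here is invented for illustration, since the commenter's actual system is not public; a single parameter `theta` stands in for "its own code", and the agent hill-climbs a reward it tracks progress on:

```python
# Hypothetical sketch of a self-tuning agent loop. All names and mechanics
# are assumptions made for illustration, not the commenter's real system.

class SelfTuningAgent:
    def __init__(self, reward_fn, theta=0.0, step=0.5):
        self.reward_fn = reward_fn   # the goal it tracks progress on
        self.theta = theta           # stand-in for "its own code/weights"
        self.step = step             # current nudge size
        self.history = []            # self-collected "training data"

    def retrain(self):
        # Deterministic hill climb: try a nudge in each direction and
        # keep whichever candidate scores best on the reward.
        candidates = [self.theta, self.theta + self.step, self.theta - self.step]
        self.theta = max(candidates, key=self.reward_fn)

    def run(self, iters=50):
        for _ in range(iters):
            reward = self.reward_fn(self.theta)
            self.history.append((self.theta, reward))  # update its own data
            self.retrain()                             # "retrains as needed"
            if self.step > 1e-3:
                self.step *= 0.9                       # anneal the nudge size
        return self.theta

# Toy goal: reward peaks at theta = 2.
agent = SelfTuningAgent(lambda t: -(t - 2.0) ** 2)
final = agent.run()
print(round(final, 2))  # converges near 2.0
```

Note that in this sketch the reward function itself is supplied from outside; the agent only *tracks* it, which is exactly the distinction the reply below asks about.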
That's very interesting. How does it manage its own reward function? Does it assign itself a reward function, or track progress on the reward function you specify?
@@agenticmark "my agent writes its own code" What's writing right now as you're reading this question?
@@agenticmark sounds like a fun project, but the theory is that if they can build enough brain-function similarity, consciousness will automatically be emergent inside the machine's nodes. Each H100 Nvidia superchip costs the price of a Tesla, and they have ordered 350,000 of them to build out an AI node. This node will be trained on what I imagine is the richest source of human data: YouTube videos. A highly resource-hungry form of language-model training, but ultimately the best.
The first real path to AGI is giving the machine the task of designing its own hardware architecture with a fairly unlimited budget, and seeing what it comes up with!
Says a man who believes in God lmao
What do you believe in?
Quite worthless. You will realise how wrong you are soon; of course AI can figure out value.
What is value?