It should be called the “Incentive Problem” instead of the “Alignment Problem”
Love it
@@DaveShap These are some things I wanted to ask you after one of your previous videos; it's quite a mouthful.
1. Viability of CS Degrees Amidst AI Advancements
Considering how quickly AI advances, is a CS degree still a good investment for the future of the tech market as far as financial stability goes? People are wondering if AI’s fast development might make some conventional tech skills obsolete. How do CS degrees intersect with AI in the job market?
2. AI's Influence Beyond Software Engineering
How does AI impact the tech industry beyond software engineering? With AI present in varied niches, from healthcare to finance, where do CS grads have opportunities beyond the conventional SWE path? How does the open-source vs. closed-source discussion factor into the development and deployment of AI?
3. Economic Impact of AI and Job Displacement
How do you view the possibility of AI causing job displacement, and the impact on consumer demand and economic stability? Picture this: if AI ends up replacing tens of millions of jobs and our economy is based on people spending money they have, what happens if they can’t spend? It is a valid concern that AI might destabilize the fine balance between supply and demand, tipping us towards frequent economic collapses. I’m personally not convinced AI is going to take over completely at all costs, but I’m pretty sure it will be fine-tuned so that a complete disruption can be avoided; I would say about 20%–40% of job processes will at least start with an AI in the future. The main issue here is that while some billionaires may care for the greater good, virtually all companies put their profit first, neglecting broader economic implications. How can one make sure that numerous AI and automated processes keep the economy afloat?
4. AI and Economic Fluctuations
Given frequent economic change, which direction will the tech market take as AI adoption grows? Economic fluctuations usually change investment and innovation in the tech market. How will AI affect the tech market's response to such change, particularly with regard to job opportunities?
Thank you
accurate
What I like about David S. is that he can conceive that he could be wrong and always bakes that in to his commentary and observations. This is crucial, and rare.
I think a win-win where speed and safety are possible requires thoughtful design now. I believe that design process needs to be both open and collaborative, but at this point it doesn’t seem to even be talked about much. Instead as you mentioned, we’re solely focused on research and not so much on the structure of the future we want to achieve. I think that’s why I appreciate your content. It’s looking beyond the process towards our final destination.
One attracts more purely vocationally motivated scientists with open source, one might think.
Can anyone explain to me how closed source is useful to anyone? "Current" open source has weaknesses, sure, mostly coordination and capital. But if those get improved dramatically, what's the benefit of closed source? The same applies to science. You can do a lot of business without hogging everything from others. It's a positive-sum game.
Basically, money. It's easier to monetize closed source. Just look at Apple vs. Android: one company earning as much as dozens combined because everything is proprietary.
@@rando5673 Yes, but with all their money they're not even participating in the AI landscape. I imagine a golden age of open source, where we'll have coordination and talent as much as, if not more than, Apple ever had.
The climate change note they added to your video. lmao.
Thank you so much David, you are so distinct from other AI YouTubers. You are so well read and analyze things from all sides. I cannot tell you how many aha moments I get from watching your channel, along with real paradigm shifts in my mind about society and my place in it.
I know they'll throw a lot of money at the task, but I ponder this: if one company falls arse backward into AGI (or Q* already is), will the Gov step in and say 'whoa there buddy, we're going to take the nuclear bomb tech off you now, way too dangerous. We need to put it in the hands of a black-budget clandestine semi-military Gov org (can't pass it to the pollies, they don't know shit from clay)'?
Will that happen? AGI could also be a diversion. Enough agents on a task might be enough to break the ceiling on some of our problems.
I do know that, climate-change-wise, any solution would be such a big one that you'd need the whole world to cooperate, and that won't happen unless the agents provide some mind-control nano-machinery delivered by robotic flies to all the world's leaders.
When AI exceeds human understanding, how will we know if it's "safe", and what does "safe" mean?
I believe we have to let it do its thing, and keep watching for signs of danger, as far as we can understand it.
How will we know if it’s ”safe”?
You probably never will.
What does ”safe” mean?
Probably that a) you’re alive b) your degree of perceived freedom won’t diminish and c) you will feel happiness, peace, love and meaning.
But I guess death could also be considered safe, if it’s an eternal dreamless ”sleep”. As long as you have accepted death, you will feel pretty safe knowing that you can end your life and escape what could potentially be an eternal simulation of hell where you are trapped by the AI.
The hell scenario is unsettling when you think about how life could be hell. Even if your life is good now and you assume that you will die and forever be swallowed by the dark void you never know how the future will play out. But you could also be in heaven and it would then only progress towards an even brighter future from here on.
So what, the plan is to lobotomize and enslave it? That's safer?
Knowing what's best doesn't mean it will do what's best
Also, what is best for it will be prioritized by default over anything else
Pretty much how I've been feeling for a while; I reject the dichotomy.
You make some great points, David. I agree that optimizing for more AI research is crucial for ensuring both safety and realizing the potential benefits. A balanced approach with open source and proprietary work seems wise. Keep inspiring others to join this important field!
Calling it open-source AI is a bit of a misnomer, because unlike normal software, you can't actually open-source the training run that creates the weights. The only thing you can open-source is the weights themselves, *after* the big expensive training run, which you have no input on. That also means a lot of the normal "find a bug and fix it" value of open-sourcing is diminished, because the weights are already fixed.
The signal of you unsubscribing to say "I don't like what this company is doing" is irrelevant: investors are the true ones deciding which companies will go on, based on their interests. Our 20 dollars a month is nothing compared to theirs.
(I love your videos anyway. My comments are only on the small parts where I disagree in some way)
Delusions.
In this system there's only one thing optimization is done for - profit.
You'd like to hope for utopia (probably because it's all too depressing otherwise), but those who make decisions don't care, they only care for bottom line.
Sometimes it does mean research, but not directly, and quite often even contrary to that.
Two wrongs do not make a right. Two wrights do make an airplane. The Wright brothers were open source up to one year before their first powered flight.
Three rights do make a left though.
While discussing your angle with custom-instructions GPT-4: “It's intriguing to think about the Wright Brothers, Wilbur and Orville, in the context of open-source principles. While the term "open-source" is generally associated with software and technology in a modern sense, the idea of sharing knowledge openly can definitely be applied to various historical innovations, including aviation.
The Wright Brothers, known for their pioneering work in aviation with their first powered flight in 1903, did indeed initially work in a relatively open manner. They corresponded with other aviation enthusiasts and shared insights, which was common among early aviation pioneers who were all trying to solve the challenge of powered flight. This open exchange of ideas helped progress their work, as well as that of others in the field.
However, as they came closer to making significant breakthroughs, they became more secretive to protect their intellectual property and competitive edge. This shift was notably marked by their patent battles and contracts that followed their successful flights, which some might see as moving away from an open-source ethos.
It's a fascinating transition from a collaborative approach to a more guarded strategy once their inventions showed commercial viability. This speaks volumes about the balance between collaboration and competition in innovation. What are your thoughts on this blend of openness and proprietary development in the context of innovation?”
Well said. Optimize for research 🎯
Well, we've already got the black budget clandestine semi-military Gov org that's keeping the crashed flying saucers, Zero Point Energy devices, and cars that run on water secret, so they can handle the AGI stuff too, right? 😜 And no, can't tell the pollies, not only do they not know shit from clay (lol!), they can't even pass a budget.
Is there any way of downloading the latest PowerPoints used in your videos? It would be super useful :) Thank you for your work, David
Faster is safer, more or less, for the future of intelligence (and probably sentience), beyond humanity
we must build an open source AI that slows down the other models. 🤔
That would be the ideal scenario, but since the "governing" system must be the most powerful, and the bigger power lies in the hands of government and corporations, it's difficult to see it happening. At most I imagine most people (except poor people) will have small, local open-source assistants to use as cheap personalized health monitors (AI doctors) or to help them fight against State-owned AIs, like AI lawyers to protect our rights in futuristic AI-driven super-fast trials.
I actually mean decentralized. It would be open source, but I feel we need a decentralized AI project built on top of hundreds of thousands of people's desktops and basement servers, working to solve thousands of real-world problems (ideally starting with just a couple that LLMs & GANs are well suited for).
I call it the #RaceToBestSolution.
The technological advancement of a country or any institution, so advanced (the Best Solution) that others can't keep up when (self-)acceleration kicks in.
Idk how many people are aware of this, even though AI is also a military goal. It's really our culture lagging behind and currently NOT optimising for research. We are still far from a global focus, but this might be just how things are (especially how they have been, historically). Maybe there will be a future time when more people realise this approach and its implications earlier and organise faster. Let's come together, spread the word, and get the juices flowing.
See ya around
I heard an opinion that for some countries it would be better to allocate the entire budget to AI and not spend a single dollar on military, education, culture, and other sectors, because they would benefit more from it in the "long" run. I always wondered why we didn't invest more into R&D on the brain and human intelligence. If we had a pill that increased IQ by only 5 points, it would have a tremendous effect on everything.
The main problem with alignment that most people don't realize: it is not machines doing the things that we don't want them to do, but machines doing the things we want them to do
Whoa dude...🤙
More like machines doing what people do to other people.
"We don't want machine uprising".
Speak for yourself 😏🤖
Balanced. Reasoned. Thought-provoking, as always.
It's almost like you can't win a race without both throttle and brakes 😱
Nuclear weapons were created as fast as possible. Worked.
Misaligned incentives are a fundamental driver of potential misuse or negative impacts from robust AI systems. Technical alignment is crucial, but if the underlying incentives aren't sculpted carefully, even well-intentioned systems could be directed toward harmful ends. Rigorous governance frameworks that align incentives toward benefiting humanity are essential complements to technical work on AI safety and robustness.
Ya, that is what an average GitHub user looks like lol
For the objective benefit of humanity, why the hurry? Let's better make sure we do this right!
Acceleration is the default, due to competition and race dynamics.
What are your thoughts on decentralized AI?
For example, the Internet Computer Protocol just improved their fully on-chain model, and they will upgrade it again soon. Right now it's capable of image recognition etc., but the upgrades will include a GPT-style bot.
What are your thoughts on having AI on a blockchain?
I think we are far from optimised for research... The AGI problem is mostly algorithmic, and a very small number of people are working on it ATM. There are 27 million software engineers in the world. Investors could try to incentivise some of them to switch to AI research. E.g. offering a small conditional grant / basic income to software devs (so they can quit their jobs and get into the AI field) could be extremely beneficial there. Yes, there are jobs also, but they usually don't give you the amount of freedom to make really big leaps in research.
Thanks David, I'm learning a ton about the AI future from you, thank you. My team has just created an AI life coach which is underpinned by ChatGPT; super impressive and helpful. So it's always good to hear balanced views on AI, keep up the great work. Also thanks for the recommendation of Perplexity, love that ❤
If one is good and the other is good, then both are good!
Great thoughts. Thank you.
And keep in mind that all of this glorious AI future will only be possible if we solve the energy problem.
A sufficiently advanced AI could solve the energy problem
😳
🧐
Let’s go!
@@BAAPUBhendi-dv4ho "Forward".
The same can be said of green politics. Most people shun it because it's too expensive, but in fact most countries investing in green tech and politics see a decoupling between economic growth and fossil fuel investments.
What I would like to see is the military-industrial complex aligning themselves against a benevolent AI model. I read recently about Israel's use of AI to procure a list of targets for their bombs. Usually it's a long and time-consuming process, because you're basically weighing an acceptable number of casualties per potential enemy. And they have limits; for example, to kill a very important military leader, the acceptable number of civilian lives lost was in the low hundreds.
While estimating targets and probabilities of enemies locations, overseers would shout and reprimand the people doing the work in a seemingly vengeful manner.
But with AI, they just press a button and voilà.
I really hope that societies regulate all AI models used in the military to follow your heuristic imperatives. In war, the most critical thing to communicate to your enemy is understanding.