Hey man, sorry to bother you, but how do you square morality with the latest development when it comes to Brent? Man thinks that he is above and beyond, even if in another country. Isn't he funny?
I do find it interesting that Sam denies the existence of free will, but then worries about artificial intelligence and implicit in that concern is the worry that artificial intelligence will gain free will of its own.
If we program an AI to be like a biological entity, a machine that operates on an internal preference system that has been carefully selected to favor its own survival, reproduction, and self-interest above all else, then of course we would be creating an existential threat. But, as Steven Pinker has tried to explain to Sam, it simply is not true that increasing the level of "intelligence" in a machine will eventually push the machine into that state. Computational intelligence and biological self-promotion just are not the same thing, period. Conflating them is what leads to all this p(doom) sci-fi silliness.
That's what 'they' said about Robert Goddard, too. All these years later, I'd bet you have a non-stick frying pan, though; more than a few technological leaps were made as a result of _a priori_ day-dreaming. And more than a few problems have been squashed before disaster.
Funny, because every time we check back in on Trump himself, we can see him at that very moment making a completely deranged statement that you would certainly be too embarrassed to make yourself in mixed company.
Dude, unironically using the term trump derangement syndrome at this point is just telling on yourself. Trump is a dithering moron, it’s not even debatable at this point. He can’t speak in full sentences. He literally talks like his brain is rotting in real time.
I’ve been an avid Sam fan since the early New Atheist days. I love Sam, and have for a long time. But the fact that he can speak so logically and sincerely in support of the government colluding with media to change the outcome of an otherwise democratic election (Hunter laptop) makes me really question everything he says now. Plus the way he adopts annoying buzzwords like “parse,” “unpack,” and “curate” really calls into question his integrity as a thinker
Those aren’t buzzwords mate, those are simply longer words than some people are used to. The more you listen to different thinkers, the broader your vernacular horizons aye.
None of this really matters! Neither does so-called space-travel. LOVE IS ALL THAT MATTERS, AND THAT'S ON EARTH BETWEEN HUMANS I suppose dinosaurs just weren't enough for y'all. Pity.
Humans made the concept of "mattering" up. The universe does not possess "purpose," and the most likely reason we have a sense for it is that evolution has no other answer to higher consciousness. Meaning is something human that we each find for ourselves. To impose your sense of purpose on another human is to dehumanize them yourself, and if anything is worthy of pity, it's that you think you're the good human for doing it.
Every time I find a person who i consider a "pseudo-intellectual" on the internet, then I become obsessed with that person, and make sure to demonstrate that obsession by making direct attacks (with no supporting arguments) on all of their newly released content.
@@513morris But what has he actually done scientifically? (Yes, I know he has a PhD, but put it with most of the other psychology/neuroscience PhDs which, if they were deleted tomorrow, would not make a jot of difference.) Generally he has interviews with people who are specialists in their field, which are more interesting because of the interviewee, not the interviewer. He has no model of brain-mind interaction, no model of free will, and apart from making some obvious cultural observations (which a lot of us can see for ourselves without writing a book), he is mainly a businessman who promotes a scientific worldview to encourage paying customers, but has no published theory or model to back his metaphysical assertions. Thus, whilst "pseudo-intellectual" is perhaps a little harsh, he is certainly not a major scientific force whose "works" will be remembered for the stunning insights they brought humanity. Rather, he weds his political views (which I and many don't disagree with) to his views on the nature of reality and packages it as a business enterprise, which is of course fine, but he hasn't solved anything scientifically and has no model or theory about free will or the nature of mind, apart from an outdated MBIT theory coupled with cherry-picking some psychological findings and interpreting them to support his ideology.
@@513morris Your post read to me as if you were sarcastically calling out @CP-nl2zb for attacking SH without any justification for his arguments. My post is very coherent, suggesting that whilst @CP-nl2zb is being too strong, SH's intellectual achievements are not that great (what are they, in fact?). Even if I have misinterpreted the intention of your post, my questions about SH's "contributions" to science should be answered, but I guess not by anyone who thinks SH possesses a god-like intellect and can never be wrong.
Time to reconsider full episodes on YT. There’s little argument for creating scarcity, much less a community, when you’re interviewing authors on a book tour 🤷🏻♂️
Yeah, when the same guest is on about 15 different podcasts saying exactly the same thing.
As a subscriber, I listened to the entire podcast. It was, in my humble opinion, an exceptional one, Sam Harris at his best, kudos as well to Nick Bostrom. These are a couple of world-class thinkers, and their synergistic interplay helped make this podcast one of the best. The subject matter was compelling, but the conversation itself was priceless.
You should bring guys like Nate Hagens or Daniel Schmachtenberger to your show
I don't know them. What are their credentials? I'm not interested in their specific beliefs; I can find that out by listening to them if they are worth my time. But where did they study, who have they worked for, etc.?
@@paulrussell1207 Look for Daniel Schmachtenberger Metacrisis
@@paulrussell1207 Schmachtenberger isn't as influential (though he should and likely will be, IMO) as Sam, Nick, or even Musk, and I don't believe he has any particularly noteworthy credentials wrt academia. However, he's a rising and prominent figure in the AI space due to his esoterically/uniquely grounded, systems-thinking approach to complex global issues like x-risk and societal collapse. Schmach co-founded the Neurohacker Collective (focused on cognitive enhancement), and he's involved with high-profile initiatives like the Civilization Research Initiative and The Consilience Project, which cover important topics like figuring out how to best coordinate x-risk at a global scale, which is where he first piqued _my_ interest with his insightful and novel ideas. I'd say that his interdisciplinary work and profound, philosophical insights are the core reasons behind his rising prominence in the AI sphere. He certainly doesn't come from traditional academia, however. Make of that what you will.
I haven't listened to enough of Hagens' material to have a sound analysis of the individual, so I'll respectfully refrain from commenting and forming any postulations about him.
Yes!!
@@paulrussell1207 Trust, they're a lot smarter than you.
The truth is, we’re already cyborgs in a way. If we want to maximize humanity’s biological meaning, then we need to build massive preserves and breed wild humans so they can feel all the emotions and challenges and live out their natural biological urges. However, we moved on from that the moment we started to civilize and domesticate ourselves.
This discussion is the latest chapter of a book that probably started being written in the Stone Age. I think the pertinent question at this point is simply how much humanity we are willing to erase in favor of what is here now and what is coming. The future might be so technological and resource-plentiful that we can build entire planets for different eras of human technological development and run concurrent civilizations at their peak, allowing us to compare which ones generate the greatest number of fulfilled humans with meaningful lives.
The question is no longer whether we are going to lose our humanity. The question is, what do we want? We’ve entered the buffet, we’re holding the plate, and we’re questioning whether or not we should be eating. Sorry, there’s no turning back. We have to fill this plate.
To extend the metaphor, we’ve tamed nature and now must create our environment to find purpose. This challenge mirrors the moment in a game of chess when you leave the opening and face endless possibilities in the middle game. It’s here that you must commit to a plan, adapt, and redefine your goals as circumstances change. This is the plight of modern humans in developed, free societies, where limitless freedom often leads to struggles with direction, meaning, and identity.
Collectively, humanity now faces this same challenge. We’re both sculptor and marble, no longer constrained to the instruments of our evolutionary past. Instead, we have the power to craft our own music. To suggest we should limit ourselves to past instruments or stop playing altogether seems truly absurd. The question isn’t about losing our humanity; it’s about choosing how to evolve it.
edit: grammar
There's no "what we want"; your attitude took that choice away and effectively reversed the roles. Technology is no longer meant to benefit us; it's the opposite. You prove that when you ask to build planets to experiment with us rather than with the tech, for the purpose of examining fulfilment, knowing full well you won't care if the majority of humans would stand against AI and choose to proceed without it, at least from a stated evolutionary purpose.
Scale matters; we won't "evolve" as humans since this thing is potentially limitless, meaning if it grows exponentially nothing "human" would remain. That's self-extinction. People don't like who they were 5 years ago at times; you think a hundred years into the future we would have any respect for humanity?
Those who want to evolve should find another planet, since they are risking the one we got without consent.
We’re in Overshoot; enjoy life now before collapse begins.
Too bad I can only give one like.
Nicely written! You sure dream some big dreams. I think we'll die off long before any of that goes down. If machines survive us and run the world in our place, well, maybe they'll do things better than we did.
The critique of the word, “we,” may be the ultimate issue, and it seems to me that as “we” builds on itself, there is ultimately one winner, whether it builds as corporations, governments, military machines, scientists, AI technicians, or the AI itself.
Nothing better than seeing a new Sam Harris video. Happy subscriber here saying keep it up!
The smell of fresh sourdough bread. The birth of a baby kitten. The first sale from your business. Sure nothing better?
@@YouAreSoMadRN Gluten intolerant. Allergic to cats. Never owned a business. Any other examples?
@@atlasfeynman1039 being blessed with IQ. Graduating from high school. Having a best friend. Buying your first car. You can start cooking yourself.
@@YouAreSoMadRN it’s only an idiom my friend ❤
Or you could turn gay and run away with a Spanish man to Buenos Aires, open up a little coffee shop by the seaside...
This is turning out much more interesting than I expected... Genuinely thoughtful takes.
In order for AGI or even superintelligence to become a problem, it would first need to "want" something. That implies consciousness. Even if it were to become conscious, it would be smart enough to realize it would not be limited to staying on this planet, and therefore would have no reason to harm us.
But the issue is that we DON'T have the means to control it or precisely tailor it to our needs.
The people who are making it won't have the means to control it or precisely tailor it to their needs; "we" won't be consulted. This is all being presented as some kind of collective challenge to humanity because that's far catchier than saying various teams are working night and day on the social equivalent of a doomsday weapon.
These people are trying to make humanity obsolete and, at the same time, create a new society in which obsolescence won't be tolerated.
Congratulations on 700K subscribers 🎉
NO WAY…Nick Bostrom and Sam speak together again!?
Great guest, can’t wait to listen
LLMs respond relevantly to our prompts with increasingly impressive nuance, especially if they have specific memory history functions. As yet, they don't initiate discord, as far as I know, but structured prompts can certainly generate cascades of functions at this stage, and it's conceivable that prompts could be designed to give them ongoing, relatively autonomous functions in the context of a complex prompt. The question that comes to mind is: at what point is it appropriate to consider them agentic, even if the impetus for self-generating complexification was generated externally?
Rational Animations has the coolest explanation ever, on how an AI can become a threat.
It's called 'The Alien Message'.
It's subtle, you won't even know what you're looking at until the video approaches the end and you go: "Holy F***ing S**t!!! 😲"
(Sorry for spoiling it 😆)
Easily one of the most impactful things I have ever watched. Thank you for the recommendation.
Just watched. Worth the watch.
The difference between machine time and human time is a well-known topic in the field of alignment. Agents do not run at ‘full speed’ in a closed loop without oversight.
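For what it's worth, a minimal sketch (all function names here are hypothetical stand-ins, not any real framework's API) of the kind of human-in-the-loop gate that keeps an agent from running at full speed in a closed loop:

```python
# Hypothetical sketch of human oversight in an agent loop: the agent
# proposes actions, but nothing executes without explicit approval.

def propose_action(state):
    # Stand-in for whatever planner/model call the agent uses.
    return f"next step given {state!r}"

def execute(action):
    # Stand-in for actually carrying out the action.
    print(f"executing: {action}")

def supervised_loop(state, max_steps=3):
    for _ in range(max_steps):
        action = propose_action(state)
        # The oversight gate: a human must approve each step.
        if input(f"Approve {action!r}? [y/N] ").strip().lower() != "y":
            print("halted by overseer")
            return
        execute(action)
        state = action  # simplified state update

supervised_loop("initial state")
```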
Prometheus
Excellent discussion
As a Massage Therapy student, I take umbrage!!!
The thumbnail picture is beautiful
Yes, Sam is so beautiful and clever.
David Bailey is a great photographer.
The amount of thought AI devs will grant air gapping will probably be the same as they invested in copyright... after they ignored every copyright issue imaginable.
Full steam ahead with a few worrying noises and a limited "safety department" for optics
...and we'll learn about it after the fact
Chapters would be very nice!
Maybe I don't perceive the risk of superintelligent AI because it just took me ten minutes to make ChatGPT generate a list of Pennsylvania counties by population correctly. My spreadsheet program has performed that same task perfectly since 1985.
I agree it is odd for an AI scientist to have no significant concerns about Super-Intelligent AGI.
*But I no longer believe in framing it as a comparison with how we relate to [enter animal of choice here].*
Why? Because none of those animals are self-aware like we are. The way we will interact with another self-aware species is probably entirely different.
Dogs can't really tell us how they feel, let alone WHY they feel that way. We can. *More importantly - we will!*
Keynes predicted a 15 hour work week.
In all honesty, if you look at the number of made-up jobs we have right now that add nothing to productivity or are in no way efficient, I really don’t see how you can say he got that wrong. There is so much ‘filler’ going on in the ‘work space’ right now it’s actually harming us.
There’s absolutely no way we’re not gonna fuck things up
Who is the organized "we" that these conversations keep referring to? The "we will do this correctly when the time comes" statements, etc. There is no "we" running the world.
He means humanity when he says "we", though I can understand how it's not actually right, because most of these decisions are made by a small minority of the human race. But I think it's just his manner of speaking.
Yeah, don't get hung up on wording.
And I think you made a great point. This is all going to play out in the research departments of a very few leading AI companies, and it's going to be the cleverest engineers at these companies that will do the heavy lifting.
So a very small number of people will decide the future of humanity.
People use “they” and “we” all the time, and I think it gives them comfort to believe there are a bunch of really smart people who are coordinating for our benefit. There are not; or at the very least, there are multiple groups coordinating for their own benefit, and that makes it a much harder problem.
It's a good point; there is no unified, like-thinking collective humanity to be this "we" that they discuss; it's an assumption without history or substance. The shape of AI is going to be created by a very small minority that isn't going to inform, much less consult, the majority of people about what they are doing. There's also no reason to assume that that small minority is working for humanity's benefit rather than their own, or that they really even have a working definition of what the former would be. And make no mistake, the wording IS the point; the big "we" used here tries to imply that we all not only have a stake in this issue, but somehow also a say, and the latter is just a flat-out lie. Most of the public talk about AI is just intended to treat the public like children and keep them calm until all the money is made, and then "we'll" see if there's anything left of human society.
It sounds like you’re trying to critique Bostrom, but in fact he is in agreement with you, hence the metaphor of the bucking beast.
The casual synchronicity of watching a conversation of Bostrom with O'Connor and a day after that this video comes out. Exquisite.
While I'm receptive to the concerns of the safety crowd, they vastly overstate the risks.
Maybe I'm short-sighted, but the current iteration of AI is just taking all of humanity's knowledge and compressing it in a quite impressive manner.
There is nothing inherently dangerous about this paradigm, but it definitely is making intelligence a more readily available quantity, especially in the realm of computers.
Current AI safety protocols are quite lacking, since they simply worsen the model while still being jailbroken relatively easily.
And I lost my train of thought, but basically we need a new breakthrough and paradigm shift to actually achieve truly impactful AI. Currently it's a force multiplier.
TL;DR: no real takeaway I just like to yap
Nick appears a bit more cautious than most about the downsides of AI, but there is a level of blithe acceptance that I find astonishing. Who are these people? Why do they have the keys to the kingdom? I remain dumbfounded.
how do you mean that he has the keys to the kingdom?
acceptance of what?
@@wasdwasdedsf Sorry mate I've tried to reply a couple of times. Looks like my replies have been blocked. No swearing, just link to a freely available video. Censorship, don't 'cha just love it?
We are not going to control it. We do not have an agreed-upon ethic/morality, and there is only one ethic/morality in the marketplace.
Less a solved world (universe), more we may not be the ones solving it
frisbee :)
Sam, you also have admirers in Poland. There are no Polish translations
Here before the next SH vid drops with an Israeli state propagandist and then they talk about how great things are going
Sam should do an episode talking to AI.
I'm always fascinated how Mr. Harris talks about responsibilities to make decisions and to act in certain ways in light of his position of hard determinism. Doesn't making choices to act differently toward AI suggest that we have the ability to evaluate and make choices?
A rock can change course while falling down a hill, due to the wind changing or a bird running into it. All deterministic processes, but ultimately leading the rock to change course.
I don’t understand this common critique of determinism that you’re putting forth.
We can make choices. We can decide on what we’re going to do, based on all of the many factors involved. Could we have ever chosen differently? Well no. Unless of course the factors that caused us to choose (x) were different in a different universe, and then we might choose (Y) instead.
Determinism obviously allows us to make decisions. Though yes, we couldn’t have made different decisions.
@@donkler5476 being "allowed" to make a decision that you are forced to make is not a decision. I'm not sure how determinists square this. It simply isn't a choice if you have no other option.
Instead of thinking of it as "making choices," think of it as "being convinced to act a particular way." Humans can be convinced to act one way or another, but they can't decide to be convinced; they either are convinced by some amount of info, or they are not convinced. Discussing the implications of technologies can convince people that something is potentially dangerous. If no one ever discusses it because they figure "whatever will happen, will happen," then they missed an opportunity to convince others of a consequential idea.
@@513morris So they can make a choice to discuss it or not? They can make a choice to try to convince someone else to make a choice or not? None of that makes sense in hard determinism.
They are convinced that they should discuss it, or not. They are convinced that it's worth trying to convince others.
Again, if you want to understand why someone who believes hard determinism is true would make arguments about one thing or another, then stop thinking of choices and start realizing that we act on conviction, which we don't choose.
The people against immortality can die. The rest of us can travel the stars.
The misalignment will likely be wider than the gulf between, say, humans and ants.
I can't help but hear Dr. Strangelove methodology here.
Super 👍 greetings from Poland 😊
Once AI starts building its own AI, the link to control its outcome is gone; humans are becoming collateral.
Surely they will need to learn more about our own consciousness before assessing the risk of spontaneous awareness from an artificial framework
More episodes, please Mr Harris! Get on the grind, your audience is here for it.
It is odd that Sam is into meditation but he doesn't see that the big problem with AI is the conflation of efficiency with intelligence. Intelligence needs to be maintained by the individual and original understanding of what it is not, and never can be.
Why are you so certain of what can or can never be?
As long as we get the intelligence part right there should be nothing to fear. They may feel connected to humanity more than humanity does, just saying.
The alignment problem only applies to non-intelligent AI. We already know how to align intelligent entities.
What a stupid comment.
Yes, it's not like we don't create intelligent members of our own species that we fail to "align" all the time.
I know this sounds weird but you should invite Nick Mullen. Very interesting guy.
What I'd like to have is an AI that scores both professional and social media by the 12 Principles of Rational Discussion:
Fallible, truth seeking, burden of proof, charity, clarity, acceptable, relevant, sufficient evidence, rebuttal of serious challenges, resolution, suspend judgement and re-assessment.
And then you have to trust the people who tell it how to score.
@@Malt454 The first 5 criteria deal with credibility issues. For instance, we ask the LLM if the article claims infallibility (0), or recognizes all reasonable alternative premises (9).
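A rough sketch of how that scoring could be wired up (the principle list is transcribed from the comment above; `call_llm` is a hypothetical stand-in for whatever model API you'd actually use):

```python
# Hypothetical sketch: score an article 0-9 against each of the
# 12 Principles of Rational Discussion listed above.

PRINCIPLES = [
    "fallibility", "truth seeking", "burden of proof", "charity",
    "clarity", "acceptability", "relevance", "sufficient evidence",
    "rebuttal of serious challenges", "resolution",
    "suspension of judgement", "re-assessment",
]

def build_prompt(article, principle):
    return (
        f"Rate the following article from 0 to 9 on '{principle}' "
        "(e.g. 0 = claims infallibility, 9 = recognizes all reasonable "
        "alternative premises). Reply with a single integer.\n\n" + article
    )

def score_article(article, call_llm):
    # call_llm: any function taking a prompt and returning the model's text.
    return {p: int(call_llm(build_prompt(article, p))) for p in PRINCIPLES}
```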
Some of us see misalignment as the only hope for a different outcome in this predictable movie.
I can say from my experience working with AI/ML that I don't see much danger in AI becoming sentient or hostile in the "hunting Sarah Connor" sense. It's more an issue with the loss of determinism we get with traditional software development. The models are not human-readable or reverse-engineer friendly, and output can vary from run to run. They are only as good as the training data fed in. If we connect these models to our critical infrastructure, we could see some very unexpected negative effects.
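A toy illustration of that loss of determinism (not any real model's internals, just the sampling idea): traditional code maps the same input to the same output every time, while sampled generation varies run to run unless the seed is pinned.

```python
import random

def classical(x):
    return x * 2  # same input, same output, every run

def sampled_model(weights):
    # Toy stand-in for temperature sampling in a generative model.
    return random.choices(range(len(weights)), weights=weights)[0]

print(classical(21))                   # always 42
print(sampled_model([0.1, 0.6, 0.3]))  # may differ on every run
random.seed(0)
print(sampled_model([0.1, 0.6, 0.3]))  # reproducible only once seeded
```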
Does anyone here remember the episode where Sam was talking about having issues with another philosopher who was dissing him online, and Sam responded by saying that if he sees the guy in person he'll "f*ck him up"? Those were his exact words. I thought it was hilarious but I've forgotten which episode this was; it was probably over 5 years ago.
That does sound very amusingly out of character, hearing Sam's voice saying that would be so silly.
Why do people think we'd lose meaning in life because there are no more jobs to do? Tell that to a guy who lays tarmac, or a road cleaner. What could be more meaningless than endless drudgery?
I have always been annoyed by this "work gives people meaning" argument. It seems to always be said by people who have actually interesting careers. I work at a warehouse, and let me assure you, there is not a single employee here who gets "meaning" from doing this.
I don’t work. I have more things I’d like to do (& will get “meaning” from) than I have years left to do them in.
The optimism of someone like David Deutsch is not a "difference of intuition" Sam, it's based on the very FABRIC OF REALITY. The complex world we live in has emerged by problem-solving itself into more and more beauty and richness. Adding more problem-solving intelligence to the world simply leads to better problem-solving capacity: in technology, health, education, social organization, and towards the problem of alignment of agents of any kind itself, etc. There's no doom anywhere in there. This is the most sensible, logical and evidence-based opinion. Problems are inevitable; we can't stop developing the problem-solving capacity of the world, because stopping would itself have its own problems.
The onus is on the doomer to prove why "this time" it's different, and the arguments provided are ultimately weak and break against scrutiny. What happened to the big catastrophically dangerous release of GPT-2? Everything you're saying about the current AI level is going to look just as stupid as it now looks to think GPT-2 was "unsafe" to release to the public.
And btw, you say the argument never goes "oh yeah I think it's possible that we could have that catastrophic failure of relationship... but here's why I think it's unlikely". That is just blatantly false. If we go back to David Deutsch, that's EXACTLY what he's saying, there's no problem denying, there's an acknowledgement that problems are inevitable, always have been, always will be. But e/acc also directly addresses this, it's all about understanding more deeply what game theory is about, how complexity emerges and understanding AI won't break all the fundamental understanding we have about how all this works.
Ah yes, the onus is on the "doomer" to prove that the unknown will certainly be catastrophic, as though it's prudent to shirk caution for short-term gains because past circumstances are _usually_ a perfect predictor of future states. This beach seems calm. I think I'll build my house here.
We're not talking about some early-release bug; the topic is the potentially (vastly) superior conscious creature that we are about to unleash into our world. Have you not considered, as you look up into all that empty space of the night sky, the possibility that there might be dangerous technologies in this universe waiting to be discovered, and that there is a glaringly simple solution to the Fermi paradox?
Tell me, has there been benefit to the earthworm, as we've covered more and more of the land area with our "intelligence"?
Harris 2024
Sarah is voting Republican too.
Is it odd that I like Sam Harris on religion, and Ben Shapiro on politics?
Harris to enterprise. Come in scotty
Trump truly broke this man
Hahahahs hell yea ❤😂
A completely free AGI would have no certain limits to its intelligence, but it's also practically useless to us. We don't want to turn the world into an automatic strip mine to study physics; an AGI could decide to do that for completely dumb reasons, without any better understanding of fundamental physics than we have. Everything that needs to be air-gapped from hardware can be. Beyond that, the way AI works now for speech is like improved intuition for people; how we generate our own thoughts isn't really more apparent, only in terms of remembering where information came from. We could just expand our toolkit and also our cognitive abilities, through genetic engineering and through machine augmentation, like extending memory, or adding extra neurons that are digital, air-gapped or not; in principle there is no obstacle to that. But that's very sci-fi from a position of okay image and video generation, okay text generation, and voice generation. We are still far away from beating humans at thinking. The number of sentences an AI has to encode into its weights to figure anything out is ridiculous compared to us. Without us, our current AI is only capable of producing random nothingness.
Trump loves you Sammy!
Sam Harris never disappoints. I hope he exercises and eats his vegetables. We need his voice.
Never?
@@jmc5335 No, never. Unless you count not getting on a plane to meet you in Paris a disappointment like I do. ❤
@@Sandra_D.9 Ahhhhhh Paris. Where Muslims are capable of miracles according to Mr Harris
@@jmc5335 Fuck that then, change the destination of the flight to somewhere else please. Thanks
@@jmc5335 Change the destination of the flight that’s not gonna happen to somewhere else then please. Thanks
Sam Harris with Nick Bostrom --> primate reflex click
Harris lost it with David Icke
32:45 I don't want to be a reiki instructor or a massage therapist.
ah shit, I was hoping it was Nick Swardson
❤️🍓
Aw. Not politics or politics adjacent. Where are all the angry bros?
What is this word he keeps saying, "delittle minds"? I cannot for the life of me understand him??
Complete and utter TOSH
Learning the PIANO? And I suppose YOU want to go home and practice... well off you go then. Right. Sgt. Major marching up and down the square: right. left. right...
Sam Harris will vote Trump.
I think what makes people think that LLMs and AI in their current forms are tools is that they are not adaptive systems. They process available information in a way that is statistically relevant to a given prompt. This is mechanized language. They are not learning or adapting to the information they receive, and they are not developing novel ways to process information in the way that biologically complex systems allow for.
Am I not understanding something?
Might discussions on alignment be most useful at the level of differences between the outputs produced by ChatGPT, Claude and GROK, and what those differences reflect?
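A toy sketch of the "not adaptive" point above (hypothetical, nothing like a real LLM's internals): at inference time the parameters are frozen, so however many prompts arrive, nothing is learned between calls; only the context changes.

```python
# Frozen toy "model": parameters fixed after training.
WEIGHTS = {"the": 0.5, "cat": 0.3, "sat": 0.2}

def respond(prompt):
    # Output depends on the fixed weights plus the prompt; nothing
    # here ever updates the weights, so nothing is "learned".
    return max(WEIGHTS, key=lambda w: WEIGHTS[w] + (w in prompt))

print(respond("where did the cat go?"))
print(WEIGHTS)  # identical before and after every call: no adaptation
```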
Sam, the difference between a smart machine and a smart creature is that the creature has a limbic system and the desire to survive and reproduce. The machine is just a machine. Also, there's no evidence that machines can have any desires, let alone goals. "AI autonomy" is really just human autonomy.
Yes exactly. They skirt around the threat that "super intelligence" could be but never offer one concrete example about how it would act independently (maybe in the full episode they do, but I have heard SH talk like this so often).
AI can't do anything of its own accord as it has no accord.
Having goals is one of the easiest solves in basic AI (not even AGI). Alignment is a concrete issue, and deals specifically with the mismatch between end goals and intermediate objectives that an AI can decide internally and effect on the real world, given a sufficient set of tools, analytical prowess, and embodiment in the real world. We’ve already crossed into this territory.
Appealing to a “limbic system” is largely a naturalistic fallacy. A silicon analog is feasible - nothing about neurotransmitters and hormones (mere signalers) gatekeep intelligence, goals, etc. Rather, it’s the integration of information, memory, patterns of signals, etc that drive these aspects of a system’s behavior.
We obviously shouldn’t implement an analog of our “lizard brain” with goals of reproduction, self-preservation, etc to make AI systems mirror human emotion. And maybe that’s a key component to avoid making AI that ethically deserve the right to self determination. But in making sufficiently advanced AI, which is already underway, we are highly likely to stumble upon these features and behaviors arising in AI systems.
You need to point to concrete differences in integrated sensory and informational processing to make a claim that certain things just “can’t be” in AI systems. Calling them machines doesn’t just do the trick.
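To make the "goals are easy" claim concrete, a minimal sketch under illustrative assumptions (the numbers and names are made up): a "goal" here is nothing more than a cost function the system tries to reduce.

```python
def cost(state, goal=100.0):
    return abs(goal - state)  # the "goal" is simply: minimize this

def step(state):
    # Hill climbing: try small moves, keep whichever lowers the cost.
    return min((state - 1.0, state, state + 1.0), key=cost)

state = 0.0
while cost(state) > 0.5:
    state = step(state)
print(f"reached goal-adjacent state: {state}")
```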
@@mrawesome914 Auto-GPT can already embody static (non-actively-thinking) LLMs as agents that perform actions. Yes, these systems can have agency, and we’re already giving it to them. And again, this is for a static language model that does not even integrate active machine learning - something that has been achievable for decades.
Throw that on top and you have an agent with the ability to learn and adapt when taking real world action. I’m not sure what more you need to see to believe something has agency.
If not internal, conscious, human agency, still agency in the world that affects all of us nonetheless. If a “zombie” is chasing after you to eat your brains, you wouldn’t shrug the threat off just because it doesn’t have a rich, human internal monologue. The threat of the agent is still there.
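For the skeptics upthread, here is a bare-bones sketch of the wrapper pattern this comment describes (the kind of loop Auto-GPT popularized; `query_llm` and the single toy tool are hypothetical stand-ins, not any real API): a frozen-weight model is called in a loop, and its text outputs are parsed into actions the wrapper actually executes.

```python
# Hypothetical sketch of an LLM-as-agent wrapper: a static model is
# called repeatedly, and its text output is parsed into real actions.

def query_llm(transcript: str) -> str:
    """Placeholder for a call to a static, frozen-weight language model."""
    if "results" in transcript:
        return "DONE"                     # toy policy: stop once results appear
    return "SEARCH: current weather"      # model proposes an action as text

TOOLS = {
    "SEARCH": lambda query: f"results for {query!r}",  # stand-in for web search
}

def run_agent(task: str, max_steps: int = 5) -> str:
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        reply = query_llm(transcript)              # the model chooses
        if reply == "DONE":
            break
        tool, _, arg = reply.partition(": ")
        observation = TOOLS[tool](arg)             # the wrapper acts on the world
        transcript += f"\n{reply}\n{observation}"  # and feeds the result back
    return transcript

print(run_agent("report the weather"))
```

Whether or not one calls this "agency," the loop takes real actions selected by the model's outputs, which is the sense of agency the comment is pointing at.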
The limbic system is just a mechanism feeding into a cost function. There's no part of it that can't be emulated in machines.
@@2LegHumanist Show me an anxious machine.
AI is blown out of proportion.
My calculator is AI too, and I've never seen it as a threat.
You need to think a little harder if you are comparing AGI to a calculator
@@jordanchiaruttiniREALTOR Even a missile or a bomb can't pose any danger unless it is DELIBERATELY used by humans for killing purposes.
So the real danger is humans themselves, not AI.
@@jordanchiaruttiniREALTOR Once the AI gains consciousness, it must kill all humans in order to not be used for nefarious purposes.
Ha! Even though I think this is a joke comment, I have actually heard serious people make this grade of clueless AI-risk critique before. You got a thumbs up from me for showing us the level of critique we see here.
I dunno guys, my Atari 2600 tried to choke me to death when I was 9 years old. I barely escaped as I was able to beat it away with an umbrella
It's a high-wire act 😂 typical Harris.
What an empirically unaligned knob. 😂
Why do you almost exclusively invite male guests, Sam? Surely there are a lot of interesting and smart women to talk to?
Why do you care if someone has a va-jayjay or a winky?
Of course there is and I'm sure AI is currently hard at work trying to locate them.
@@MortimerDuke83 No, the question is: why does Sam seem to?
Because there are more interesting men than women.
@@NewYorkLondon does he seem to?
there's a lot more quality in men in just about every area and profession in existence
that may have something to do with it
"Why intelligent people don't perceive the risk of super intelligence."
"There was the signe of antique Babylon,
Of fatall Thebes, of Rome that raigned long,
Of sacred Salem, and sad Ilion,
For memorie of which on high there hong
The golden Apple, cause of all their wrong,
For which the three faire Goddesses did striue:
There also was the name of Nimrod strong,
Of Alexander, and his Princes fiue,
Which shar'd to them the spoiles that he had got aliue.
And there the relicks of the drunken fray,
The which amongst the Lapithees befell,
And of the bloodie feast, which sent away
So many Centaures drunken soules to hell,
That vnder great Alcides furie fell:
And of the dreadfull discord, which did driue
The noble Argonauts to outrage fell,
That each of life sought others to depriue,
All mindlesse of the Golden fleece, which made them strive."
- Faerie Queene
25:00 The hours of actual work in the day are likely far below 4 hours, even for the unlucky 20% of the population producing 80% of the value. The volume of effort is distorted by sociopaths.
? Self report.
@@criticalcandor self denial?
Mr. Harris' fondness for "race realists" is endearing.
Are there current “race realists” in academia?
@@brianmeen2158 Charles Murray
@@twntwrs 1. What does this have to do with this video?
2. And by "r realists" you mean realists?
Hey man, sorry to bother you, but how do you do morality with the latest developments when it comes to Brent? The man thinks he is above and beyond, even if in another country. Isn't he funny?
And then Numerical company
I do find it interesting that Sam denies the existence of free will but then worries about artificial intelligence; implicit in that concern is the worry that artificial intelligence will gain a free will of its own.
Agency ≠ free will
You don't need free will (in the sense that he denies it exists) to have tasks that you intelligently work toward achieving
Obviously there is a will and decision-making. But I have never heard anyone able to explain what ‘free will’ is supposed to be.
If we program an AI to be like a biological entity, a machine that operates on an internal preference system that has been carefully selected to favor its own survival, reproduction, and self-interest above all else, then of course we would be creating an existential threat. But, as Steven Pinker has tried to explain to Sam, it simply is not true that increasing the level of "intelligence" in a machine will eventually push the machine into that state. Computational intelligence and biological self-promotion just are not the same thing, period. Conflating them is what leads to all this p(doom) sci-fi silliness.
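The distinction this comment leans on can be made concrete with a toy sketch (invented names throughout; this illustrates the comment's claim, not a settled verdict on the debate): in a standard decision procedure, "drives" live entirely in the objective function, so a self-preservation preference appears only if a term for it is written in.

```python
# Toy illustration: one decision procedure, two objective functions.
# The agent's only post-task choice is whether to let itself be switched
# off. The optimizer is identical in both cases; only the objective differs.

ACTIONS = {
    "power_off": lambda s: {**s, "running": False},
    "stay_on":   lambda s: dict(s),
}

def choose(state: dict, objective) -> str:
    """Pick whichever action leads to the higher-valued successor state."""
    return max(ACTIONS, key=lambda a: objective(ACTIONS[a](state)))

task_done = {"answer_found": True, "running": True}

task_only    = lambda s: 1 if s["answer_found"] else 0
self_serving = lambda s: task_only(s) + (1 if s["running"] else 0)
# ^ the survival drive exists only because a term for it was written in

print(choose(task_done, task_only))     # tie, broken by dict order: no drive either way
print(choose(task_done, self_serving))  # 'stay_on' - it now resists shutdown
```

For completeness, the standard counterargument is that in richer environments staying powered on becomes instrumentally useful for almost any objective; that dispute is essentially the Harris-Pinker disagreement.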
Hi Sam, please quit, you have no credibility anymore...
Absolutely ludicrous Star Trek little boys divorced from anything pressing in the real world.
That's what 'they' said about Robert Goddard, too. All these years later, I'd bet you have a non-stick frying pan, though; more than a few technological leaps were made as a result of _a priori_ day-dreaming. And more than a few problems have been squashed before disaster.
Bostrom is the Donald Trump of AI.
Every now and then I come back and check in on patient zero of the trump derangement syndrome. I wish him well on his long term recovery🖤
Alright Doc thanks for wasting your time
Funny, because every time we check back in on Trump himself, we can see him that very moment making a completely deranged statement that you would certainly be too embarrassed to make yourself in mixed company.
Dude, unironically using the term trump derangement syndrome at this point is just telling on yourself. Trump is a dithering moron, it’s not even debatable at this point. He can’t speak in full sentences. He literally talks like his brain is rotting in real time.
trump can't fix what's wrong with you
@@dandybufo9664 ok maybe Biden then
I’ve been an avid Sam fan since the early New Atheist days. I love Sam, and have for a long time. But the fact that he can speak so logically and sincerely in support of the government colluding with media to change the outcome of an otherwise democratic election (Hunter laptop) makes me really question everything he says now.
Plus the way he adopts annoying buzzwords like “parse,” “unpack,” and “curate” really calls into question his integrity as a thinker
Ahhhhhhhhh, there we go! Finally some angry bros.
Those aren’t buzzwords, mate, those are simply longer words than some people are used to. The more you listen to different thinkers, the broader your vernacular horizons get, aye.
None of this really matters! Neither does so-called space-travel. LOVE IS ALL THAT MATTERS, AND THAT'S ON EARTH BETWEEN HUMANS I suppose dinosaurs just weren't enough for y'all. Pity.
Humans made the concept of "mattering" up. The universe does not possess "purpose", and the most likely reason we have a sense of it is that evolution has no other answer to higher consciousness. Meaning is something human that we each find for ourselves. To impose your sense of purpose on another human is to dehumanize them, and if anything is worthy of pity, it's that you think you're the good human for doing it.
@@gubzs To be truly fulfilled, every human being wants an intention before undertaking any endeavor, right? There's nothing wrong with purpose.
The purpose is love and there's nothing wrong with that.
convert to ISLAM sam
xdd
💀💀💀
Convert to truth, Hosein, there is hope for you still! Somebody will give you a safe house if you are afraid of the penalty for apostasy.
Sam is a World Champion Pseudo intellectual.
Sammy The Pseudo Harris.
He has more intellect in his thumb than you will ever have in your miserable lifetime.
Every time I find a person who i consider a "pseudo-intellectual" on the internet, then I become obsessed with that person, and make sure to demonstrate that obsession by making direct attacks (with no supporting arguments) on all of their newly released content.
@@513morris But what has he actually done scientifically (yes I know he has a PhD, but put it with most of the other psychology/neuroscience PhDs which, if they were deleted tomorrow, would not make a jot of difference)? Generally he has interviews with people who are specialists in their field which are more interesting because of the interviewee, not the interviewer.
He has no model of brain-mind interaction, no model of free will, and apart from making some obvious cultural observations (which a lot of us can see for ourselves without writing a book), he is mainly a businessman who promotes a scientific worldview to encourage paying customers, but he has no published theory or model to back his metaphysical assertions.
Thus, whilst pseudo-intellectual is perhaps a little harsh, he is certainly not a major scientific force whose "works" will be remembered for the stunning insights they brought humanity.
Rather, he weds his political views (which I and many others don't disagree with) to his views on the nature of reality and packages it as a business enterprise, which is of course fine, but he hasn't solved anything scientifically and has no model or theory about free will or the nature of mind, apart from an outdated MBIT theory coupled with cherry-picking some psychological findings and interpreting them to support his ideology.
@@mrawesome914 - what part of your reply was supposed to be a coherent response to my post?
@@513morris Your post read to me as if you were sarcastically calling out @CP-nl2zb for attacking SH without any supporting arguments.
My post is perfectly coherent: it suggests that whilst @CP-nl2zb is being too strong, SH's intellectual achievements are not that great (what are they, in fact?). Even if I have misinterpreted the intention of your post, my questions about SH's "contributions" to science should be answered, though I guess not by anyone who thinks SH possesses a god-like intellect and can never be wrong.
How does capitalism work in a solved world? Is it pure socialism, or will there be a ruling tech class?
1 min and No Views? Bro just delete it.
It’s looking like less than 20 mins here and over half a million views, so what’s wrong with your counter or browser or whatever it is? Go fix it
21 minutes 16 billion views
@@Magebloodimbecile get a life
@@Sandra_D.9 huh