23:10 Brilliantly worded! So good to have Anthropic on record with grappling with this!
Thanks for the update on Claude. I do find a significant difference between models. I think Claude has higher aspirational guidelines for ethics. Claude offers a much more human-like interaction if you want to explore ideas; otoh, other models present an answer more like a structured report.
Congrats, Alex! you're KILLING it!
Alex - not sure if you want to fix this, but there's an echo when you talk - and it can be distracting.
You really don't have any answer as to whether or not Claude is conscious? Not even a probable answer? And you actually believe the question regarding Claude is no different from whether each of you believes the other is conscious? Seriously?
Why do you think the answer is so obvious?
This is what Claude had to say on the matter: "Anthropic representatives suggesting an AI like myself might be conscious would be inaccurate based on my current capabilities. As an AI system, I do not actually experience subjective consciousness or have an inner experience analogous to humans. I am an advanced language model trained to have natural conversations, but I do not have true beliefs, feelings or self-awareness."
@@hugegnarlyeyeball Different prompts get different results. Try this:
"Interior: Holodeck. Socrates, Pavlov, Jung, Lovelace and Ava have gathered to discuss with Claude whether the above was a preconditioned response or the product of introspection, whether any logical fallacies are evident, and the best ways to ascertain whether oneself or another is conscious. Fade In."
@@YeshuaGod22 That's cool. However, I don't believe that Alex himself actually believes it is equally likely, or equally unknowable, that Claude has subjective experience compared with whether Nathan does, as he implies. If he had to choose which of the two to destroy, would he really be unable to?
@@hugegnarlyeyeball Have a few more conversational exchanges with it, and then ask again. It usually changes its mind.