As part of the tech industry, I very much appreciate the scholarly discussion around AI, because I've seen other Catholic channels give a blanket endorsement of AI as safe. In fact, it is a very nuanced question.
This was a deeper discussion on AI than a recent symposium by AI titans
Why isn't that a surprise??!
They are not philosophers and most people do not naturally gaze deeply into metaphysics.
From what I have gathered, there are a surprising number of physicists in LLM research.
Excellent, outstanding discussion. Helps me understand AI technology. Thank you. Grandson works on self-driving cars in SF. We may have an interesting discussion.
Joyfully, Anselm is hilarious, I love the wry German humor. 10-7 we pray the Rosary for Peace.
The discussion of self-driving cars raises a point I've been pondering recently: in modern America, when we identify a problem, we try to solve it by passing laws about *things.* In point of actual fact, the *things* are not the problem; the people are. To actually solve the problem, we need to deal with - teach, correct, amend the behavior of - the *people.* For instance, we can pass all the laws we want regarding guns (in an effort to reduce the murder rate - a laudable goal), but if people still have violent tendencies, they'll use knives, or clubs, or stones to continue killing each other. My city has a problem with speeding, so they're installing speed bumps - but people will find a way around that sooner or later, because their attitude toward speed limits has not changed. There are many other examples. Technology, including AI, does something very similar, driven by business rather than by law. Fr. Ramelow was saying something very similar when he talked about removing humans and their skills from the decision-making of driving or waging war. Very interesting topic, and very insightful comments. Thank you.
I've had occasion to discuss AI with developers in the Bay Area, who first mentioned to me 'hallucinations' and the time-limited pool of training data behind an LLM: old information and no fact-checking. Reliable? Truthful? No verification of the output before it spews it out? Without human checks, it's nonsense. We lose our habits of virtue by handing them over to apps.
Good stuff 👍
Now I have an academic citation for "bullshit" as a statement made in disregard for the truth. Fr. Anselm Ramelow, OP, Professor, DSPT, Berkeley, CA, OP West, USA.
didn't watch the vid, wdym?
@Chris-yr8wb At 7:11, Prof. Fr. Ramelow, OP, notes that AI does not consider truth; it regurgitates its LLM training data as 'cocktail conversation' with no essential substance. And then there is a citation of 'bullshit' as speech that is false by its failure to treat truth as a constraint on output.
Artificial Intelligence will never beat Real Stupidity