On self-refuting theories: ruclips.net/video/p22qfddYkXI/видео.html Could there be no laws of logic? ruclips.net/video/4B61OYuNEwI/видео.html ruclips.net/video/PhbWFAbRYvE/видео.html
This is similar to rounding a number while evaluating the solution of a mathematical algorithm. Sometimes the answer is not far off, but if you're unlucky, while traveling down the path, the approximation flips something or gets exponentially amplified to the point that your solution is -1000000% off. You can see this happen in computer game physics sometimes. Unless you input definite, non-approximate truths, you can't be sure logic holds.
Is there an explanation for this "flip"? Is it an internal problem within the solution, or does it become "off" by interacting with other solutions to other problems? Hope that makes sense.
@@DeadEndFrog It's a quirk of the algorithm, for example if you have non-continuous function like division in there somewhere. Math is full of these functions. Take f(x)=1/(100x-0.01). If x is 0.1, f(x) is also approximately 0.1, if you round down 0.1 to 0, f(x) instantly becomes -100. This one is really obvious, but they can become very hard to spot if the algorithm is complicated.
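The flip in the example above can be reproduced in a few lines of Python; this is just a sketch of the comment's own function, nothing from the video:

```python
def f(x):
    # An ordinary-looking function with a pole at x = 0.0001:
    # the denominator 100*x - 0.01 crosses zero there.
    return 1 / (100 * x - 0.01)

print(f(0.1))  # ≈ 0.1001 -- the true input gives a small positive value
print(f(0.0))  # ≈ -100.0 -- "rounding" 0.1 down to 0 crosses the pole and flips the sign
```

The rounded input isn't slightly wrong, it's on the other side of a discontinuity, which is why the error doesn't shrink as the approximation gets better unless you stay on the right side of the pole.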
This was extremely interesting and refreshing. I've been away from academic philosophy for over a decade now, and I kind of expected this to be a rehash of Gödel's incompleteness or something. This was a pleasant surprise. I realise that although I took a special interest in formal logic back then, I never really pushed that interest very far, as I switched to a different field. Deffo subscribed :)
This has me thinking of Deleuze and Guattari, in the plateau on the rhizome, writing that language is precise in being vague. What they argue is that language is not "approximate" but rather that if it had static, unchanging and precise meanings, it would fall apart. When I say "apple", you cannot know whether I'm referring to a green one, a red one, a rotten one, or one still hanging from the tree. But that doesn't mean the truth you are getting from it is only "approximate" or "idealized"; rather it means the description you have access to is not precise. If "apple" could only refer to the perfectly round red apple I found in my garden this morning, the word would be utterly useless and nobody could ever use it to describe anything except the one apple I found in my garden.

For example, if you hear the story of how Newton came up with his theory of gravity when he was sitting under a tree and an apple fell on his head, you don't know what position "sitting" means and you don't know what the apple looked like; even the tree might be a young one full of apples or a dying old tree that surprisingly still held fruit. But does that mean the truth delivered to you was only an approximation, or is it rather the case that the image you get is vague and imprecise? I would certainly tend towards the latter.

As for the question whether that means classical logic will hold, I will vaguely (hehe) gesture towards process ontology and what it has to say about classical logic (I am also just too lazy to detail that in this comment, which already feels long enough)
Suggestion for new video title - *“How Approximate Truth Breaks Deductive Logic”* - or something. I think the current one is a bit too general :) Anyway, great lecture! I am working with inferences and natural language atm, so these are concerns that give me sleepless nights :P
That would be an excellent title if the goal were to accurately describe the content of the video. However, the actual goal is to get people to click on the video, and I think a more general title is more likely to encourage that.
This is the first video of yours I watched. Based on the thumbnail I was hoping you would talk about some unique case where boolean algebra doesn't work. I guess I now watched a philosophy video about approximate truth.
I don't think the concept of "approximate truth" is well defined. There are probable truths, and there are approximations of various measurements, but to say that something is approximately true is just to say that it's false. My perspective on the matter is that a given theory is nothing more than a self-contained abstract system, with no necessary relation to reality. In that sense, valid deductions in a theory are absolutely true *of the theory*, so we don't need to worry about what sorts of deductions are valid. What we actually want is not approximate truth, I think, but approximate measures. For example, I don't care if the electric potential of my capacitor is *exactly* what my theory predicts. I know that the prediction is false, and it doesn't matter. I only care that the predicted measures are accurate enough to build a working circuit. If a specific measure, as predicted by a theory, has been tested and verified to be accurate to reality within some amount of error, and that error is tolerable, the model is useful. To give an abstract example, consider the problem where we pick some positive number x>0, and we want to know if the infinite sum sin(x)+sin(x^2)+sin(x^3)+etcetera will converge. We know that, for small x, we can approximate sin(x)~x, and under this approximation, our sum is just x+x^2+x^3+etcetera. We know this sum converges when 0 < x < 1.
I get what you mean, but I'd say the way that "approximate truth" gets used in the video indeed means more than just falsehood. It is true that theories are self-contained logical systems, but their application gives so much more than just an approximate concrete answer. A theory, and a model that is part of that theory, can be used to study phenomena qualitatively in an effective way. The thing is, we just need to be aware that the theory doesn't correspond to the truth. But the practical aspects of the conclusions of the theory can indeed work in reality in a way that (qualitatively) it is as if they were indeed the truth, defined by the theory (for example, as if the "Sydney Opera House" was indeed an objective and unique concept which concerns us as the problem solvers). And proof of this is, of course, the success of any science, be it an "exact" science, or another natural or social science.
@@eduardoz398 I don't know if it makes sense to say that *truth* is utilitarian, but I personally am pretty utilitarian, so it's natural to me to justify the concept of synthetic truth in utilitarian terms. That's not to say that I don't have other philosophical reasons that guide my attitude, I do, but those arguments are pretty esoteric and I wouldn't be able to do them justice in a YouTube comment. The utilitarian argument is probably the most convincing to others, since it makes the fewest assumptions and is compatible with many different philosophical perspectives. The relationship between reality and any given synthetic system is often tenuous, but using synthetic systems to approximate reality has been a very successful strategy, historically.
Since the year started, I've been seeing more and more small YouTube channels that are talking about philosophy, philosophers, logic etc. Sometimes I love the algorithm
After watching this, I feel like the "limits of logic" you've listed come down to variables changing and, as you mentioned at 10:08, us being unaware/disconnected from the "one true logic/objective reality", which I think we should always strive towards. The Sydney Opera House example you gave was made vague enough for use in this video, in my opinion; the example could've been "What was the speed of sound in the Sydney Opera House on May 2nd, 2009 at 2:35 PM Sydney AEDT sharp?" instead of just "What is the speed of sound in the Sydney Opera House?" This reminds me of the differences between set theory and homotopy type theory (in my understanding, at least), where set theory is more "vague/loose" about what is and isn't equal. For example, if all elements of set A are in set B, and vice versa, then A=B; however, in HoTT, two types X and Y that would be equal if interpreted as sets in set theory have more dimensions that could be used to distinguish between them, such as location and time (this also relates to your comments on "equivocation" at 10:38). All that to say that the question "What is the speed of sound in the Sydney Opera House?" is too approximate a question to have anything but an approximate answer. Just my initial thoughts, and me being completely wrong is a definite possibility.
to acknowledge that "truth," in some "objective" sense, is never actually accessible, and that "approximate truth," in some utilitarian sense, is all we can grasp, is the first step in realizing that "truth" was never the goal to begin with. Any given "theory," in any field of inquiry, has a degree of utility. That utility is all we need to focus on, while "truth," as classically conceived, can be safely ignored.
@@ryanziller220 so, on a year-old comment you are speaking gibberish to make some kind of point, is that it? How droll. I confess that the OP is a bit convoluted, but perhaps it makes more sense in the context of the video, I don't remember. But the point seems valid: we can only identify "objective truth" by way of our experience, which is why we can focus on our experience and leave "objective truth" to be as it may, and yet still be none the worse for that. This is pragmatism.
@@ryanziller220 I expect that my comment is one you either understand the utility of, or you don't. If you do, then we can use it to work together in whatever ways we see fit, and if you don't then we can't. But beyond this understanding and utility, I don't know what "some kind of truth" you are expecting.
@@ericb9804 A truth is never related to those cognitive environments wherein the objective reality of an individual is expected to be used as he or she mulls over of those facts which are directly related to the proven insights to come of actuality in so much that distortions or dissonance can be misused. Did you expect that you would have provided some kind of truth with those statements which you brought up in your comment?
@@ericb9804 Objective truths do not exist if you were right. Subjectivity negates the need for there to be objective truths. We already know that subjectivity acknowledges the limitations for seeking the meanings behind any truth. Instead, there are apodictic descriptions and certainties which support the truths of subjectivity. Nothing expressed by me was meant to suppose the utility of an unproductive thought. You must be seeing nothing while reading the comment, with that having some relevancy in regards to your approval for misconceptions.
From a certain perspective, the study of statistics is first and foremost a way to assign quantities to levels of confidence and perform arithmetic with them. If A and B, then C; A with 90% confidence, B with 90% confidence, therefore C with 81% confidence
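A sketch of that arithmetic in Python. Note the 81% figure silently assumes A and B are independent; without that assumption, the best you can guarantee is the Fréchet lower bound:

```python
p_a, p_b = 0.9, 0.9

# If A and B are independent, confidence in C = (A and B) simply multiplies:
p_c_independent = p_a * p_b               # 0.81, the figure in the comment

# Without independence, the tightest general guarantee is the Fréchet bound:
p_c_worst_case = max(0.0, p_a + p_b - 1)  # 0.8 -- A and B could overlap badly
```

So "90% and 90% gives 81%" is itself an approximate truth: the honest range for C is anywhere from 80% to 90%, depending on how A and B are correlated.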
There is a lot of Bayesianism in what you say; it is very nice. I personally like to think of logic as a special case of a probabilistic system, one that would give a value between 0 and 1 to any logical statement such that it satisfies several axioms: the value of "false" is 0, the value of "true" is 1, and if "a" and "b" can be proven to be disjoint then P("a or b") = P("a") + P("b"). (I do not give any values to these probabilities myself; they just have to satisfy those axioms, and for instance classical logic, which gives values 0 and 1 only, could in principle be such a probability distribution.) Now you can play around with this and for instance define conditioning, which is kind of fun: if P["if a then b"] is high (higher than 1/2), then P["b"] is lower than P["b" | "a then b"]. This means that the knowledge that "a -> b" increases the probability of "b" and, by symmetry, decreases the probability of "a". If you apply this in maths (or science): when you have an axiom "a" and you prove a theorem with it, you increase the probability that the theorem is true, but you decrease the probability that the axiom actually is true. I like how it resonates with the foundations of maths and the way we are trying to change them completely every now and then.
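The conditioning claim can be checked by brute force over the four truth assignments to a and b. A minimal sketch (the uniform prior is my own illustrative assumption): learning "a -> b" raises P(b) and lowers P(a).

```python
# All four possible worlds (truth assignments to a and b), with a uniform prior.
worlds = [(a, b) for a in (True, False) for b in (True, False)]
prior = {w: 0.25 for w in worlds}

def prob(event, dist):
    # Total probability mass of the worlds where `event` holds.
    return sum(p for w, p in dist.items() if event(w))

def condition(event, dist):
    # Bayesian conditioning: zero out worlds where `event` fails, renormalize.
    z = prob(event, dist)
    return {w: (p / z if event(w) else 0.0) for w, p in dist.items()}

a_implies_b = lambda w: (not w[0]) or w[1]   # material conditional "a -> b"
a_holds = lambda w: w[0]
b_holds = lambda w: w[1]

posterior = condition(a_implies_b, prior)
# P(b): 1/2 before, 2/3 after -- the implication raised it.
# P(a): 1/2 before, 1/3 after -- and lowered the antecedent.
```

Conditioning on the implication kills exactly the (a true, b false) world, which is why b's share of the remaining mass goes up and a's goes down.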
Yo, I have never been oriented to think of logic as a binary probability system. Wow my friend, you are exceptional; it's a pretty simple but base-level different concept. I'm trynna wrap my head around what this practically means. This is awesome thinking though
Precise language will not help. In "Proofs and Refutations", Lakatos shows that even in mathematics, statements have hidden lemmas that will falsify the claims. What matters is if we deal with the hidden lemmas in a progressive way that leads to new knowledge.
I don't think language can be fully precisified anyway. The concept "species" can decompose into various species concepts: the biological species concept, the ecological species concept, the cladistic species concept, etc. Then each of these can decompose into further concepts: the biological species concept can be understood in different ways depending on how we define what counts as reproductive isolation. Since concepts are indeterminate, it will always be possible to draw finer lines.
@@KaneB yes, and each of these finer lines is a hidden lemma that can falsify the claim about species. But again, what matters is how we deal with the hidden lemmas. If we save the theory from refutation by changing the definition of "swan" to include "whiteness", we are degenerating the theory because it does not lead to new knowledge. If we instead discover biological reproductive differences between coscoroba swans and other swans, then we have progressed the theory. Hidden lemmas and finer lines will always persist, but so what, if we can use them to our advantage and learn new knowledge, gaining a finer grasp of comprehension.
In the Sydney opera house idea, the key is to clearly state the exact premise so that some logic can be performed on it. Problems like this often crop up in programming, where some user/business functionality is unclear, and further requirements/specifications need to be spelled out so that a program can perform automation/logic on the premise and reach the intended conclusion. Oftentimes, the input needs to be transformed, and edge cases for scenarios need to be tackled. Most programs are deterministic models of logic, unless you are looking at the probabilistic models used in recent AI programs. Edit: Many of the problems that you've gone over arise because the logic is being performed on natural language (which is ambiguous and prone to evolving). This is why pure logic is done via programming languages, which are specific.
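To illustrate the "spell out the premise" point: once the vague question is pinned down to, say, dry air at a given temperature, it becomes a computable one. The formula below is a standard textbook approximation (itself still an approximation, as the replies note); the function name is my own.

```python
import math

def speed_of_sound_air(temp_celsius):
    """Approximate speed of sound in dry air, in m/s.

    Standard textbook approximation: v = 331.3 * sqrt(1 + T / 273.15),
    where T is the air temperature in degrees Celsius.
    """
    return 331.3 * math.sqrt(1 + temp_celsius / 273.15)

# At 20 °C this gives roughly 343 m/s; at 0 °C, exactly the 331.3 m/s baseline.
```

The program doesn't resolve the philosophical vagueness; it just forces you to make the hidden parameters (medium, temperature, formula) explicit before any logic can run.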
I'm also a programmer, and I disagree with you: even in programming, the best we have are useful approximations, because we build things based on assumptions that are subject to interpretation, and with limited resources to handle the problems. Even in the most specific implementation, we are doing inference based on subjective concepts and incomplete models. So our inferences are not precise: we are limited by the computational power of both the computer and our brain, and we work from approximate concepts, which give us a good enough response, but we can't say the response is completely correct.
Although it's true that you can't necessarily maintain approximate truths through a deductive chain of reasoning, if you treat claims as having probabilities of being true, then you can observe every possible branch individually and sum their respective probabilities using Bayes' Rule to deduce approximate truths. There is one problem with this, which is that the process can be highly sensitive to the initial approximate truth values you assign to your premises, so if the chain of Bayesian reasoning is long enough, you could arrive at approximately true conclusions which nonetheless seem very unlikely. All of this is to say that usually this works. Using logic and by extension probability theory (Bayes' Theorem), along with inductive reasoning (investigating the approximate truth values of the initial claims/premises), tends to work. "The Unreasonable Effectiveness of Mathematics in the Natural Sciences" is an article, after all.
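The sensitivity point is easy to demonstrate with a toy model: if each step of a chain holds with some confidence and the steps are independent (my simplifying assumption), confidences multiply, so a small nudge to the per-step value is amplified exponentially over a long chain.

```python
def chain_confidence(p_step, n_steps):
    # Confidence that an n-step deductive chain holds, assuming each step
    # independently holds with probability p_step: confidences multiply.
    return p_step ** n_steps

# Over 100 steps, 99% per step leaves about 37% overall,
# while 95% per step collapses to well under 1%:
print(chain_confidence(0.99, 100))
print(chain_confidence(0.95, 100))
```

This is the same phenomenon as the rounding example earlier in the thread: the error doesn't stay put, it compounds along the path.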
I have this concept that -- Theories can be *creative* and generative of reality. For example, in computer science, somebody will come up with a theory based on observations of the world, and it'll be directed towards some end, and the theory is like a possibility -- it is descriptive of a possibility. And then if it works out, it creates a real life situation that is well described by the theory, and proves the plausibility ("That was a true description of a possibility, and we know that because we could do it.")
The truth of a deductive conclusion is contingent upon the truth of the axioms, and the truth of the axioms cannot be determined deductively because you run into an infinite regression. Premises can only be demonstrated inductively, and induction is necessarily probabilistic.
@@lawrencedoliveiro9104 Yes, it's a deductive conclusion about induction, and this deductive conclusion about induction can only be verified inductively
@@lawrencedoliveiro9104 Perhaps re-read my comment? I literally said that this claim can only be verified inductively. Induction does not allow certainty or proof, only probabilistic confidence
6:14 Suppose the sentence "the only thing we can get are approximately true descriptions of the world" is true. Then that sentence is, by definition, only an approximately true description of the world. That means there's a sense in which there exist descriptions of the world that are not only approximately true but also strictly true (given that the sentence "the only thing we can get are approximately true descriptions of the world" would be strictly true if that wasn't the case). But if strictly true descriptions of the world necessarily exist, this contradicts our premise. It's like saying "everything is subjective", which means that what you just said is subjective; but if that is the case, there exists a subjective frame in which everything is not subjective, and if something is objective in a subjective frame, it has to be objective in all reference frames, therefore contradicting the premise. It's an argument similar to the barber's paradox.
All communication involves translation, which is Abductive. One could reason thusly: Everyone is playing the abduction game, whether they know it or not, and it is up to the cognizant to let those that call "foul" when truth can't be pinned down precisely continue to play the game with their own made up, extra rules. Just... over there a ways, at the kids table. Thanks for another great video. I enjoyed it and might possibly agree with large portions of the arguments... I think. Bayesian Regress imminent while I sort it out. ;)
"All communication involves translation, which is Abductive. One could reason thusly: Everyone is playing the abduction game, whether they know it or not...." Isn't this a deduction? I mean, I like your post, but I also like shitposting, so here we are.
@@YawnGod nah, abduction is a proper kind of inference, and it is currently the most important in scientific practice. For example, when looking for the best form of medical treatment, pure deduction is not used; the current method is abductive (evidence-based medicine), where you gather several hypotheses about the patient and various evidence about the context. And the result will always be approximate, never absolute: even if you received the best true clinical proposition last year, this year you may receive a completely different true answer, as the evidence may have changed.
@@YawnGod So let me do a little test: A. I perceive that you do not understand the basics of logic. B. You make a joke about vaccines. C. You write like a psychotic. If my conclusion is: you are a complete imbecile. Is it deductive, inductive or abductive logic?
As someone with scientific training, I've never really considered anything other than "approximate truth" to be "truth" when we use that word colloquially. Or something like "an idea that provides explanation and allows us to interact with the world" - that's plenty enough for me, I don't hunger for more. The banal (to me) kind of deductive truth, where the conclusion is just the premises rearranged, is something I've never taken very seriously, it seems to have no application when it comes to understanding reality, outside maths. I've never considered this question before (and I know nothing about logic), but off the top of my head, it seems to me that logic is a post-hoc analysis of the cognitive tools we've evolved to understand and interact with the world. Should our expectations be particularly grand? I think there's some structure to reality that makes logic and maths work, and allows maths to describe nature - and that's just how nature is.
For someone with contemporary scientific training, 'approximation' is directly proportional to the measure theory applied. For example, choosing infinite strings of "real numbers" as a measure entails that everything is approximate and nothing exact; choosing a finite level of the Stern-Brocot tree as your measure, every numerically valued measurement gives either a rational value or a value between neighboring rationals. On a very general level, a prenumeric measure theory can consist of only more-less relations, and equality can be defined as neither more nor less.
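For the curious, here is a minimal sketch of the Stern-Brocot idea above: walking the tree to a finite depth either hits a value exactly as a rational, or brackets it between two neighboring rationals at that level. The function name and interface are my own.

```python
def stern_brocot_bounds(x, depth):
    """Walk the Stern-Brocot tree `depth` levels toward x > 0.

    Returns ((ln, ld), (rn, rd)) with ln/ld <= x <= rn/rd; the two pairs
    are equal if x is hit exactly as a rational within `depth` steps.
    """
    ln, ld = 0, 1       # left bound: 0/1
    rn, rd = 1, 0       # right bound: "1/0", i.e. infinity
    for _ in range(depth):
        mn, md = ln + rn, ld + rd        # mediant of the current bounds
        if x * md == mn:                 # exact rational hit
            return (mn, md), (mn, md)
        if x * md < mn:                  # x lies below the mediant
            rn, rd = mn, md
        else:                            # x lies above the mediant
            ln, ld = mn, md
    return (ln, ld), (rn, rd)

# e.g. after 5 levels, 0.7 is bracketed between 2/3 and 5/7,
# while 0.5 is hit exactly as 1/2 at level 2.
```

This makes the comment's point concrete: "approximate" here is not a vague apology but a definite pair of rational bounds, whose tightness is set by the depth you choose.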
_"The banal (to me) kind of deductive truth, where the conclusion is just the premises rearranged, is something I've never taken very seriously, it seems to have no application when it comes to understanding reality, outside maths."_ But the process of rearranging premises is useful in pretty much all fields of science. Whenever you solve the equations of a model for some variable of interest, you are performing this kind of deductive reasoning. There would be no point in creating models if you were not interested in deducing the logical consequences of the model's assumptions. The problem is just that the uncertainty in the model, the measurements and numerical solvers forces us to think about validation and error bounds. It would be more appropriate to describe the perfect objects of interest in maths as banal, compared to the real world, than to describe the idea of deducing logical consequences as banal.
@@MelodiousThunk Deductive (mathematical) reasoning is of course absolutely crucial to our understanding of the world, I'm not saying that it isn't important. I'm saying that when we ask what we mean by "truth", citing the fact that Z follows deductively from X and Y seems to me a banal response. Yes, sure, Z follows deductively from X and Y, but what I care about is whether X and Y are "true" and what that means. And as I said, I think it's something like "X and Y are ideas that provide explanation and allow us to interact with the world".
@@jonstewart464 I don't know what examples of premises and conclusions you have in mind, so I can't comment on the value of these conclusions. I suppose they would be banal if they were obvious consequences of the premises though. But in general, it can be extremely difficult to deduce some logical consequences of a given set of premises, and solving such problems can greatly enhance our understanding of the world. E.g. there's a million-dollar prize for answering a seemingly basic question about the nearly 200-year-old Navier-Stokes equations, because it's thought that this will bring us closer to understanding turbulence.
The more and more formal mathematics and logic I studied, the clearer it was that we were just winging it regarding its application to the real world. The more you see proofs, you realise there's no way the real world could be so strict. Then you see model theory and you realise that we've privileged some mathematical axioms, deductive systems, and results with our attention, but out of practical concerns. We have tunnel vision about mathematics: we've historically only focused on results in those parts of mathematics that can be approximately applied to the real world, and now most people think that that's all the math there is and that the logical system that supports it is the one true logic
Regarding the opera house. Once you define your parameters, you could ask the question: did sound travel from point A to point B in X seconds, as predicted in a hypothesis? The answer, Yes or No, would be logic. What the approximate time was for sound to travel such a distance is a tentative, physical question that requires scientific experimentation, but whether the sound traveled as fast as predicted, Yes or No, doesn't require relentless testing; it either did or didn't. You couldn't even practice science without laws of logic.
Is this just a matter of definitions rather than logic? We experience complex patterns and name subsets of this when we spot sufficient similarities. If this is useful to live and to communicate then we don't need to make stricter definitions...until we need to do so. Everything is approximate. Is there more being said here?
We study logic in school and I understand nothing of it, despite actually liking logic circuits in CPUs as a child. I still think that knowledge is very limited, if it can be acquired at all, and it is really hard to deal with the irrationality of the world we live in. I hope the philosophy guys are doing well cause they think a lot
This is basically the same idea of the vSauce video "Do Chairs Exist." Another great illustrative example is that when asked "which ship is the real ship of theseus", the problem is not one of identity, but one again of natural language. If the real ship of theseus is meant to mean the one that is, say, registered under port authority as the ship of Theseus, being used actively as the ship of theseus, then it's clearly the new one. If the real ship is meant to refer to the collection of original boards, then it's clearly the old one. It is a matter of the lack of precision inherent in natural language
6:00 That you can only get approximations doesn't mean the phenomenon doesn't exist; perhaps that truth is instrumental, but it is on the way to truth. If the prerequisites of a category can exist, such a category can exist, like the speed of sound in that place. Is there a material medium? Yes; then a speed of sound exists.
There has been, since Carnap or maybe Tarski, an attempt to understand the content of a theory. Popper had two notions of content: first, logical content, which is all the claims that a theory logically implies (possibly excluding tautologies, since they have zero content); and then informative content, which is any state of affairs or theories or statements or propositions that the theory rules out. Popper thought that nothing (empirical) follows logically from a universal theory unless supplemented with auxiliary hypotheses. Yet the theory can exclude states of affairs without auxiliary hypotheses.

My point here is that content is always best construed as the set of models that are excluded by the theory. It makes a lot of things logically neater. Approximately true theories exclude more true models (in the model-theory sense) than they should; in other words, they claim that certain things are false that are in fact true. It is in this sense you should understand approximate truth. The reason is, as Dummett said, that what is primary about a sentence is what it rules out. So you could say that approximately true theories claim things are true when they are false. But the things that they claim are true exclude other things from being true, and that is what the (informative) content of those claims is.

My other point is that in order to see what a theory rules out, you have to entertain it as true. So your argument that approximate truth somehow makes deductive logic inapplicable is mistaken, since it construes logic (like most logical explanations) as a mode of justification when it is simply a way to manipulate content: to show what is included in what.
When talking about the different ways the same word can be used such that deductive logic doesn't hold: you could prove or assume that an implication is true under all the different understandings of the word, allowing for deduction despite the uncertainty about the exact meaning
A quick note: I think logic is an artwork expressed by us humans to approach truth, to make the world around us "usable". Precise things are maybe a thing for the future.
Point on the swan stuff: speciation is a human consensus, understood to be an approximate determination. Every individual is unique and every species has similarities with earlier points on the tree of life. This is a reality of the tree of life itself: it is seamlessly connected, repetitive, and variable, where its variation is something that leads to the very proliferation of complexity that demanded the need for a tree of life in the first place. The things that could be said of biology that are strictly true are already so self-evident that they are rarely imagined or dictated: all life lives, all life comes from other life, life on this planet can only be compared to other life on this planet, etc. Biology seems to be a sort of hopeless attempt to categorize nature, where nature as such can be understood as that which defies categorization.
A good example of a conclusion offering less information is the basic equality. That two things are being equal doesn't tell you what they are. And this can be extended to a concept of similarity, which is an approximate equality, sort of. Obviously things become mighty confusing from there, because you have to qualify the error in similarity to be able to wield such a technical monstrosity. What is similar enough? What are the domains of comparison? And who is judging whether it actually means something? Ergo, similarity is a potentially mighty dangerous concept, if only one boils it down through logic.
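The "what is similar enough?" question shows up concretely in floating-point comparison, where the standard library forces you to qualify the error explicitly. A small sketch using Python's `math.isclose`: the same pair of numbers is or isn't "similar" depending on the tolerance you commit to.

```python
import math

# "Similar enough" only means something once a tolerance is fixed --
# the same pair of values can be similar or not depending on it:
print(math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-9))   # True: tiny rounding error
print(math.isclose(0.30001, 0.3, rel_tol=1e-9))     # False under a tight tolerance
print(math.isclose(0.30001, 0.3, rel_tol=1e-3))     # True under a looser one
```

The tolerance parameter is exactly the "domain of comparison" the comment asks about: without stating it, a claim of similarity carries no information.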
Imagine a gradient of white and black on a finite line. Is there a clear point of difference between where white begins and where it ends in relation to the black colour? no? does this mean that there is no such thing as the colour black or the colour white? no? see where this is going? just because I can not define what a chair is (because the varieties are endless) doesn't mean that there is no such concept as a chair and doesn't mean I can't say facts about the chair.. food for thought.
Your voice was just swirling around and around towards the end. Honestly, the best Soviet philosophy already overcame this concept like 7 decades ago
I'd say as soon as you start thinking about the real world you've technically stopped doing logic and started doing science. Approximate theories that approximately conserve approximate truth are what science is all about, the key is in keeping track of your uncertainties.
Applicability of any logic to an appropriate domain of objects is fundamental to the natural language application of logic, but perhaps only approximately. However, great precision has been devised out of approximate beginnings. So it is the vector of demonstrable returns, understood partly in terms of empirical testimony and partly in terms of logic's internal apparatus, that gives the ultimate sense of logic's truth beyond merely formal consistency. But a demonstrably growing trend of a theory about the world becoming more precise, even within the bounds of merely approximately so, that is the theoretical and practical unit of assaying what truth there is in a system of thought, even of logic systems though they are so many and of such different types. So a formal theory of all logic would have to account for this in a paradigm that would be a form of metalogic not yet fully articulated in the field.
I guess you could take the argument as concerning not logic in general, but all logical systems other than whichever specific system you think solves the problem. This will probably still have interesting consequences for the application of logic in philosophy and other inquiries, since it's unlikely that most philosophers often use the system in question when assessing arguments.
I’m confused on two points: 1. what do you mean by infinite-order here (do you just mean higher-order logic?) and 2. what do you mean by “do anything”?
@@funktorial I mean the “limit” of higher-order logic, this being metalogic. By “do anything”, I mean capture any conceptual or causal expression in principle, as metalogic has the capacity to function as the metalanguage of science, art, and “spirituality”.
Traditional deductive logic is designed as two-valued logic, which can be quantified as 0 and 1. There is nothing in between. If you are looking for an approximate logic, probability theory is what you want: it yields a "partial truth" by percentage. In probability theory, 0 and 1 are merely two possible values among many others.
Seems to be saying that we're just playing a language game a la Wittgenstein and that we narrow or widen the scope of this language game depending on context
Sound in the Sydney Opera House does have a specific speed between two specific points at a specific moment. Only our perception has no specific idea of it.
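For what it's worth, the dependence involved here can be made concrete: the speed of sound in dry air is commonly approximated as v = 331.3 + 0.606·T m/s, with T in °C. If all we can say is that the hall is somewhere between 18 °C and 24 °C (made-up numbers for this sketch), honest reasoning yields an interval rather than a single speed:

```python
# Speed of sound in dry air as a function of temperature (Celsius),
# using the common linear approximation v = 331.3 + 0.606 * T (m/s).
# The 18-24 degree range for the hall is an assumed example value.

def speed_of_sound(t_celsius):
    return 331.3 + 0.606 * t_celsius

t_low, t_high = 18.0, 24.0
v_low, v_high = speed_of_sound(t_low), speed_of_sound(t_high)
print(f"speed of sound somewhere in [{v_low:.1f}, {v_high:.1f}] m/s")
```

The point being: the imprecision in the premise ("the hall's temperature") propagates into a bounded imprecision in the conclusion, which is something we can track explicitly.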
In the event that deductive reasoning needs to be used on approximate ideas, more precisely defined terms, along with an approximate truth value (e.g. 80%, 33%, etc.), can be used. The result would be something like the approximate likelihood of an outcome rather than a binary truth.
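One way to sketch this idea is many-valued (fuzzy) logic. The min/max (Zadeh) semantics below is just one of several possible choices, and the 0.8/0.95 degrees are made-up illustration values:

```python
# Sketch: degrees of truth in [0, 1] instead of binary values, using
# the common Zadeh (min/max) fuzzy semantics. This is one illustrative
# choice of connectives, not the only way to combine approximate truths.

def f_and(a, b):  # fuzzy conjunction
    return min(a, b)

def f_or(a, b):   # fuzzy disjunction
    return max(a, b)

def f_not(a):     # fuzzy negation
    return 1.0 - a

socrates_is_a_man = 0.8      # "true enough", not strictly true
all_men_are_mortal = 0.95

# Under min-semantics, the conclusion inherits the weakest degree:
socrates_is_mortal = f_and(socrates_is_a_man, all_men_are_mortal)
print(socrates_is_mortal)
```

Under these semantics the conclusion is never more certain than the weakest premise, which matches the intuition that approximate inputs give at best approximate outputs.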
in my opinion all truth is approximate: even things like math and logic, where we should know that the consequences of axioms are what they must be. but those axioms can only be defined in terms of natural language, and natural language is itself only approximate. the trick is just that logic and math are among the few fields where the only possible errors would have to be in the communication of a small set of simple rules, which turns out to allow them to be so nearly correct that we may ignore the approximation and still yield results which are extremely often useful for anything which may be approximately described in their terms. but i'll admit it's hard to really say much about anything, purely in this framework
Nice discussion. The concepts that you characterize as indeterminate, like swan, can be treated as open-textured concepts. Ordinary language is rife with them, and science usually starts with folk concepts from ordinary language. However, we can rationally reconstruct some of these concepts and converge upon greater rigor.
The validity of an argument allows us to check whether someone’s criticism of our conclusions affects our premises. Validity should be renamed the principle of the retransmission of falsity. If our argument is invalid then the criticism of the conclusion doesn’t affect our premises, which means that we have not opened ourselves up to learning whether our theory is false.
I think we just need to be clear about the scope of deductive logic. It's fine as a step in dialectic if the intended logical relationships are unclear to whoever you're talking to. It's a tool for communication. What follows dialectically is of course whether the ideal premises are adequate, which really is just the soundness phase of analysis. 'True enough' is a matter of consensus appraisal in the context of each communicative process. The aim is to argue to agreement as to the more comprehensive and coherent system of ideals. Rationality is purposive and idealistic, but most importantly a dialectic process.
Is this a dialectic between Idealist/Platonic and Realist/Materialist? Just a general question; my formal philosophical education is decades in the past and was mostly focused on cognition, etc. Thanks in advance for any friendly responses, regardless of position ☺
A thought that I have been stuck on for a while now is that all language is simply "point-grunt" communication. One points at a "thing" (object/concept/noun), and then grunts a sound. Another looks at the pointed thing, also points, and grunts the sound that he heard. This is what all definitions are. Approximation is obviously a factor, in that the two are looking at the same thing from different angles, and they cannot hear themselves the same as they hear each other. With modus ponens and deduction, I don't see it as anything other than nested definitions, which you seemed close to getting at. This logic is not measuring anything. It is all about meaning. Are you familiar with WordNet? It basically defines all words into nested syllogisms.
In summary: 1. Words are vague, so no statement is ever precise. 2. If something is roughly/generally true, its parts might not be (some organisms live for centuries, so human life is really short; but "human life is short" is not necessarily true by itself). 3. If some things are roughly true, then their combination might not be (the temperature at the North Pole is not that much different from many other places; the North Pole is lethally cold).
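Point 3 can be given a quick numerical illustration. If we model "roughly true" as "true with probability 0.95" (an assumption purely for this sketch) and treat the claims as independent, conjunctions degrade fast:

```python
# How quickly a conjunction of individually "roughly true" claims
# loses reliability, assuming independence and a 95% reliability
# per claim (both assumptions are for illustration only).

p_each = 0.95
for n in (1, 10, 50):
    p_all = p_each ** n
    print(f"{n} claims, each 95% reliable -> conjunction {p_all:.2f}")
```

Ten such claims together are only about 60% reliable, and fifty fall below 8%, even though each one on its own looked safely "true enough".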
If one form of 'approximate truth' is uncertainty, then we do have a logic for that. The Bayesian interpretation of probability as a rational degree of credence can be used to create probability logic. Ernest Adams defines an argument as p-valid if the improbability of the conclusion does not exceed the sum of the improbabilities of the premises. In simple cases this means a valid argument cannot have highly probable premises and an improbable conclusion. The logic differs from ordinary classical logic though.
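A small numerical spot-check of the inequality that p-validity requires, applied to modus ponens (A, A → B, therefore B). For simplicity this sketch reads the conditional as the material conditional, which is not Adams' own probabilistic treatment of conditionals:

```python
# Spot-check of the p-validity inequality for modus ponens:
# uncertainty(B) <= uncertainty(A) + uncertainty(A -> B),
# where uncertainty(X) = 1 - P(X), for random probability
# distributions over the four truth assignments of (A, B).
# Classical validity guarantees this holds for every distribution.
import random

random.seed(0)
worlds = [(a, b) for a in (0, 1) for b in (0, 1)]

def check_once():
    weights = [random.random() for _ in worlds]
    total = sum(weights)
    p = {w: wt / total for w, wt in zip(worlds, weights)}
    prob = lambda pred: sum(p[w] for w in worlds if pred(*w))
    u_A = 1 - prob(lambda a, b: a)
    u_AimpB = 1 - prob(lambda a, b: (not a) or b)  # material conditional
    u_B = 1 - prob(lambda a, b: b)
    return u_B <= u_A + u_AimpB + 1e-12  # small tolerance for float error

assert all(check_once() for _ in range(10_000))
print("p-validity inequality held in all trials")
```

The inequality also shows why long arguments are fragile: with many premises, the allowed improbability of the conclusion is the *sum* of the premises' improbabilities, which can grow large even when each premise is individually near-certain.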
I call it controlled scepticism. It's like when you imagine a mini-you thinking in your head, but you stop yourself from falling into an infinite regress of a mini-you inside a mini-you and just keep it at one level. Or we could doubt every object that exists, but out of practical necessity we shouldn't.
If there is One True Logic, why should we expect it to be inapplicable to say, arguments in ordinary language? Or arguments about empirical theories? This may be question-begging, but if there is One True Logic, I'd expect it to at least accommodate those cases! The upshot is, I'd expect the One True Logic, if there is such a thing (though I don't think there is), to not be classical logic. But that still leaves a lot of possibilities on the table. And to my eye, there are ways of taking approximation, imprecision, vagueness, etc. into account when we perform our reasoning, and that reasoning is still perfectly "deductive." We can just report our errors and uncertainties along the way. That's what we do in say, statistical inference. And there we find a perfectly deductive theory of non-deductive reasoning. Sometimes imprecision is actually very useful. Another example: What's the referent of 'n' in "let n be an even number?" Well, it's not exactly specified. All we know is that it's some even number. So in our reasoning about 'n' we're not allowed to use properties which aren't true generically of all even numbers. So we can't conclude that 'n' is divisible by four, say. And that's good! (There are different models of what's going on here, and they track pretty closely with different models of vagueness/sorites problems) And finally, what I'd say about classical deductive logic being an idealization is what I'd say about any other kind of idealization we use. It's merely instrumental. It's useful to reason "in the limit" because it has practical implications for deciding what to do and believe.
I couldn't agree more. These are tools that should be aimed at and used in practical matters. Expecting that any logical system will produce an ultimate truth can only lead to frustration and disappointment plus a palpable danger of confusion and disconnection from reality, mind you.
It is true that approximate truth isn't preserved in logical manipulations of content; that is why it is imperative that a working theory of verisimilitude be created. Verisimilitude is the ratio between truth and content. Many people have simply abandoned the verisimilitude project (started, by the way, by Popper). There has been some advance in the last couple of years: Mormann, for instance, has created a topological theory of verisimilitude that seems to get a lot of what Popper wanted out of such a theory.
This video alone is why probabilistic models of physics will eventually replace our current paradigm. The inseparability of bias, lingual encoding, and mathematical representation of systems.
The logical operations themselves aren't approximations, but the variable representations are approximations of reality, or just concepts in general; they don't have to have any ground in reality. "This statement is false" fails to account for the biological history of linguistic evolution as a byproduct of brain function. Its limited context is what makes the logical operations on the statement possible, thus allowing us to categorize it as an example of what we call a paradox. Without the reduction into simpler workable chunks, we can't make any statements like "that's an example of a paradox", which is a concrete statement about the original paradoxical statement. Math in general uses this form of reduction to make things workable for math people; it doesn't say anything about the accuracy of its representations, thus they are mostly discussed in the context of pure math without any attachment to actual reality. When you bridge the gap between the thought process and its relationship to reality, as in physics, that's when things become a bit wishy-washy, handwavy, etc.
What about the claim that "all the claims we make about the world are approximately true"? Whether that's an a priori or an empirical claim leads to different problems. But insofar as it is self-referential, I think the crisis is a bit overstated, at least with regard to formal logic and its useful application to natural language. I wonder whether Saul Kripke's Naming and Necessity might shed light on the problems of mapping what is rigid onto what is approximate. Thanks for this thoughtful video!
The idea that approximate truth can break formal logic is an interesting one, but I would argue the imprecise use of language doesn't make a strong case for idealizations being wrong, or even for their being of limited use (and by limited I mean to a degree that would significantly alter the reasons why we use them). I would say that at best, it makes a case for strengthening the precision of the natural languages that we use. To go back to the argument about the speed of sound in a particular room: just because there is no single speed of sound for the entire room (unless its temperature gradient field is null), it doesn't mean that there is no such thing as the speed of sound for particular pockets of the room where the temperature is exactly the same. It also doesn't mean we can't say the average speed of sound in the room is X. My point here is that just because there exists a continuous field of "speed of sound" in the room, that doesn't mean the whole concept should be thrown out. It's a bit of an equivocation fallacy, in my opinion, to equate the value of the imprecise language used with the value of the concepts underlying the context of a conversation. To give an example: just because I use the word blue to describe turquoise doesn't mean turquoise or blue are invalid or don't exist as colors, or even that the idea of precise color descriptions is invalid. At best, it means I used imprecise language and should be more precise next time I speak, if my intention is to be more consistent.
Yes, if you know what the 'room' even is and all conditions in it, you can be more precise. But can you ever know all the conditions and get a real answer, or is an approximation more meaningful? The point, as I read it, is not to dismiss concepts or say they don't exist. Think rather of Newtonian mechanics: it works, so long as you go nowhere near light speed. But logic demands that something be ALWAYS true to be true. So either we don't use it at all, or we admit it's an approximation and, technically, you can't use it in logic.
@@Leartin Not without computational explosion. Hence ideal laws of science that minimize unanswered variables as inconsequential to the desired measurement.
On the first point about approximate truth: abstract deductive reasoning, or reasoning based on abstract concepts, can be absolute, like math. But once applied to the real world, like math applied in physics, it becomes uncertain.
Approximate truths aren't a problem if the part that isn't strictly true is irrelevant to the argument, because if you say "a ∧ b, therefore a", this holds even if b is false. This means that even without strictly defining Socrates or any other concept, you can make a valid deduction as long as the relevant part is preserved.
My private conception of logic is entirely synthetic and approximate: we notice patterns in the world, we make up games where relationships between those patterns become moves in the game, and we play. That's logic and mathematics. But logic and mathematics aren't real; they're just games. And the universe is impossibly creative. It can always come up with something that doesn't fit into the parameters of your game.
It sounds like you’re saying that deduction can’t come from vague premises, so there’s something wrong with deduction itself. If that is what you’re saying, it doesn’t sound valid. Maybe the problem is just vagueness. Specify what you mean, then do the deducing. Problem solved. Once the exact conditions of the concert hall are specified, then there will be an exact speed of sound within it. Whether or not anyone can correctly calculate it is another matter not to be conflated either.
I don't think there's anything wrong with deduction. The point is that deductive validity preserves truth, not approximate truth, so that if we are aiming only for approximate truth, deductive laws do not strictly speaking apply. "Strictly speaking" is an important qualifier there: deductive logic is often extremely useful as an idealization. >> Once the exact conditions of the concert hall are specified In my view this is impossible to do, partly because there is no fact of the matter.
@@KaneB Well, supposedly, there will be a reason why you want to know the speed of sound in a concert hall under a set of conditions. Once you know what those conditions are, you can answer the question. You are correct that there are an infinite amount of conditions, but the key here is that they do not matter. Nobody wants to understand absolute truth. It would be as undifferentiated as reality without thought or conceptualization. It is wrong to say that there is no fact of the matter: there is no useful fact of the matter.
If descriptions are dependent on definitions, then an "object" is as defined. Definitions contain the subset of all possible values inherent to the object or word being defined, automatically. To stay consistent, we have to remove our perceptions of existence from the idea of existence. An extreme example: if, on average, we perceive an object to have qualia similar to those of a shoe, but the creator of that object has defined it as belonging to the class of tables, then it is a table, no exception.
I can't get past a sneaking suspicion that philosophy, although not just philosophy, persists not for its logical utility or any other downstream value, but because it provides a particular (perhaps peculiar) transmission medium for the generatively self-propagating invocation of more language. Philosophy is a language game without closure, because this is precisely how any complex adaptive communications system assures sustainable continuity in any context. Truth as a concept is eminently more valuable as a function of its uncertainty in anything other than the most trivial of circumstances, and the competitive narrative communications systems that develop around it are implicitly, perhaps unconsciously, oriented towards an asymptotic approach to closure. This is in many ways a corollary of the notion that psychological selves are inversely (or is that reflexively) defined by a goal-seeking that, if it ever arrived at closure and resolution, would entirely invalidate the difference and distance by which that self persists. So complexity in physics, biology, intelligence, language and logic persists as though possessed of an intentional agency of not finding resolution, truth, certainty, whatever. The point perhaps being that none of these systems actually possess impermeable or inviolable agency and certain identity; this is their hidden raison d'être. Logic is incomplete because it has to be in order to persist, and your linguistic, competitive and differential reflex to my ad hoc assertions confirms that the only winner in this game is language. TL;DR: Laws persist because they are faulty.
I do not believe that he has the patience required of someone who can produce a logical thought; the limitations of our human experience collectively prevent us from developing that level of diligence.
Thought truly cannot be quantified unless we create multiple systems that allow us to measure it in a way we can comprehend. As a result we get language and arithmetic. Unlike precise numerical values, which can be universally replicated across space and time, language is limited by the universal acceptance of certain words within the given lexicon of a language. Despite not being strictly finite, language is merely a way of transmitting information about specific emotions. As a result, logic is shaped by the limitations of the English language, because we are forced to think in this manner, using specific words to convey one's view. Until we can transmit our brainwaves and create a translation system, human communication will not evolve. However, I'd argue that our vocal systems are perfect by design and we must create a new standard of communication to break free of the limits of logic.
Could it be that Reason is flawed in that, if you push it to the nitty gritty, it always seems illogical? For example, there is a claim that is true enough about Socrates: that he was a man. Whichever definition you use for "man", it is true enough to support the inference that Socrates is mortal.
I think there might be a way to get exactly true descriptions of the world (think of something like "deductive" reference rather than inference), but just like with the typical problems that are raised about deductive inference, it probably wouldn't be super "informative" and we may not get the rich theories that we see in science.
Forgive my naivety as it's been decades since I studied quantitative theory in logic; but I offer that the main (dare I say zen-like-moment) tenet that I gleaned was that *everything* inevitably leads to if/then statements as the only real conclusion to be drawn from any discourse. Perhaps I over-simplified or got it totally wrong. The only thing I feel certain about all that is that I've lived more than 30 years from that vantage point, as I could never conclude otherwise, myself. It seems to be quite relevant to this discussion. What's the scope of deduction at any given moment, as dust specks in a dust storm.
I wonder how much of the logical issues and solutions aren't simply linguistics tricks. Terence McKenna argued that language preceded meaning, and, after delving into Wittgenstein's ideas, it seems to me that Terence made an important point that even the deductive/inductive models are simply a particular linguistic trunk trying to convey information.
So I guess the limit of logic is pretty much bounded by the human ability to validate the true nature of the premises; and since we can only describe the approximate truth of the premises themselves, we can only make do with the approximate truth of the conclusion drawn from them.
We could see logical arguments in the same way we see goods in a market. We could say logical arguments are goods exchanged in a free market of logical arguments. We contribute arguments to the market and receive arguments from the market. We get to use our logical arguments for as long as they can hold up against the forces of nature and discard them when they wear out. This is how we treat clothes, houses, and other goods on the market. With this view, we don't care about whether or not the logical arguments are true, only whether or not they are useful. This is a pragmatic view of logic. Rather than caring about the truth of the argument, we care about what it can help us achieve.
The impression this gives me is the following; please tell me if I misunderstood you: "if approximate truths or indeterminate concepts, combined with correct logic, however exact it is, and true premises, however accurate they are, sometimes lead to incoherent or false statements, and since all we have is approximate truths and indeterminate concepts, it follows that science and the conclusions or theories it reaches are fundamentally flawed, a bit like math without its rigour." I'd argue that you're missing here perhaps the most fundamental thing about science, what makes it what it is: that all theories, precisely because one knows what you're saying here, hold not because of the mathematical proofs that lead to them, but because they suit experiment! I feel that alone suffices to render "null" the point of your reflection (in the sense that it's not saying something new or "interesting", and that shouldn't be taken as an insult, of course; I also feel its point is either to say math is greater than science, or that philosophy is bigger than or overarches it, and I think both are incorrect), since correctly explaining what was observed, and predicting what is to be observed, is what discriminates good theories from bad ones in science. As such, the scientific approach (at least for the field I know best, physics) is just about associating indeterminate physical concepts with mathematical objects and using indeterminate truths about those physical concepts to infer conclusions using exact math, conclusions which, once translated back to the world of physics, as previously stated, obviously don't hold the same certainty as their math counterparts. Then one decides whether or not the original mathematical correspondence and the approximate truths were "good enough" by checking whether experiment is coherent with the theory. Repeat. That's why it always gives me a smirk when philosophers try to englobe science in their field of thought; there's really not much to do! All they're really doing is englobing the scientific approach, not science! That's the procedure science follows; it's easy, makes sense, and works more accurately than anything else we've ever had, and, although philosophers focus on that, it's really not where the difficulty and the beauty of science reside! In physics at least, it comes from the intuition one gets about the world and the entanglement of those intuitions to form and create new understandings, an epiphany of sorts! That a linguist is able to spell the words music, philosophy, or physics doesn't mean he can play the first, practice the second, or understand the latter; still less does it mean linguistics 'contains' music, philosophy, or physics, and it certainly doesn't overarch them! As I said though, this is only my impression, and maybe I misunderstood what you're trying to explain.
I interpreted you as saying that natural language sentences are at best approximately true, but if that's an accurate description of what you're stating, then I don't know what this has to do with true-simpliciter and false-simpliciter well-formed formulas in logic (at least if we are dealing with classical logic). These formulas are certainly not approximately true in any model-theoretic interpretation or syntactic proof, and neither are the truth-functional operators (or, and, negation, entailment) approximately true under the formation rules of logic. It seems to me there's an equivocation here: truth-values (whether interpreted semantically or syntactically, as truth-preservation over well-formed formulas) work completely differently in this sort of assertion-making than they do in natural languages, so these are not really limits of logic but of natural languages. I find a similar equivocation in the arguments of Gillian Russell's papers on logical nihilism. Besides, some of the points you raised regarding proper names don't seem satisfactory to me. Take the proper name Socrates: you say that the boundaries of such a concept are vague because different people have somewhat different thoughts in mind when using it, and even the same people will have different thoughts in mind across time. But this doesn't show that the concept of Socrates is vague, only that the usage of the concept is vague, if one maintains that reference and usage can be distinguished. One could maintain that the concept of Socrates does have a fixed reference as "that very object which was originally baptized as Socrates". And even if you can't tell whether that object exists, you can amend the theory of meaning with something along the lines of Fregean senses for empty names, where empty names are mapped to the singleton of the empty set.
It seems to me this shows (with some supplies of Leibniz's law) that vagueness is not to be analyzed in terms of reference and pairs of objects/properties, but rather in some of the ways we use the language or certain (but not all) concepts and sentences therein.
So an approximate truth in logic can yield a valid deduction? Doesn't the fact that the individual object isn't intrinsically defined within its own concept invalidate an assumption within the deduction?
Milk is black, therefore a graph with six nodes has to have some three of them all connected or all disconnected. The premise doesn't even have to be approximately true; the implication is true as long as what the arrow points to is true. But then why even state the approximate truth? If you're not going to use deductive logic, you may as well omit the long talk before making your statement.
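The consequent in this comment is in fact a true theorem (the Ramsey number R(3,3) = 6), which is presumably why it was chosen: a material conditional with a true consequent is true no matter the antecedent. A quick brute-force verification:

```python
# Verify: every 2-colouring of the 15 edges of the complete graph
# on 6 nodes contains a monochromatic triangle (R(3,3) = 6).
# There are only 2^15 = 32768 colourings, so we check them all.
from itertools import combinations

edges = list(combinations(range(6), 2))       # the 15 edges of K6
triangles = list(combinations(range(6), 3))   # the 20 triangles

def has_mono_triangle(colouring):
    colour = dict(zip(edges, colouring))
    for a, b, c in triangles:
        if colour[(a, b)] == colour[(a, c)] == colour[(b, c)]:
            return True
    return False

assert all(has_mono_triangle([(mask >> i) & 1 for i in range(15)])
           for mask in range(2 ** 15))
print("every 2-colouring of K6 contains a monochromatic triangle")
```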
Physics and mathematics are not absolute and are continuously evolving. Theories are refined over time when new observations contradict established scientific notions. Theories are mental paradigms of sorts that help in understanding culture and civilization in a broader context.
This sounds more like a "limitation" of natural language than of logic. 9:57 If the premises are approximately true, then it is possible to derive an absolutely true conclusion. The distinction between absolute truth and approximate truth is meaningless, as approximate truths can be used to express absolute truths and vice versa.
P1 Concepts are indeterminate. P2 If concepts are indeterminate then the results of logical manipulation are indeterminate. ∴ The results of logical manipulation are indeterminate. In other words, if p then q. But the results of logical manipulations are indeterminate, so we can't believe in the validity of "if p then q"; therefore we shouldn't doubt the determinacy of logic. But if we shouldn't doubt the determinacy of logic, then we *can* believe in "if p then q"...
While the variability of precision does of course affect the truth outcome, it is not important if it falls within the range of acceptable significant digits, hence the acceptance of the +/-. Real life is quite forgiving of being a little fuzzy around the edges.
At its foundation, do you think logic is prescriptive or descriptive? I personally think its prescriptive which gives us issues such as the white-black swan dilemma that you mentioned. I'm not convinced that capital "L" Logic exists independent of the human mind (such as perhaps how Bertrand Russell argued for with his Logicism). What do you think? I enjoy your work and thx for coming on our show a while back👍
Perfectly representing a system by subjectively selecting variables for their relevance to the system is a fallacy. Asking for the speed of sound is a semantically broad request. When we model systems, the question would look more like "what's the best way to construct the opera house in order to maximize sound quality?". In this case the speed of sound in the opera house is a consequence. Our modeling of systems will likely always yield approximations, but language will always be insufficient for representing reality, simply due to semantics and interpretation.
Would or could the value of the speed of sound inside the opera house be something like the entire set containing all possible measurements under all possible constrained definitions of the opera-house object?
Well, from what I understand, it seems to me that this approach to the concept of approximate truth is a new way of relativizing the concept of truth, which, as we know, was received and accepted by the Second Vatican Council and certainly constitutes the biggest mistake currently affecting and degenerating our culture. As I have been away from the study of Logic for many years, I think that the concept of tautology is what is applied to consider a formula or proposition a reliable bearer of truth, as it is in itself, as the Greeks adopted it and much of human history did thereafter.
6:54 - that's when I realized the answer you are skirting. .... it's all deductive. Your starting approximation is itself a deduction. And I would say, yes, any deductive reasoning must follow the approximation. If you do a thought experiment you have to adhere to the rules you set up for the experiment. But the bigger point is that you are letting the imperfect nature of language weigh on your progress. Things don't have to be described in language in order to understand them. In fact, it's the reverse; with understanding come the words to describe. And, THAT is why nobody else will know what you are talking about, lol Peace
logic is logic = logic help me this video blew my mind dude lol
If there are no laws of logic, it doesn’t necessarily follow that there are no laws of logic.
@@lawrencedoliveiro9104 hahah
@@luisapaza317 Just funny to you, or do you get the point?
@@lawrencedoliveiro9104 just a curious thing, nothing more
@@CjqNslXUcM thanks!
Chaos theory, good analogy
I would say that even if you input definite, non-approximate truths, you can't be sure logic holds, but that is more about you being human.
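The rounding "flip" described in this thread can be reproduced in a few lines, using the f(x) = 1/(100x - 0.01) example from the reply above. The function has a pole at x = 0.0001, so rounding the input 0.1 down to 0 swings the output from roughly +0.1 to exactly -100:

```python
# The "flip": f(x) = 1/(100x - 0.01) is wildly sensitive near its
# pole at x = 0.0001. A seemingly harmless rounding of the input
# (0.1 -> 0) crosses the pole and flips the sign of the output.

def f(x):
    return 1 / (100 * x - 0.01)

exact = f(0.1)           # about 0.1001
rounded = f(round(0.1))  # round(0.1) == 0, and f(0) == -100.0
print(exact, rounded)
```

A tiny approximation in the premise (the input) produces an output that isn't approximately anything like the true value, which is exactly the worry about approximate truth under deduction.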
Suggestion for new video title - *“How Approximate Truth Breaks Deductive Logic”* - or something. I think the current one is a bit too general :)
Anyway, great lecture! I am working with inferences and natural language atm, so these are concerns that give me sleepless nights :P
That would be an excellent title if the goal were to accurately describe the content of the video. However, the actual goal is to get people to click on the video, and I think a more general title is more likely to encourage that.
@@KaneB One could say that the title is rather approximate
He is not considering a priori knowledge.
This is the first video of yours I watched. Based on the thumbnail I was hoping you would talk about some unique case where boolean algebra doesn't work. I guess I now watched a philosophy video about approximate truth.
Yup, same
Socrates is, in reality, immortal. This upends the whole argument.
I don't think the concept of "approximate truth" is well defined. There are probable truths, and there are approximations of various measurements, but to say that something is approximately true is just to say that it's false. My perspective on the matter is that a given theory is nothing more than a self-contained abstract system, with no necessary relation to reality. In that sense, valid deductions in a theory are absolutely true *of the theory*, so we don't need to worry about what sorts of deductions are valid. What we actually want is not approximate truth, I think, but approximate measures. For example, I don't care if the electric potential of my capacitor is *exactly* what my theory predicts. I know that the prediction is false, and it doesn't matter. I only care that the predicted measures are accurate enough to build a working circuit. If a specific measure, as predicted by a theory, has been tested and verified to be accurate to reality within some amount of error, and that error is tolerable, the model is useful.
To give an abstract example, consider the problem where we pick some positive number x>0, and we want to know if the infinite sum sin(x)+sin(x^2)+sin(x^3)+etcetera will converge. We know that, for small x, we can approximate sin(x)~x, and under this approximation, our sum is just x+x^2+x^3+etcetera. We know this sum converges when 0 < x < 1.
I get what you mean, but I'd say the way that "approximate truth" gets used in the video indeed means more than just falsehood. It is true that theories are self-contained logical systems, but their application gives so much more than just an approximate concrete answer. A theory, and a model that is part of that theory, can be used to study phenomena qualitatively in an effective way. The thing is, we just need to be aware that the theory doesn't correspond to the truth. But the practical aspects of the conclusions of the theory can indeed work in reality in a way that (qualitatively) it is as if they were indeed the truth, defined by the theory (for example, as if the "Sydney Opera House" were indeed an objective and unique concept which concerns us as the problem solvers). And proof of this is, of course, the success of any science, be it an "exact" science or another natural or social science.
This was my thought as well, if you explicitly say "|x-a|
So is truth utilitarian?
@@eduardoz398 I don't know if it makes sense to say that *truth* is utilitarian, but I personally am pretty utilitarian, so it's natural to me to justify the concept of synthetic truth in utilitarian terms. That's not to say that I don't have other philosophical reasons that guide my attitude, I do, but those arguments are pretty esoteric and I wouldn't be able to do them justice in a RUclips comment. The utilitarian argument is probably the most convincing to others, since it makes the fewest assumptions and is compatible with many different philosophical perspectives. The relationship between reality and any given synthetic system is often tenuous, but using synthetic systems to approximate reality has been a very successful strategy, historically.
can you give examples for each term you put out? just for a better guide.
Since the year started, I've been seeing more and more small YT channels talking about philosophy, philosophers, logic, etc. Sometimes I love the algorithm.
Here because of the folks from Answers In Reason! Subbed!
After watching this, I feel like the "limits of logic" you've listed come down to variables changing and, as you mentioned at 10:08, us being unaware of or disconnected from the "one true logic/objective reality", which I think we should always strive towards. The Sydney Opera House example you gave was made vague enough for use in this video, in my opinion; the example could've been "What was the speed of sound in the Sydney Opera House on May 2nd, 2009 at 2:35 PM Sydney AEDT sharp?" instead of just "What is the speed of sound in the Sydney Opera House?" This reminds me of the differences between set theory and homotopy type theory (in my understanding, at least), where set theory is more "vague/loose" with what is and isn't equal. For example, if all elements of set A are in set B, and vice versa, then A=B; however, in HoTT, two "types" (X and Y) that would be equal if interpreted as "sets" in set theory have more dimensions that could be used to distinguish between them, such as location and time (this also relates to your comments on "equivocation" at 10:38). All that to say that the question "What is the speed of sound in the Sydney Opera House?" is too approximate a question to have anything but an approximate answer. Just my initial thoughts, and me being completely wrong is a definite possibility.
to acknowledge that "truth," in some "objective" sense, is never actually accessible, and that "approximate truth," in some utilitarian sense, is all we can grasp, is the first step in realizing that "truth" was never the goal to begin with. Any given "theory," in any field of inquiry, has a degree of utility. That utility is all we need to focus on, while "truth," as classically conceived, can be safely ignored.
@@ryanziller220 so, on a year old comment you are speaking gibberish to make some kind of point, is that it? How droll.
I confess, that OP is a bit convoluted, but perhaps it makes more sense in the context of the video, I don't remember. But the points seems valid: We can only identify "objective truth" by way of our experience, which is why we can focus on our experience and leave "objective truth" to be as it may and yet still be none the worse for that. This is pragmatism.
@@ryanziller220 I expect that my comment is one you either understand the utility of, or you don't. If you do, then we can use it to work together in whatever ways we see fit, and if you don't then we can't. But beyond this understanding and utility, I don't know what "some kind of truth" you are expecting.
@@ericb9804 @ericb9804 A truth is never related to those cognitive environments wherein the objective reality of an individual is expected to be used as he or she mulls over of those facts which are directly related to the proven insights to come of actuality in so much that distortions or dissonance can be misused. Did you expect that you would have provided some kind of truth with those statements which you brought up in your comment?
@@ericb9804 Objective truths do not exist if you were right. Subjectivity negates the need for there to be objective truths. We already know that subjectivity acknowledges the limitations for seeking the meanings behind any truth. Instead, there are apodictic descriptions and certainties which support the truths of subjectivity. Nothing expressed by me was meant to suppose the utility of an unproductive thought. You must be seeing nothing while reading the comment, with that having some relevancy in regards to your approval for misconceptions.
@@ryanziller220 yeah. ok.
From a certain perspective, the study of statistics is first and foremost a way to assign quantities to levels of confidence and perform arithmetic with them. If A and B, then C; A with 90% confidence, B with 90% confidence, therefore C with 81% confidence (assuming A and B are independent).
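Under the (strong) independence assumption above, that bookkeeping is just multiplication. A minimal sketch in Python; the function name and the 90%/90% numbers are from the comment, not from any standard library:

```python
def chain_confidence(premise_confidences):
    """Confidence that ALL premises hold, assuming they are independent."""
    conf = 1.0
    for p in premise_confidences:
        conf *= p
    return conf

# A at 90%, B at 90%: (A and B), hence C, at 81% (up to float rounding)
print(chain_confidence([0.9, 0.9]))
```

Note that without independence, 81% is only a point estimate; in the worst case the joint confidence can be as low as 80% (by the inclusion-exclusion bound 0.9 + 0.9 - 1).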
There is a lot of Bayesianism in what you say, it is very nice.
I personally like to think of logic as a special case of a probabilistic system, one that would give a value between 0 and 1 to any logical statement such that it would satisfy several axioms, like the value of "false" is 0, the value of "true" is 1, and if "a" and "b" can be proven to be disjoint then P("a or b")=P("a")+P("b"). (I do not assign any values to these probabilities myself, they just have to satisfy those axioms; and, for instance, classical logic, which gives only the values 0 and 1, could in principle be such a probability distribution.)
Now you can play around with this and, for instance, define conditioning, which is kind of fun. For instance, if P["if a then b"] is high (higher than 1/2), then P["b"] is lower than P["b" | "a then b"]; this means that the knowledge that "a -> b" increases the probability of "b" and, by symmetry, decreases the probability of "a". If you apply this in maths (or science): when you have an axiom "a" and you prove a theorem with it, you increase the probability that the theorem is true, but you decrease the probability that the axiom actually is true. I like how it resonates with the foundations of maths and the way we are trying to change them completely every now and then.
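The conditioning claim above can be checked on a toy distribution over the four truth assignments to "a" and "b". This is only an illustration under assumed numbers (the 0.3/0.2/0.2/0.3 weights are arbitrary), reading "a -> b" as material implication:

```python
# Probabilities over the four "worlds" (a, b); arbitrary example values
worlds = {
    (True, True): 0.3,
    (True, False): 0.2,
    (False, True): 0.2,
    (False, False): 0.3,
}

def prob(event):
    # Total probability of the worlds where `event` holds
    return sum(p for w, p in worlds.items() if event(w))

a = lambda w: w[0]
b = lambda w: w[1]
implies = lambda w: (not w[0]) or w[1]   # "a -> b" as material implication

p_b = prob(b)                                                     # 0.5
p_b_given = prob(lambda w: b(w) and implies(w)) / prob(implies)   # 0.5 / 0.8
p_a = prob(a)                                                     # 0.5
p_a_given = prob(lambda w: a(w) and implies(w)) / prob(implies)   # 0.3 / 0.8

assert p_b_given > p_b   # learning "a -> b" raises the probability of b
assert p_a_given < p_a   # ...and lowers the probability of a
```

Conditioning on "a -> b" rules out the (True, False) world, which can only remove probability mass from "a" and add relative mass to "b", matching the comment's claim.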
Yo, I have never been oriented to think of logic as a special case of a probability system. Wow, my friend, you are exceptional; it's a pretty simple but fundamentally different concept.
I'm trynna wrap my head around what this practically means.
This is awesome thinking though
Precise language will not help. In "Proofs and Refutations", Lakatos shows that even in mathematics, statements have hidden lemmas that will falsify the claims. What matters is if we deal with the hidden lemmas in a progressive way that leads to new knowledge.
I don't think language can be fully precisified anyway. The concept "species" can decompose into various species concepts: the biological species concept, the ecological species concept, the cladistic species concept, etc. Then each of these can decompose into further concepts: the biological species concept can be understood in different ways depending on how we define what counts as reproductive isolation. Since concepts are indeterminate, it will always be possible to draw finer lines.
@@KaneB Yes, and each of these finer lines is a hidden lemma that can falsify the claim about species. But again, what matters is how we deal with the hidden lemmas. If we save the theory from refutation by changing the definition of "swan" to include "whiteness", we are degenerating the theory because it does not lead to new knowledge. If we instead discover biological reproductive differences between coscoroba swans and other swans, then we have progressed the theory. Hidden lemmas and finer lines will always persist, but so what, if we can use them to our advantage and learn new knowledge, gaining a finer grasp of comprehension.
In the Sydney Opera House idea, the key is to clearly state the exact premise so that some logic can be performed on it. Problems like this often crop up in programming, where some user/business functionality is unclear, and further requirements/specifications need to be spelled out so that a program can perform automation/logic on the premise and reach the intended conclusion.
Oftentimes, the input needs to be transformed, and edge cases for scenarios need to be tackled. Most programs are deterministic models of logic, unless you are looking at the probabilistic models used in recent AI programs.
Edit: Many of the problems that you've gone over arise because the logic is being performed on natural language (which is ambiguous and prone to evolving). This is why pure logic is done via programming languages, which are precise.
Great response
I'm also a programmer, and I disagree with you: even in programming, the best we have are useful approximations, because we build things based on assumptions that are subject to interpretation, and with limited resources to handle the problems.
Even in the most specific implementation, we are doing inference based on subjective concepts and incomplete models.
So our inferences are not precise, because we are limited by the computational power of both the computer and our brain, and those inferences are based on approximate concepts, which gives us a good enough response, but we can't say this response is completely correct.
Although it's true that you can't necessarily maintain approximate truths through a deductive chain of reasoning, if you treat claims as having probabilities of being true, then you can consider every possible branch individually and sum their respective probabilities using Bayes' Rule to deduce approximate truths. There is one problem with this, which is that this process can be highly sensitive to the initial approximate truth values you assign to your premises, so if the chain of Bayesian reasoning is long enough, you could arrive at approximately true conclusions which nonetheless seem very unlikely.
All of this is to say that usually this works. Using logic and by extension probability theory (Bayes' Theorem) along with inductive reasoning (investigating the approximate truth values of the initial claims/premises) tends to work. "The Unreasonable Effectiveness of Mathematics in the Natural Sciences" is an article, after all.
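The sensitivity worry is easy to see with a long chain of independent steps, where a small change in per-step confidence compounds. A sketch under assumed numbers (the step count and probabilities here are made up for illustration):

```python
def chain(p, n):
    # Probability that every one of n independent steps holds
    return p ** n

# A modest difference per step (90% vs 95%) becomes a large
# difference in the conclusion after 20 steps:
print(chain(0.90, 20))  # about 0.12
print(chain(0.95, 20))  # about 0.36
```

So a 5-point shift in the confidence assigned to each premise roughly triples the confidence in the conclusion of a 20-step chain, which is the "highly sensitive to initial truth values" point above.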
I have this concept that theories can be *creative* and generative of reality. For example, in computer science, somebody will come up with a theory based on observations of the world, and it'll be directed towards some end, and the theory is like a possibility: it is descriptive of a possibility. And then if it works out, it creates a real-life situation that is well described by the theory, and proves the plausibility ("That was a true description of a possibility, and we know that because we could do it.")
The truth of a deductive conclusion is contingent upon the truth of the axioms, and the truth of the axioms cannot be determined deductively because you run into an infinite regression. Premises can only be demonstrated inductively, and induction is necessarily probabilistic.
Is that a deductive conclusion?
@@lawrencedoliveiro9104 Yes, its a deductive conclusion about induction, and this deductive conclusion about induction can only be verified inductively
@@grahamhenry9368 Can you prove that?
@@lawrencedoliveiro9104 Perhaps re-read my comment? I literally said that this claim can only be verified inductively. Induction does not allow certainty or proof, only probabilistic confidence
@@grahamhenry9368 Sounds like an unsupported assertion.
6:14 Suppose the sentence "the only thing we can get are approximately true descriptions of the world" is true; then that sentence is, by definition, only an approximately true description of the world. That means that there's a sense in which there exist descriptions of the world that are not only approximately true but also strictly true (given that the sentence "the only thing we can get are approximately true descriptions of the world" would be strictly true if that weren't the case). But if strictly true descriptions of the world necessarily exist, this contradicts our premise.
It's like saying "everything is subjective", which means that what you just said is subjective. But if that is the case, there exists a subjective frame in which everything is not subjective; and if something is objective in a subjective frame, it has to be objective in all reference frames, therefore contradicting the premise. It's an argument similar to the barber's paradox.
All communication involves translation, which is Abductive.
One could reason thusly: Everyone is playing the abduction game, whether they know it or not, and it is up to the cognizant to let those that call "foul" when truth can't be pinned down precisely continue to play the game with their own made up, extra rules. Just... over there a ways, at the kids table.
Thanks for another great video. I enjoyed it and might possibly agree with large portions of the arguments... I think. Bayesian Regress imminent while I sort it out. ;)
absolutely, regress to the mean. there is no certainty, so which is the most probable 'mean' result.
"All communication involves translation, which is Abductive.
One could reason thusly: Everyone is playing the abduction game, whether they know it or not...."
Isn't this a deduction?
I mean, I like your post, but I also like shitposting, so here we are.
@@YawnGod Nah, abduction is a proper kind of inference.
And it is currently the most important in scientific practice. For example, when looking for the best form of medical treatment, pure deduction is not used; the current method is abductive (evidence-based medicine), where you gather several hypotheses about the patient and various evidence about the context. And the result will always be approximate, never absolute: even if you received the best true clinical proposition last year, this year you may receive a completely different true answer, as the evidence may have changed.
@@claytoncardoso4538 Ah! So have you had your 5th booster yet?
@@YawnGod So let me do a little test:
A. I perceive that you do not understand the basics of logic.
B. You make a joke about vaccines.
C. You write like a psychotic.
If my conclusion is: you are a complete imbecile. Is it deductive, inductive or abductive logic?
As someone with scientific training, I've never really considered anything other than "approximate truth" to be "truth" when we use that word colloquially. Or something like "an idea that provides explanation and allows us to interact with the world" - that's plenty enough for me, I don't hunger for more. The banal (to me) kind of deductive truth, where the conclusion is just the premises rearranged, is something I've never taken very seriously, it seems to have no application when it comes to understanding reality, outside maths.
I've never considered this question before (and I know nothing about logic), but off the top of my head, it seems to me that logic is a post-hoc analysis of the cognitive tools we've evolved to understand and interact with the world. Should our expectations be particularly grand? I think there's some structure to reality that makes logic and maths work, and allows maths to describe nature - and that's just how nature is.
For someone with contemporary scientific training, 'approximation' is directly proportional to the measure theory applied. For example, choosing infinite strings of "real numbers" as a measure entails that everything is approximate and nothing exact; choosing a finite level of the Stern-Brocot tree as your measure, every numerically valued measurement gives either a rational value or a value between neighboring rationals.
On a very general level, a prenumeric measure theory can consist of only more-less relations, and equality can be defined as neither more nor less.
_"The banal (to me) kind of deductive truth, where the conclusion is just the premises rearranged, is something I've never taken very seriously, it seems to have no application when it comes to understanding reality, outside maths."_
But the process of rearranging premises is useful in pretty much all fields of science. Whenever you solve the equations of a model for some variable of interest, you are performing this kind of deductive reasoning. There would be no point in creating models if you were not interested in deducing the logical consequences of the model's assumptions. The problem is just that the uncertainty in the model, the measurements and numerical solvers forces us to think about validation and error bounds. It would be more appropriate to describe the perfect objects of interest in maths as banal, compared to the real world, than to describe the idea of deducing logical consequences as banal.
@@MelodiousThunk Deductive (mathematical) reasoning is of course absolutely crucial to our understanding of the world, I'm not saying that it isn't important. I'm saying that when we ask what we mean by "truth", citing the fact that Z follows deductively from X and Y seems to me a banal response. Yes, sure, Z follows deductively from X and Y, but what I care about is whether X and Y are "true" and what that means. And as I said, I think it's something like "X and Y are ideas that provide explanation and allow us to interact with the world".
@@jonstewart464 I don't know what examples of premises and conclusions you have in mind, so I can't comment on the value of these conclusions. I suppose they would be banal if they were obvious consequences of the premises though. But in general, it can be extremely difficult to deduce some logical consequences of a given set of premises, and solving such problems can greatly enhance our understanding of the world. E.g. there's a million-dollar prize for answering a seemingly basic question about the nearly 200-year-old Navier-Stokes equations, because it's thought that this will bring us closer to understanding turbulence.
@Jacob B >[True is] always "degrees of true" or "probabilities of truth" or "on the spectrum of truth"
Is that true only to a degree?
Ah yes, the paradox of the heap. I don't think it is even a paradox, but it's still definitely beneficial to think about.
The more and more formal mathematics and logic I studied, the clearer it was that we were just winging it regarding its application to the real world.
The more you see proofs, you realise there's no way the real world could be so strict.
Then you see model theory and you realise that we've privileged some mathematical axioms, deductive systems, and results with our attention, but out of practical concerns.
We have tunnel vision about mathematics: we've historically only focused on results in those parts of mathematics that can be approximately applied to the real world, and now most people think that that's all the math there is and that the logical system that supports it is the one true logic.
Regarding the opera house: once you define your parameters, you could ask "did sound travel from point A to point B in X seconds, as predicted in a hypothesis?" The answers Yes or No would be logic. What the approximate time was for sound to travel such a distance is a tentative, physical question that requires scientific experimentation, but "did the sound travel as fast as predicted, Yes or No" doesn't require relentless testing: it either did or didn't. You couldn't even practice science without laws of logic.
Is this just a matter of definitions rather than logic? We experience complex patterns and name subsets of them when we spot sufficient similarities. If this is useful for living and communicating, then we don't need to make stricter definitions... until we need to do so. Everything is approximate. Is there more being said here?
We study logic in school and I understand none of it, despite actually liking CPU logic circuits as a child. I still think that knowledge is very limited, if it can actually be acquired at all, and it is really hard to deal with the irrationality of the world we live in. I hope the philosophy guys are doing well, because they think a lot.
This is basically the same idea of the vSauce video "Do Chairs Exist." Another great illustrative example is that when asked "which ship is the real ship of theseus", the problem is not one of identity, but one again of natural language. If the real ship of theseus is meant to mean the one that is, say, registered under port authority as the ship of Theseus, being used actively as the ship of theseus, then it's clearly the new one. If the real ship is meant to refer to the collection of original boards, then it's clearly the old one. It is a matter of the lack of precision inherent in natural language
I'd say not only language but logic as a whole.
6:00 That you can only get approximations doesn't mean that the phenomenon doesn't exist; perhaps that truth is instrumental, but it's on the way to truth. If the prerequisites of a category can exist, the category can exist, like the speed of sound in that place. Is there a material medium? Yes? Then a speed of sound exists.
There has been since Carnap or maybe Tarski an attempt to understand the content of a theory. Popper had two notions of content: firstly he had logical content, which is all the claims that it logically implies (possibly excluding tautologies, since they have zero content) and then there is informative content, which is any state of affairs or theories or statements or propositions that the theory rules out.
Popper thought that nothing (empirical) follows logically from a universal theory unless supplemented with auxiliary hypotheses. Yet the theory can exclude states of affairs without auxiliary hypotheses.
My point here is that content is always best construed as the set of models that are excluded by the theory. It makes a lot of things logically neater.
Approximately true theories exclude more true models - in the model theory sense - than they should (in other words, they claim that certain things are false which are in fact true). It is in this sense that you should understand approximate truth. The reason is, as Dummett said, that what is primary about a sentence is what it rules out. So you could say that approximately true theories claim things are true when they are false. But the things they claim are true exclude other things from being true, and that is what the (informative) content of those claims is.
My other point is that in order to see what a theory rules out, you have to entertain it as true. So your argument that approximate truth somehow makes deductive logic inapplicable is mistaken, since it construes logic (like most logical explanations) as a mode of justification when it is simply a way to manipulate content: to show what is included in what.
just discovered this channel and i m glad
When talking about the different ways the same word can be used, such that deductive logic doesn't hold, you could prove or assume that an implication is true for all the different understandings of the word, allowing for deduction despite the uncertainty of the exact meaning.
A quick note: I think logic is an artwork expressed by us humans to approach truth, to make the world around us "usable". Precise things are maybe a thing for the future.
Point on the swan stuff: speciation is a human consensus, understood to be an approximate determination. Every individual is unique, and every species has similarities with earlier points on the tree of life. This is a reality of the tree of life itself: it is seamlessly connected, repetitive, and variable, where its variation is something that leads to the very proliferation of complexity that demanded the need for a tree of life in the first place. The things that could be said of biology that are strictly true are already so self-evident that they are rarely imagined or dictated: all life lives, all life comes from other life, life on this planet can only be compared to other life on this planet, etc. Biology seems to be a sort of hopeless attempt to categorize nature, where nature as such can be understood as that which defies categorization.
A good example of a conclusion offering less information is basic equality. That two things are equal doesn't tell you what they are. And this can be extended to a concept of similarity, which is an approximate equality, sort of. Obviously things become mighty confusing from there, because you have to qualify the error in similarity to be able to wield such a technical monstrosity. What is similar enough? What are the domains of comparison? And who is judging whether it actually means something? Ergo, similarity is a potentially mighty dangerous concept, if one only boils it down through logic.
Imagine a gradient from white to black on a finite line. Is there a clear point of difference between where white begins and where it ends in relation to the black colour? No? Does this mean that there is no such thing as the colour black or the colour white? No? See where this is going? Just because I cannot define what a chair is (because the varieties are endless) doesn't mean that there is no such concept as a chair, and doesn't mean I can't state facts about the chair. Food for thought.
Your voice was just swirling around and around towards the end. Honestly, the best Soviet philosophy already overcame this concept like 7 decades ago.
Truth is always "tensed."
I'd say as soon as you start thinking about the real world you've technically stopped doing logic and started doing science. Approximate theories that approximately conserve approximate truth are what science is all about, the key is in keeping track of your uncertainties.
Agree.
So, when you are discussing the distinction between logic and science in this way, does that discussion come under logic or is it science?
@@lawrencedoliveiro9104 Neither.
@@commentarytalk1446 So, not either logical or scientific?
@@lawrencedoliveiro9104 The question is neither. It's semantics.
Applicability of any logic to an appropriate domain of objects is fundamental to the natural-language application of logic, but perhaps only approximately. However, great precision has been devised out of approximate beginnings. So it is the vector of demonstrable returns, understood partly in terms of empirical testimony and partly in terms of logic's internal apparatus, that gives the ultimate sense of logic's truth beyond merely formal consistency. And a demonstrably growing trend of a theory about the world becoming more precise, even if only approximately so, is the theoretical and practical unit for assaying what truth there is in a system of thought, even in logic systems, though they are so many and of such different types. So a formal theory of all logic would have to account for this, in a paradigm that would be a form of metalogic not yet fully articulated in the field.
How cold is your house for you to be wearing gloves indoors?
Technically, you can “do anything” in infinite-order metalogic.
I guess you could take the argument as concerning not logic in general, but all logical systems other than whichever specific system you think solves the problem. This will probably still have interesting consequences for the application of logic in philosophy and other inquiries, since it's unlikely that most philosophers often use the system in question when assessing arguments.
huh? how so?
@@funktorial Are you familiar with infinite-order structures?
I’m confused on two points: 1. what do you mean by infinite-order here (do you just mean higher-order logic?) and 2. what do you mean by “do anything”?
@@funktorial I mean the “limit” of higher-order logic, this being metalogic. By “do anything”, I mean capture any conceptual or causal expression in principle, as metalogic has the capacity to function as the metalanguage of science, art, and “spirituality”.
Traditional deductive logic is designed as a two-valued logic, which can be quantified as 0 and 1; there is nothing in between. If you are looking for an approximate logic, probability theory is it, which leads to "partial truth" by percentage. In probability theory, 0 and 1 are merely two possible cases among many others.
Seems to be saying that we're just playing a language game a la Wittgenstein and that we narrow or widen the scope of this language game depending on context
Sound in the Sydney Opera House does have a specific speed along a specific path between two specific points at a specific moment. It is only our perception that has no specific grasp of it.
In the event that deductive reasoning needs to be used on approximate ideas, more well-defined terms, along with approximate truth values (e.g. 80%, 33%, etc.), can be used. The result would be something like the approximate likelihood of an outcome rather than a binary truth.
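This many-valued idea can be sketched in code. Below is a minimal illustration using one common convention (Zadeh-style fuzzy operators, taking min/max for conjunction/disjunction); the technique and the truth values are illustrative assumptions, not the commenter's own proposal.

```python
# A sketch of many-valued truth: truth values are floats in [0, 1],
# combined with the Zadeh fuzzy-logic operators instead of Boolean
# and/or/not.

def f_and(a, b):
    return min(a, b)

def f_or(a, b):
    return max(a, b)

def f_not(a):
    return 1.0 - a

# Hypothetical degrees: "Socrates is a man" is 0.8 true,
# "all men are mortal" is 0.95 true.
premise1, premise2 = 0.80, 0.95
conclusion = f_and(premise1, premise2)  # the conjunction takes the weaker value
print(conclusion)  # 0.8
```

Note that min/max is only one choice of combination rule; product rules and genuinely probabilistic semantics behave differently.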
in my opinion all truth is approximate: even in things like math and logic, where we should know that the consequences of axioms are what they must be. but those axioms can only be defined in terms of natural language, and natural language is itself only approximate. the trick is just that logic and math are among the few fields where the only possible errors would have to be in the communication of a small set of simple rules, which turns out to make them so nearly correct that we may ignore the approximation and still get results which are extremely often useful for anything which may be approximately described in their terms. but i'll admit it's hard to really say much about anything purely in this framework
Nice discussion. The concepts that you characterize as indeterminate, like swan, can be treated as open-textured concepts. Ordinary language is rife with them, and science usually starts with folk concepts from ordinary language. However, we can rationally reconstruct some of these concepts and converge upon greater rigor.
@@Continential Agreed.
Conceptual analysis and critique...sounds like a job for PhilosopherMan (or Woman, I was going for the joke) 😁
The validity of an argument allows us to check whether someone’s criticism of our conclusions affects our premises. Validity should be renamed the principle of the retransmission of falsity. If our argument is invalid then the criticism of the conclusion doesn’t affect our premises, which means that we have not opened ourselves up to learning whether our theory is false.
I think we just need to be clear about the scope of deductive logic. It's fine as a step in dialectic if the intended logical relationships are unclear to the person you're talking to. It's a tool for communication. What follows dialectically is of course whether the ideal premises are adequate, which really is just the soundness phase of analysis. 'True enough' is a matter of consensus appraisal in the context of each communicative process. The aim is to argue to agreement as to the more comprehensive and coherent system of ideals. Rationality is purposive and idealistic, but most importantly a dialectic process.
Is this a dialectic between Ideal/Platonic and Realist/Materialist positions? Just a general question; my formal philosophical education is decades in the past and mostly used on cognition, etc. Thanks for all friendly responses, regardless of position ☺
A thought that I have been stuck on for a while now is that all language is simply "Point-Grunt" communication. One points at a "thing" object/concept/noun, and then grunts a sound. Another looks at the pointed thing, also points and grunts the sound that he heard.
This is what all definitions are. Approximation is obviously a factor, in that the two are looking at the same thing from different angles, and they cannot hear themselves the same way they hear each other.
With modus ponens and deduction, I don't see it as anything other than nested definitions, which you seemed close to getting at. This logic is not measuring anything. It is all about meaning.
Are you familiar with WordNet? It basically defines all words into nested syllogisms.
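The "nested definitions" idea can be sketched with a toy hypernym chain. The data below is hand-written illustration, not the actual WordNet database (the real thing is accessible in Python via NLTK's `wordnet` corpus, which this sketch does not use).

```python
# Toy sketch of WordNet-style hypernym chains: each word points to a
# more general word, so nested "is-a" definitions act like chained
# syllogisms (Socrates is a man, a man is a mammal, ...).

HYPERNYM = {
    "socrates": "man",
    "man": "mammal",
    "mammal": "animal",
    "animal": "organism",
}

def is_a(word, category):
    """Follow the hypernym chain upward: is word a kind of category?"""
    while word in HYPERNYM:
        word = HYPERNYM[word]
        if word == category:
            return True
    return False

print(is_a("socrates", "organism"))  # True
```

The chain only runs upward, so the sketch correctly refuses the converse inference (a mammal is not thereby a man).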
In summary:
1. Words are vague, so no statement is ever precise
2. If something is roughly/generally true, its part might not be (some organisms live for centuries, so human life is really short; but “human life is short” is not necessarily true by itself)
3. If some things are roughly true, then their combination might not be (the temperature at the North Pole is not that much different from that of nearby places, step by step; yet the North Pole is lethally cold)
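Point 3 also has a probabilistic version, which can be illustrated numerically (the numbers here are made up for illustration): claims that are each individually very likely can combine into an unlikely conjunction.

```python
# Each claim is 90% likely to be true; assuming independence, the
# conjunction of ten such claims is far less likely than any one alone.
p_single = 0.9
n = 10
p_all = p_single ** n
print(round(p_all, 3))  # 0.349
```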
If one form of 'approximate truth' is uncertainty, then we do have a logic for that. The Bayesian interpretation of probability as a rational degree of credence can be used to create probability logic. Ernest Adams defines an argument as p-valid if the improbability of the conclusion does not exceed the sum of the improbabilities of the premises. In simple cases this means a valid argument cannot have highly probable premises and an improbable conclusion. The logic differs from ordinary classical logic though.
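The p-validity criterion described here can be sketched as a check on a single assignment of probabilities (Adams' actual definition quantifies over all probability assignments, so this only tests one instance):

```python
# Adams' p-validity criterion, for one probability assignment:
# the improbability (1 - p) of the conclusion must not exceed the
# sum of the improbabilities of the premises.

def p_valid_instance(premise_probs, conclusion_prob):
    uncertainty_budget = sum(1 - p for p in premise_probs)
    return (1 - conclusion_prob) <= uncertainty_budget

# Two premises at 0.95 each leave an uncertainty budget of about 0.10,
# so a conclusion at 0.90 passes, but one at 0.85 fails.
print(p_valid_instance([0.95, 0.95], 0.90))  # True
print(p_valid_instance([0.95, 0.95], 0.85))  # False
```

This captures the "highly probable premises cannot yield an improbable conclusion" idea in a couple of lines; the full logic differs from classical logic exactly where premise uncertainties accumulate.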
I call it controlled scepticism. It's like when you think of a mini you in your head, but you stop yourself from falling into an infinite regress of a mini you inside a mini you, and just keep it at one level. Or we could doubt every object that exists, but out of practical necessity we shouldn't.
If there is One True Logic, why should we expect it to be inapplicable to say, arguments in ordinary language? Or arguments about empirical theories? This may be question-begging, but if there is One True Logic, I'd expect it to at least accommodate those cases! The upshot is, I'd expect the One True Logic, if there is such a thing (though I don't think there is), to not be classical logic. But that still leaves a lot of possibilities on the table.
And to my eye, there are ways of taking approximation, imprecision, vagueness, etc. into account when we perform our reasoning, and that reasoning is still perfectly "deductive." We can just report our errors and uncertainties along the way. That's what we do in say, statistical inference. And there we find a perfectly deductive theory of non-deductive reasoning. Sometimes imprecision is actually very useful.
Another example: What's the referent of 'n' in "let n be an even number?" Well, it's not exactly specified. All we know is that it's some even number. So in our reasoning about 'n' we're not allowed to use properties which aren't true generically of all even numbers. So we can't conclude that 'n' is divisible by four, say. And that's good! (There are different models of what's going on here, and they track pretty closely with different models of vagueness/sorites problems)
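That restriction on generic reasoning can be spot-checked in code (my own illustration, with a finite sample standing in for "all even numbers"): a property is safe to use in reasoning about 'n' only if it holds for every even number.

```python
# A property is usable in generic reasoning about an arbitrary even n
# only if it holds for every even number; we spot-check over a sample.

def holds_generically(prop, sample=range(0, 100, 2)):
    return all(prop(n) for n in sample)

print(holds_generically(lambda n: n % 2 == 0))  # True: safe to use
print(holds_generically(lambda n: n % 4 == 0))  # False: true of only some even n
```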
And finally, what I'd say about classical deductive logic being an idealization is what I'd say about any other kind of idealization we use. It's merely instrumental. It's useful to reason "in the limit" because it has practical implications for deciding what to do and believe.
Also side note, I do like the idea that Quine was committed to denying conjunction elimination, because I know he wouldn't like that lol
I couldn't agree more. These are tools that should be aimed at and used in practical matters. Expecting that any logical system will produce an ultimate truth can only lead to frustration and disappointment plus a palpable danger of confusion and disconnection from reality, mind you.
It is true that approximate truth isn't preserved under logical manipulations of content; that is why it is imperative that a working theory of verisimilitude be created. Verisimilitude is the ratio between truth and content. Many people have simply abandoned the verisimilitude project (started, by the way, by Popper).
There has been some advance in the last couple of years; Mormann, for instance, has created a topological theory of verisimilitude that seems to get a lot of what Popper wanted out of such a theory.
This video alone is why probabilistic models of physics will eventually replace our current paradigm. The inseparability of bias, lingual encoding, and mathematical representation of systems.
hes wearing gloves inside
The logical operations themselves aren't approximations, but the variable representations are approximations of reality, or just of concepts in general; they don't have to have any ground in reality. "This statement is false" fails to account for the biological history of linguistic evolution as a byproduct of brain function. Its limited context is what makes the logical operations on the statement possible, thus allowing us to categorize it as an example of what we call a paradox. Without the reduction into simpler, workable chunks, we can't make any statements like "that's an example of a paradox", which is a concrete statement about the original paradoxical statement. Math in general uses this form of reduction to make things workable for math people; it doesn't say anything about the accuracy of its representations, so they are mostly discussed in the context of pure math without any attachment to actual reality. When you bridge the gap between the thought process and its relationship to reality, as in physics, that's when things become a bit wishy-washy, handwavy, etc.
What about the claim that "all the claims we make about the world are approximately true"? Whether that's an a priori or empirical claim leads to different problems. But insofar as it is self-referential, I think the crisis is a bit overstated, at least with regard to formal logic and its useful application to natural language. I wonder whether Saul Kripke's Naming and Necessity might shed light on the problems of mapping what is rigid onto what is approximate. Thanks for this thoughtful video!
Obviously if you apply the claim to itself, you can show it to be false by _reductio ad absurdum_ .
The speed of sound in the Sydney Opera House is a probabilistic quantity.
The idea that approximate truth can break formal logic is an interesting one, but I would argue the imprecise use of language doesn't make a strong case for idealizations being wrong, or even for them being of limited use (and by limited I mean to a degree that would significantly alter the reasons why we use them). I would say that at best, it makes a case for strengthening the precision of the natural languages we use.
To go back to the argument about the speed of sound in a particular room: just because there is no single speed of sound for the entire room (unless its temperature gradient field is null), it doesn't mean that there is no such thing as the speed of sound for particular pockets of the room where the temperature is exactly the same. It also doesn't mean we can't say the average speed of sound in the room is X. My point here is that just because there exists a continuous field of "speed of sound" in the room, that doesn't mean the whole concept should be thrown out. It's a bit of an equivocation fallacy, in my opinion, to equate the value of the imprecise language used with the value of the concepts underlying the context of a conversation. To give an example: just because I use the word blue to describe turquoise doesn't mean turquoise or blue are invalid or don't exist as colors, or even that the idea of precise color descriptions is invalid. At best, it means I used imprecise language and should be more precise next time I speak, if my intention is to be more consistent.
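The "pockets plus average" point can be sketched numerically, using the common linear approximation for the speed of sound in dry air, v ≈ 331.3 + 0.606·T m/s with T in degrees Celsius (the pocket temperatures below are hypothetical):

```python
# Each temperature pocket has its own exact speed of sound, and we
# can still report a meaningful average for the whole room.

def speed_of_sound(celsius):
    # Linear approximation for dry air, valid near room temperature.
    return 331.3 + 0.606 * celsius

pockets = [19.5, 20.0, 20.5, 21.0]  # hypothetical temperatures in the hall
speeds = [speed_of_sound(t) for t in pockets]
average = sum(speeds) / len(speeds)
print(round(average, 2))  # one number that is "true enough" for the room
```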
Yes, if you know what the 'room' even is and all conditions in it, you can be more precise. But can you ever know all conditions to get a real answer, or is an approximation more meaningful?
The point, as I read it, is not to dismiss concepts or say they don't exist. Think rather of newtonian mechanics - they work, so long as you go nowhere near light speed. But logic demands that something is ALWAYS true to be true. So either we don't use it at all or we admit it's an approximation and technically, you can't use it in logic.
@@Leartin Not without computational explosion. Hence ideal laws of science that treat unmeasured variables as inconsequential to the desired measurement.
On the first point about approximate truth: abstract deductive reasoning, or reasoning based on abstract concepts, can be absolute, as in math; but once applied to the real world, as with math applied in physics, it becomes uncertain.
Approximate truths aren't a problem if the part that isn't strictly true is irrelevant to the argument, because if you say:
a ^ b, therefore a, this holds even if b is false. This means that even without strictly defining Socrates or any other concept, you can make a valid deduction as long as the relevant part is preserved.
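That inference can be brute-force checked (an illustration, not from the comment): over every truth-value combination, whenever "a and b" holds, "a" holds too.

```python
# Conjunction elimination: from "a and b", infer a. Enumerating every
# assignment confirms the conclusion is true in every case where the
# premise is true, i.e. the inference is valid.
for a in (True, False):
    for b in (True, False):
        premise = a and b
        if premise:
            assert a  # conclusion follows wherever the premise holds
print("conjunction elimination is valid")
```

When b is false the premise simply fails to hold, so the inference is never put to the test there; validity is untouched by the dubious conjunct.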
My private conception of logic is entirely synthetic and approximate: we notice patterns in the world, we make up games where relationships between those patterns become moves in the game, and we play. That's logic and mathematics.
But logic and mathematics aren't real; they're just games. And the universe is impossibly creative. It can always come up with something that doesn't fit into the parameters of your game.
"There is an idea of a Socrates - some kind of abstraction - but there is no real him - only an entity, something illusory."
It sounds like you’re saying that deduction can’t come from vague premises, so there’s something wrong with deduction itself.
If that is what you’re saying, it doesn’t sound valid. Maybe the problem is just vagueness. Specify what you mean, then do the deducing. Problem solved. Once the exact conditions of the concert hall are specified, then there will be an exact speed of sound within it. Whether or not anyone can correctly calculate it is another matter not to be conflated either.
I don't think there's anything wrong with deduction. The point is that deductive validity preserves truth, not approximate truth, so that if we are aiming only for approximate truth, deductive laws do not strictly speaking apply. "Strictly speaking" is an important qualifier there: deductive logic is often extremely useful as an idealization.
>> Once the exact conditions of the concert hall are specified
In my view this is impossible to do, partly because there is no fact of the matter.
@@KaneB Well, supposedly, there will be a reason why you want to know the speed of sound in a concert hall under a set of conditions. Once you know what those conditions are, you can answer the question. You are correct that there are an infinite amount of conditions, but the key here is that they do not matter.
Nobody wants to understand absolute truth. It would be as undifferentiated as reality without thought or conceptualization. It is wrong to say that there is no fact of the matter: there is no useful fact of the matter.
@@KaneB ah- I see! Thanks for responding
If descriptions are dependent on definitions, then an "object" is as defined. Definitions contain the subset of all possible values inherent to the object or word being defined, automatically. To stay consistent, we have to remove our perceptions of existence from the idea of existence. An extreme example: if, on average, we perceive an object to have qualia similar to those of a shoe, but the creator of that object has defined it as belonging to the class of tables, then it is a table, no exception.
I can't get past a sneaking suspicion that philosophy, though not just philosophy, persists not for its logical utility or any other downstream value but because it provides a particular (perhaps peculiar) transmission medium for the generatively self-propagating invocation of more language. Philosophy is a language game without closure because this is precisely how any complex adaptive communications system assures sustainable continuity in any context.

Truth as a concept is eminently more valuable as a function of its uncertainty in anything other than the most trivial of circumstances, and the competitive narrative communications systems that develop around it are implicitly, perhaps unconsciously, oriented towards an asymptotic approach to closure. This is in many ways a corollary of the notion that psychological selves are inversely (or is that reflexively?) defined by a goal-seeking that, if it ever arrived at closure and resolution, would entirely invalidate the difference and distance by which that self persists. So complexity in physics, biology, intelligence, language and logic persists as though it were possessed of an intentional agency of not finding resolution, truth, certainty, whatever.

The point, perhaps, is that none of these systems actually possess impermeable or inviolable agency and certain identity; this is their hidden raison d'être. Logic is incomplete because it has to be in order to persist, and your linguistic, competitive and differential reflex to my ad hoc assertions confirms that the only winner in this game is language.
TL;DR: Laws persist because they are faulty.
I do not believe that he has the patience required of someone who can produce a logical thought whereas the limitations of our human experience collectively prevents us from developing that level of diligence.
Thought truly cannot be quantified unless we create multiple systems that allow us to measure it in a way we can comprehend. As a result we get language and arithmetic. Unlike precise numerical values, which can be universally replicated across space and time, language is limited by the universal acceptance of certain words within the given lexicon of a language. Despite not being strictly finite, language is merely a way of transmitting information about specific emotions. As a result, logic is shaped by the limitations of the English language, because we are forced to think in this manner by using specific words to convey one's view. Only when we can transmit our brainwaves and create a translation system will human communication evolve. However, I'd argue that our vocal systems are perfect by design and we must create a new standard of communication to break free of the limits of logic.
Could it be that Reason is flawed in that if you push it to the nitty gritty, it always seems illogical?
For example, there is a concept that is true enough about Socrates: that he was a man. Whichever definition you use for man, it is true enough to understand the logic that Socrates is mortal.
I think there might be a way to get exactly true descriptions of the world (think of something like "deductive" reference rather than inference), but just like with the typical problems that are raised about deductive inference, it probably wouldn't be super "informative" and we may not get the rich theories that we see in science.
"Some old guy, going around annoying the people of athens"
lmao i wish i could have been there just to see/hear some of those conversations.
So real
Forgive my naivety as it's been decades since I studied quantitative theory in logic; but I offer that the main (dare I say zen-like-moment) tenet that I gleaned was that *everything* inevitably leads to if/then statements as the only real conclusion to be drawn from any discourse. Perhaps I over-simplified or got it totally wrong. The only thing I feel certain about all that is that I've lived more than 30 years from that vantage point, as I could never conclude otherwise, myself. It seems to be quite relevant to this discussion. What's the scope of deduction at any given moment, as dust specks in a dust storm.
Great stuff man keep it up I love this kind of content
I wonder how much of the logical issues and solutions aren't simply linguistics tricks. Terence McKenna argued that language preceded meaning, and, after delving into Wittgenstein's ideas, it seems to me that Terence made an important point that even the deductive/inductive models are simply a particular linguistic trunk trying to convey information.
So I guess the limit of logic is pretty much bounded by the human ability to validate the true nature of the premises, and since we can only describe the approximate truth of the premises themselves, we can only make do with the approximate truth of conclusions drawn from those premises.
We could see logical arguments in the same way we see goods in a market. We could say logical arguments are goods exchanged in a free market of logical arguments. We contribute arguments to the market and receive arguments from the market. We get to use our logical arguments for as long as they can hold up against the forces of nature and discard them when they wear out. This is how we treat clothes, houses, and other goods on the market. With this view, we don't care about whether or not the logical arguments are true, only whether or not they are useful. This is a pragmatic view of logic. Rather than caring about the truth of the argument, we care about what it can help us achieve.
the impression this gives me is the following; please tell me if I misunderstood you:
"if approximate truths or indeterminate concepts, combined with correct logic, however exact it is, and true premises, however accurate they are, sometimes lead to incoherent or false statements, and since all we have is approximate truths and indeterminate concepts, it follows that science and the conclusions or theories it reaches are fundamentally flawed, a bit like math without its rigour"
I'd argue that you're missing here perhaps the most fundamental thing about science, what makes it what it is: all theories, precisely because one knows what you're saying here, hold not on account of the mathematical proofs that lead to them, but because they suit experiment!
I feel like that suffices to render "null" (in the sense that it's not saying something new or "interesting", which shouldn't be taken as an insult of course) the point of your reflection (which I also feel is either to say math is greater than science, or that philosophy overarches it; I think both are incorrect), as correctly explaining what was observed, and predicting what is to be observed, is what discriminates good from bad theories in science.
As such, the scientific approach (at least for the field I know best, physics) is just about associating indeterminate physical concepts with mathematical objects and using indeterminate truths about those physical concepts to infer conclusions using the exact math, conclusions which, once translated back to the world of physics, as previously stated, obviously don't hold the same certainty as their math counterparts.
Then one decides whether or not the original mathematical correspondence and the approximate truths were "good enough" by checking whether experiment is coherent with the theory. Repeat.
That's why it always gives me a smirk when philosophers try to fold science into their field of thought; there's really not much to do! All they're really doing is capturing the scientific approach, not science! That's the procedure science follows; it's easy, makes sense, and works more accurately than anything else we've ever had. And although philosophers focus on that, it's really not where the difficulty and the beauty of science reside! In physics at least, they come from the intuition one gets about the world and the entanglement of those intuitions to form and create new understandings, an epiphany of sorts!
That a linguist is able to spell the words music, philosophy, or physics doesn't mean he can play the first, practice the second, or understand the third; little more does it mean linguistics 'contains' music, philosophy, or physics, and it certainly doesn't overarch them!
As I said though, this is only my impression and maybe I misunderstood what you're trying to explain.
I interpreted you as saying that natural language sentences are at best approximately true, but if that's an accurate description of what you're stating, then I don't know what this has to do with true simpliciter well-formed formulas and false simpliciter well-formed formulas in logic (at least if we are dealing with classical logic). These formulas are certainly not approximately true in any model-theoretic interpretation or syntactic proof, and neither are the truth-functional operators (or, and, negation and entailment) approximately true in the formation rules of logic. It seems to me there's an equivocation here: truth-values (whether interpreted in the semantic way or in the syntactic way of truth-preservation of well-formed formulas) work completely differently in this sort of assertion-making than in natural languages, so these are not really limits of logic but of natural languages. I find a similar equivocation in the arguments of Gillian Russell's papers on logical nihilism.
Besides, some of the points you raised regarding proper names don't seem satisfactory to me. Take the proper name Socrates: you say that the boundaries of such a concept are vague because different people have somewhat different thoughts in mind when using the concept, and even the same person will have different thoughts in mind across time. But this doesn't show that the concept of Socrates is vague, but rather that the usage of the concept is vague, if one maintains that reference and usage can be distinguished. One could maintain that the concept of Socrates has a fixed reference as "that very object which was thus originally baptized as Socrates". And even if you can't tell whether that object exists, you can amend the theory of meaning with something along the lines of Fregean senses for empty names, where empty names are mapped to the singleton of the empty set. It seems to me this shows (with some help from Leibniz's law) that vagueness is not to be analyzed in terms of reference and pairs of objects/properties, but rather in terms of some of the ways we use the language, or certain (but not all) concepts and sentences therein.
Now i want to see data charted from a speed of sound sampler in the Sydney Opera House, which takes a sample every hour for 12 months.
So can an approximate truth in logic yield a valid deduction? Doesn't the fact that the individual object isn't intrinsically defined within its own concept invalidate an assumption within the deduction?
milk is black, therefore any graph with six nodes has to have some three of them all connected or all disconnected
the premise doesn't even have to be approximately true; the implication is true as long as what the arrow points to is true. but then why did we even state the approximate truth?
if you're not going to use deductive logic, you may as well omit the long talk before making your statement
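The vacuous-truth point in this comment can be checked directly. Material implication "p → q" is definable as "(not p) or q", so a false antecedent like "milk is black" makes the conditional true regardless of the consequent (the graph fact referenced is the Ramsey result R(3,3) = 6):

```python
# Material implication: p -> q is (not p) or q.
def implies(p, q):
    return (not p) or q

milk_is_black = False
ramsey_fact = True  # any 6-node graph has 3 mutually (dis)connected nodes

print(implies(milk_is_black, ramsey_fact))      # True: consequent is true anyway
print(implies(milk_is_black, not ramsey_fact))  # also True: antecedent is false
```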
Great video as always!
All physics and mathematics are not absolute and are continuously evolving. Theories are refined over time when new observations contradict established scientific notions. Theories are mental paradigms of a sort that help us understand culture and civilization in a broader context.
this sounds more like a "limitation" of natural language than logic.
9:57 if the premises are approximately true, then it is possible to derive an absolutely true conclusion. the distinction between absolute truth and approximate truth is meaningless, as approximate truths can be used to express absolute truth and vice versa
P1 Concepts are indeterminate
P2 If concepts are indeterminate then the results of logical manipulation are indeterminate
∴ The results of logical manipulation are indeterminate
In other words: if p then q. But the results of logical manipulations are indeterminate, so we can't believe in the validity of "if p then q"; therefore we shouldn't doubt the determinacy of logic. But if we shouldn't doubt the determinacy of logic, then we *can* believe in "if p then q"...
While the variability of precision does of course affect the truth outcome, it does not matter if it falls within the range of acceptable significant digits; hence the acceptance of the +/-.
Real life is quite forgiving of being a little fuzzy around the edges.
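The tolerance idea above can be sketched with Python's `math.isclose`, which accepts an explicit absolute tolerance; the two values below are made up for illustration.

```python
# "True enough within tolerance": compare values up to an accepted
# margin instead of demanding exact equality.
import math

measured = 343.57
tabulated = 343.42  # hypothetical reference value

print(math.isclose(measured, tabulated, abs_tol=0.5))   # True: within +/- 0.5
print(math.isclose(measured, tabulated, abs_tol=0.01))  # False: too strict
```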
Just saw your channel. I really like this content. Can you name 12 books that can lead me to an understanding of what your channel is about?
Thinking illogically gets the answers you need when conventional methods are no longer available
At its foundation, do you think logic is prescriptive or descriptive? I personally think its prescriptive which gives us issues such as the white-black swan dilemma that you mentioned. I'm not convinced that capital "L" Logic exists independent of the human mind (such as perhaps how Bertrand Russell argued for with his Logicism). What do you think?
I enjoy your work and thx for coming on our show a while back👍
Perfectly representing a system by subjectively selecting variables for their relevance to that system is a fallacy. Asking for the speed of sound is a semantically broad request. When we model systems, the question would look more like "what's the best way to construct the opera house in order to maximize sound quality?". In this case the speed of sound in the opera house is a consequence.
Our modeling of systems will likely always be an approximation, and language will always be insufficient for representing reality, simply due to semantics and interpretation.
Thanks for this video.
Would or could the value of the speed of sound inside the opera house be something like the entire set that contains all possible measurements within all possible contained definitions of the opera house object?
Good video.
Speed of sound depends on air pressure/density. Or not?
Well, from what I understand, it seems to me that this approach to the concept of approximate truth is a new way of relativizing the concept of truth, which, as we know, was received and accepted by the Second Vatican Council and certainly constitutes the biggest mistake that currently affects and degenerates our culture. As I have been away from the study of logic for many years, I think the concept of tautology is what is applied to consider a formula or proposition a reliable bearer of truth as it is in itself, as the Greeks adopted it and, later, much of human history.
6:54 - that's when I realized the answer you are skirting. .... it's all deductive. Your starting approximation is itself a deduction. And I would say, yes, any deductive reasoning must follow the approximation. If you do a thought experiment you have to adhere to the rules you set up for the experiment.
But the bigger point is that you are letting the imperfect nature of language weigh on your progress. Things don't have to be described in language in order to understand them. In fact, it's the reverse; with understanding come the words to describe. And, THAT is why nobody else will know what you are talking about, lol
Peace
You articulate good