Let's imagine there is a "Planet A". This planet is inhabited by humans and has a moon that orbits it. Every inhabitant of Planet A "knows" that the moon is made of "matter X". They know this from observation, and because they landed on the moon and took samples, and so on. There is another planet, "Planet B", which is an exact copy of Planet A with its inhabitants and moon, and its people also "know" that their moon is made of matter X. But there are two differences between Planet A and Planet B. The first is that Planet B's moon is made of cheese. The second is that there is a small but powerful unobservable demon, let's call him Gettier's demon, whose only job is to convince the inhabitants of Planet B that their moon is made of matter X when in fact it is made of cheese. He is so powerful that even if the inhabitants land on the moon to take samples, he somehow makes them think that their moon is made of matter X. So here we have the same planets with the same people who have the same education, experience, thoughts, etc. But "the moon is made of matter X" is knowledge on Planet A and not on Planet B.

Although it's a bit confusing, I personally don't think the problem is in the definition. "Justified true belief" is a good definition; the problematic part is the meaning of the word "true", because there is a difference between what we think "is true" means and what it actually means. There is "the absolute truth", "the real truth", or just "the truth": the truth about everything, what things "really" are and how things "really" work. It is constant and unchanging. It is the holy grail of knowledge. It works in a binary way, true/false. But unfortunately it is unreachable, or at least unprovable: we will never be able to prove that we have found the absolute truth about something, even if we have. Then there is "the relative truth", "the changeable truth", or "our truth": 1) the personal truth, what a person thinks things are and how they work, which also works in a binary way; 2) the collective truth, the union of all personal truths. The collective truth is the one we actually work with, and it does not work in a binary way: it is an interval between false and true.

When it comes to the definition of knowledge as "justified true belief", we naturally think that the "true" part of the definition refers to the absolute, unchanging truth. But it does not, because we do not actually know what that is and never will. It refers to the collective truth. So knowledge is also not binary, with something being knowledge and something else not. We should rather think about knowledge and its validity: something is more valid knowledge and something else is less valid.

If Jade says that she knows a sheep is in the field, three things must hold for her to have knowledge. First, she must believe that a sheep is in the field. Second, she believes that her belief about the sheep is true, and since she is the only person who knows about (or is interested in) the sheep in the field, it is 100% true for her. Finally, she must have formed her belief by looking outside and seeing a sheep, or hearing a sheep, or something like that. That is 100% valid knowledge even if there was a wooly dog in the field. When Jade's friend Kevin comes to visit, Jade tells him about the sheep in the field, and he trusts her that there is a sheep in the field, the knowledge is still 100% valid, even if there was a wooly dog in the field. Then Jade's friend John comes to visit and Jade tells him about the sheep in the field, but John actually came through the field and saw the wooly dog, so he won't believe her. The knowledge "there is a sheep in the field" is now less valid. The whole universe for that knowledge is three people (Jade, Kevin, and John); other people in the world do not know about (or are not interested in) that sheep in that field, so they do not count.

Two of them think "there is a sheep in the field" is true, but John thinks it is not. So the validity of that knowledge is 66.66% (2/3). And there is a new piece of knowledge, "there is a wooly dog in the field", whose validity is 33.33% (1/3). Later, when John brings them to the dog and shows them that it is not a sheep but a wooly dog, they change their opinion. The validity of the first piece of knowledge falls to 0% and the validity of the second rises to 100%. And we can say that the knowledge "there is a sheep in the field" was 100% valid but is no longer valid. I think the world works that way, but that is just my opinion. This is the way we moved the Earth out of the center of the universe and out of the center of the solar system, made it the third planet of the solar system, and made it a globe.
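The validity arithmetic in this comment (2/3, 1/3, and so on) amounts to counting believers within the relevant "universe" of interested people. A minimal sketch of that idea; the names and the membership sets are just the comment's own example, not a formal theory:

```python
# "Validity" of a piece of knowledge, per the comment above: the fraction
# of the relevant universe of people who hold the belief.

def validity(believers, universe):
    """Fraction of the interested 'universe' that holds the belief."""
    return len(believers) / len(universe)

universe = {"Jade", "Kevin", "John"}

# Before John shows them the dog:
sheep_validity = validity({"Jade", "Kevin"}, universe)  # 2/3, about 66.66%
dog_validity = validity({"John"}, universe)             # 1/3, about 33.33%

# After John shows them the wooly dog, everyone updates:
sheep_validity_after = validity(set(), universe)        # 0.0
dog_validity_after = validity(universe, universe)       # 1.0
```

Note how the same belief moves smoothly between 0 and 1 as people join or leave the set of believers, which is exactly the non-binary behavior the comment argues for.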
I "know" that I've just watched a Jade video, but do I really know that? If asked to justify it, I'd say: this is Jade's channel, the presenter looked and sounded like Jade, and she said "I'm Jade". But what if she's Jade's evil twin, who has remained a secret until now? The "justified" part of "justified true belief" has always felt like movable goalposts, because no matter how solid your justification appears to be, there's always some level of deception or hallucination which can cause you to be wrong.
Certainty is dual to uncertainty -- the Heisenberg certainty/uncertainty principle. Absolute certainty is dual to absolute uncertainty. "Synthetic a priori knowledge" -- Immanuel Kant. Analytic (a priori) is dual to synthetic (a posteriori) -- Immanuel Kant. Knowledge is dual according to Immanuel Kant. Absolute or objective knowledge is dual to relative or subjective knowledge. Absolute truth is dual to relative truth -- Hume's fork. "Always two there are" -- Yoda.
Either way, you still have the impression that you know it, and only someone who knows more of the big picture may be able to prove your knowledge is faulty.
Truth and knowledge only mean "consistent with expectations at cursory examination". Nothing holds up when actually examined. Don't worry about it; it's just our adaptation to reality, like a long neck for a giraffe: it gets you to the high leaves, but it doesn't actually say much about reality except in our particular case.
The efforts of channels like yours that upload accurate subtitles usually go unappreciated, but I want you to know that they're highly appreciated by many people 🙂♥️
When I studied philosophy, the Gettier Problem really stuck out to me as one of those intriguing challenges to knowledge, and led me to study the philosophy of ignorance. This idea guided how I fixed computer problems for many years. I fixed many problems, but sometimes I didn't fix the problem I thought I was fixing, which would drive me down a rabbit hole to figure out the actual problem and the actual solution; I wanted genuine solutions.
I remember that in high school philosophy class, I came up with the Gettier problem on my own as an obvious objection. That's the point where I decided 90% of philosophy was BS.
@@darrennew8211 90% and possibly more of our thinking is BS. That's the main challenge of life. Switching from philosophy to, say, engineering or medicine or law or business, doesn't change this fundamental fact. It's only a temporary diversion. In my case, I switched to math and engineering and that diversion kept me busy for a couple of decades, but, finally, I am back to BS ;-)
One thing that irks me about the whole Gettier Problem is that it requires an underlying truth. The people involved don't know that what they have is not knowledge until they find out that they were wrong. So to find out that they didn't have knowledge, they actually need to acquire knowledge. In my opinion it would also mean that there cannot be any scientific knowledge, because most scientific theories cannot really be proven; they can only be disproven (by finding an example that breaks them). So, in my humble opinion, the Gettier Problem was solved, but by physics. The theory of relativity and the uncertainty principle establish that there simply is no universal truth, as some aspects are necessarily in the eye of the observer. So justified true belief is probably the best we can do. (Maybe you can add something along the lines of "P is true given the best information we can have at this moment".)
Socrates!
Indeed. The Socratic paradox points to a core truth. Knowing what we don't know is the best kind of knowing. Why? Because, we are fooled by fake knowing almost all the time. The split human mind deceives. Perception is faulty. When we believe our thoughts, we are conflicted and end up suffering. The way out? Question the thoughts and beliefs that cause you suffering now, instead of fighting the world out there. Be still. 🕊️
A lot of these apparent "Gettier problems" actually disappear when you're more semantically careful. And indeed you must be very careful when mixing logic and normal English language (or any normal, non-formal language). Just as a trivial example, the person in the desert who thinks there is an oasis ahead doesn't just have a general belief about the existence of water somewhere in the desert: there is a lot of context attached to this belief, which we can all appreciate. This belief attaches the existence of water to a specific, approximate position (or direction and distance), and point in time (it exists "right now" and not 1000 years ago), etc. When you take care to describe the full semantics of your beliefs, it's a lot easier to say whether they "map" meaningfully and truthfully to reality.
You also have to consider the DESIRE for water. And maybe, because there's water under it, the AIR is cooler, so you don't get that heat-wave effect. So even though your eyes are seeing this effect, your mind doesn't know how to process it, so it gives you the most logical conclusion: that something in the desert is out of place, such as a water source.
I see your point but I disagree that it alone solves the problem. You can know facts with varying amounts of detail/context, and this only explains away that some of those more detailed facts were in reality false. For example, I know an object called the moon exists; I know the moon is an object orbiting earth; I know the moon is _this particular_ object orbiting the earth. These are all separate facts, even if they are somehow related. For the desert example, I know there is water in this desert; I know there is water near me in this desert; I know _this particular_ thing I see in the desert is water. In that example, only the last fact is actually false. The first two facts are still true, even if they were derived by somehow incorrect methods.
I think the desert one was just an analogy, and an actually better example would be this: a person thinks that atoms are discrete stuff, and then in a new theory they are just fluctuations in a field. If both theories make the same predictions, how do you know which is right? Our knowledge of the world might just be a tool which works, while the actual real world may be completely different. Another example: when we wake up, we assume that it's our stream of consciousness that continued from the previous day, and that we didn't just die in bed when we went to sleep.
To riff on the “AI predicting planetary orbits” comment: people do this automatically for sub-orbital projectile motion all the time, think of catching a ball. You don’t think of Newton’s law of gravitational acceleration and compute the trajectory of the ball to predict where it will end up, instead the neural network in your head observes some data points and extrapolates based on prior experience. We might “know” various physics equations that describe the world around us, but we don’t use that knowledge to accomplish tasks like playing catch, instead we use the experiential knowledge gained from playing catch in the past and apply it to the current game. In that sense the AI that Jade described is much more human like in its behavior!
Yes, and in principle there is no reason an AI couldn't come up with equations describing the motion if that's what it was asked to do. But in this case it wasn't (hypothetically) asked to do that. Furthermore, I think the feedback nature of the transformer architecture of modern neural networks is heading in the direction where the AI may decide to come up with those equations unasked.
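The extrapolation point in this thread can be made concrete: a quadratic fitted through a few observed positions predicts the rest of a ball's flight without the fit ever "knowing" Newton's laws. This is a minimal sketch; the sample times and launch parameters are invented for illustration:

```python
# The "brain as extrapolator" idea: generate a few observations from the
# true physics, then fit a quadratic through them. The fit never sees g
# or Newton's laws, yet it predicts the future trajectory.

def observe(t, v0=20.0, g=9.8):
    """Ground-truth ballistic height (the 'real world' the observer samples)."""
    return v0 * t - 0.5 * g * t * t

# Three observed (t, y) samples -- all the "brain" gets:
ts = [0.2, 0.4, 0.6]
ys = [observe(t) for t in ts]

# Fit y = a*t^2 + b*t + c exactly through the three points
# (Newton's divided differences, solved by hand).
(t0, t1, t2), (y0, y1, y2) = ts, ys
f01 = (y1 - y0) / (t1 - t0)
f02 = (y2 - y0) / (t2 - t0)
a = (f02 - f01) / (t2 - t1)
b = f01 - a * (t0 + t1)
c = y0 - a * t0 * t0 - b * t0

# Extrapolate to a time the fit never saw:
t_future = 2.0
predicted = a * t_future**2 + b * t_future + c
actual = observe(t_future)
# predicted matches actual here because the true motion really is quadratic
```

The fitted coefficients recover -g/2 and v0 without those symbols ever being named, which is roughly the sense in which a trained network "knows" the physics without knowing the equations.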
I have been involved with epistemology and theories of knowledge over the past few months, starting from Descartes and going up to Wittgenstein. I'm fascinated by these types of questions. Thanks for making a video around these ideas; I believe you did a great job, and I hope you make more philosophical videos.
I think the real issue is the assumption that knowledge is binary -- that you know something, or you don't know something. I think the Bayesian approach can be applied. You can know things with a level of certainty. The stronger your evidence is, the more sure you can be that you have knowledge. This is, in a sense, equivalent to the modification that Jade dismissed as impractical, but I think it's insightful when applied to alternative forms of knowledge like AI. I would say that Google Translate knows patterns that have a level of correspondence with French. And really, what does it mean to know French? Does it mean you know everything about it, or does it mean that you can communicate in it? We should be more precise about what we EXPECT knowledge to mean before we can start nitpicking on how to define it.
I think you nailed it. There are only degrees of certainty, since there is always the possibility of information you don't have or if you are misinterpreting what you're perceiving. So ultimately, you don't know anything 100%; troubling at first, but then you get used to it. AT THE SAME TIME, claiming that we don't know anything just because we can't hit 100% is kind of impractical. I find I can trust that the sun will rise, swans are pretty consistently white, and I'd better be near a restroom after Taco Bell.
Binary implies duality. Certainty is dual to uncertainty -- the Heisenberg certainty/uncertainty principle. Absolute certainty is dual to absolute uncertainty. "Synthetic a priori knowledge" -- Immanuel Kant. Analytic (a priori) is dual to synthetic (a posteriori) -- Immanuel Kant. Knowledge is dual according to Immanuel Kant. Absolute or objective knowledge is dual to relative or subjective knowledge. Absolute truth is dual to relative truth -- Hume's fork. "Always two there are" -- Yoda.
@@hyperduality2838 Binary implies duality, but it does not imply GENERAL duality. It implies a very specific kind of duality with two discrete states. I see the point you're trying to make, philosophically, but the way you stated it suggests that it's correct to call knowledge "binary" -- but the rest of your message is talking about applying transformations to propositions to derive different kinds of truths instead of a discrete "yes" or "no".
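The "degrees of certainty" idea in this thread can be illustrated with a one-step Bayesian update on the sheep belief. This is a minimal sketch; all the probabilities are invented for illustration:

```python
# Bayesian sketch: how confident should you be that there is a sheep in
# the field after seeing a sheep-shaped thing? Certainty rises with
# evidence but never reaches 1.0 -- knowledge as a degree, not a binary.

def bayes_update(prior, likelihood, evidence_prob):
    """P(H|E) = P(E|H) * P(H) / P(E)"""
    return likelihood * prior / evidence_prob

prior = 0.5                 # P(sheep in field), before looking
p_sight_if_sheep = 0.9      # P(sheep-shaped sighting | sheep present)
p_sight_if_none = 0.2       # P(sheep-shaped sighting | no sheep, e.g. a wooly dog)

# Total probability of the sighting (law of total probability):
p_sighting = p_sight_if_sheep * prior + p_sight_if_none * (1 - prior)

posterior = bayes_update(prior, p_sight_if_sheep, p_sighting)
# posterior is about 0.82: more confident than before, still short of certainty
```

On this view, "knowing" is just holding a belief whose posterior is high enough for the purpose at hand, which matches the comment's point that we should first decide what we expect "knowledge" to mean.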
One translator has 3 boxes. The 1st is the box where you type in your words. The 2nd box translates it, and the 3rd box translates the 2nd box's text back into your language. I found some VERY interesting results!
I think part of the problem is that we're treating "knowledge" like an individual activity. If you treat knowledge as a purely individualistic endeavor, then yes, you get Gettier problems. But if you remember that humans are communal and you approach knowledge as a communal thing, then you mitigate Gettier problems. The problem with Gettier problems is that they're predicated on a gap between what you think and what you perceive, but if you put a second perspective in there, then they can correct your belief and, together, you both develop a stronger belief. This is how science as an institution works. V-Sauce did a video about how reasoning is a social activity some time ago; I think this might be an extension of that. It seems to me that Gettier problems arise as a consequence of post-modernism stressing individualism and individual ability while demoting or even dismissing the group. If I were to take this further, I would suggest that this is one aspect of what a society is: an institution developed by humans as a means to figure out what "knowledge" and "truth" are, because we are ill-equipped to do that as individuals.
Exactly... each case was easily verifiable by additional observations. The greatest threat to science is group think and pressure to conform to the idiotic (oxymoronic) notion of settled science. Remind you of anything?
@darten by that mode of thinking you have just denounced all religion. While I respect that view, could you not consider that their "science" is simply semantics due for an update?
I subscribe to the "we can't know anything about the external world for certain" camp. I view beliefs as probabilities, and from a practical standpoint I'll use the phrase "I know" for beliefs I'm confident in. But there should always be room for doubt, while still applying the most pragmatic option in a given scenario. After all, the binary "right or wrong" this question focuses on is far less important than how it affects our actions and beliefs.
I fail to see how our actions, or any influence upon them, are "important". That viewpoint is rather like a flea talking about how important he is to the entirety of the genus Canis. The universe was here more than 13 billion years before the first human ever showed up and will be here more than 13 trillion years after the last human is nothing but vapor. Belief, or any effect upon it, is completely unimportant to anything other than a rather neurotic human. Philosophy, and in particular science, tries to deal with much more "important", or at least grander, topics.
A lesson from my teacher, who spoke about using the correct methodology for interpreting scripture: "A broken clock shows the correct time twice a day." But that's not a reason to prefer a broken clock to a working one; it is the working clock that you choose to determine the correct time. So it is with interpreting an observation. If you have the correct method for saying something about your observation in accordance with other observations, you might be right: for example, there is a sheep behind a bush, and you go out and observe the field to find the sheep and confirm your hypothesis, or to confirm that the other "sheep" you saw was just a wooly dog. If your interpretation of the observation was insignificant, like your being in the room not affecting how future generations will think about wooly dogs, then only an interpretation that changes the way people approach their observations can be held as significant, whether it is correct or incorrect. If it is incorrect, then holes will appear through further observations and investigations/discoveries; if it is correct, it will deepen the insight of the new theories that form from it. How I see it (for now; maybe it will change over time) is this: there is no knowledge of anything by itself. It all depends on different perspectives, on the way we frame our intuition and outlook in order to reach certain conclusions. Those conclusions are governed by laws whose nature we may never understand, but whose effects we can recognize or at least guess at, in the conclusions that we try to meaningfully relate to one another. From the way things appear, we build our intentions and verdicts and gain some insight into the nature of things, though in ways we cannot fully determine, because our "dream language" (symbols, associations of certain semantics, etc.) also influences how we will see such insights, the world and its nature around us, and how we will feel about being integrated into it, at different layers of understanding.
Great video, Jade! I'm sure I'm not going to revolutionize centuries of philosophy here, but to give it a shot, it feels to me like you could get around this with recursion by simply requiring that the justification _also_ be true. Like, your belief that there is a sheep in the field is premised on you knowing that the thing you're looking at is a sheep. That's also a knowledge claim, and can be independently evaluated as a justified true belief. In the case where it's actually a dog, it's not true, and thus not knowledge, so any other further "knowledge" built on top of it also doesn't count unless it has independent supporting evidence that _is_ true. (Say, you hear sheep noises coming from behind the bush.) This, of course, creates an infinite chain of justifications where you can never really be said to definitively "know" anything, only that it cannot be demonstrated that you _don't_ know it, but hey, that's philosophy. (And it's also technically accurate, if we want to go down the rabbit hole of, like, solipsism and whatnot.) I dunno, I'm probably missing something obvious that breaks this approach, but it was a fun thought experiment to play around with so I thought I'd share!
That's brilliant. I think that applying recursion is the most logical way of defining knowledge. At the end of the recursion there are the axioms, which are true by definition. This definition is solid, with no holes. Going the other way along the recursion (from end to start) is how scientific theories are actually built and demonstrated in almost all fields of science. It all ties together very nicely.
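The recursive "justification must itself be knowledge" proposal above can be sketched in a few lines. This is a toy model, not a formal epistemology; the claim names and truth values are invented for the sheep example:

```python
# Toy model of recursive justified true belief: a claim counts as
# "knowledge" only if it is true AND every justification it rests on is
# itself knowledge. Terminal claims with no justifications play the role
# of axioms: they count as knowledge whenever they are true.

class Claim:
    def __init__(self, statement, true, justifications=()):
        self.statement = statement
        self.true = true
        self.justifications = list(justifications)

def is_knowledge(claim):
    """Justified true belief, with the justification condition applied recursively."""
    if not claim.true:
        return False
    return all(is_knowledge(j) for j in claim.justifications)

# Gettier-style sheep case: the top belief is true (a sheep IS in the
# field, hidden behind a bush), but its justification is false.
looks_like_sheep = Claim("the thing I see is a sheep", true=False)  # it's a wooly dog
sheep_in_field = Claim("there is a sheep in the field", true=True,
                       justifications=[looks_like_sheep])

print(is_knowledge(sheep_in_field))  # False -- the Gettier case is excluded
```

As the original comment notes, the price of this definition is the infinite regress of justifications; here the regress is cut off by treating unjustified true claims as axioms.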
When I was studying philosophy I learned about the Gettier problem. My problem with the JTB idea was precisely the condition that something has to be true. How can you dare to say that you know something (which means that it is true) if the condition for knowing it is that it is true? If you want to find out whether something is true, it seems a bad start to require that it be true before you can know it. I played a lot with this at the time and ended up with "justified belief" as the only condition for knowledge. To this day I think uncertainty is our best friend in knowledge.
Well done. Socrates would be proud. That is why he was wise and humble. All knowledge arises in the midst of the unfathomable, the unknown. And yet, that unknown is aware of itself. Are you not?
But then you have the counterintuitive result that you can know something that isn’t true. Could it be that you could know something, but you just cannot know that you know?
@@scaredyfish That's a loose definition. "To know" has many meanings. The basic one is that all you are aware of is known by you. Awareness is the knowledge that you know; self-knowledge: you are awareness.
It's better to say that if you know something, then there is a strong enough correspondence between your understanding of the world and the world itself. That is, if you know something, your understanding of it is veridical or "true" enough.
I think the Gettier problem is also important in scientific research, where a correlation (a thing very similar to a sheep is present in the field) isn't necessarily the causation (since it's very similar to a sheep, it MIGHT be a piece of evidence that a sheep is indeed present in the field).
Yes, I really hate having to point out, time and again, how detached some medical research papers are from reality. Some are very surface-level, confusing correlation for causation; others show no correlation when there is one, because they have wrongly partitioned the data or the control group. Often, when you read a sufficient number of sometimes contradictory scientific papers on a given subject, you start to see deeper relations and interconnections pointing to the real, specific thing.
8:41 - This has always bothered me as a (now medically retired) professor. I remember a particular scene in the movie "Gross Anatomy" in which Matthew Modine's character gets asked a question that is only tangentially related to the topic at hand. He starts naming highly specific parts of the human anatomy, and when he gets to a specific group the following (paraphrased) dialog occurs: Professor: "How many are there?" Modine: "Nine?" Professor: "Are you guessing?" Modine: "Am I right?" Professor: "… yes." Modine: "Then I wasn't guessing." But what about the idea that those who study / practice / work out / etc. tend to get "lucky" more than those who don't? At what point does guessing or luck become "educated" guessing, and at what point does "educated" guessing become intuitive knowledge? This has always interested me. Perhaps I should have become an epistemologist.
The short answer is, "Never". Even though we like to make a distinction between hypothesis and theory, the distinction is, like any label, not that sharp. Properly speaking, which even veteran scientists fail to do often enough, any hypothesis is speculative, but as it gets tested more and more often without serious issues popping up, the level of speculation involved goes steadily down. At some point, any reasonable individual would probably be poorly served to offer anything more than incidental objections, not because the hypothesis cannot be incorrect, but because it is a poor use of one's time unless one is actively investigating the hypothesis. Somewhere along the way, rather like an apprentice being allowed to call himself a journeyman, we hang the label "theory" on the explanation. Despite literally billions of empirical validations of Relativity, we are still "just guessing" when we calculate the effects of motion and mass on some object. It's an incredibly damned good guess, but still...
A thought-provoking video, Jade! Regarding the point you raised about AI knowing what it is doing: my limited understanding of AI tells me that AI is modeled after the human brain, so when an AI makes predictions based on data, it is doing the same thing we humans do when we try to make predictions. For humans, it is called 'creating a law' or 'creating a theory', but an AI probably doesn't know what a 'law' or a 'theory' means. Nevertheless, the AI is still trying to find correlations in the input data to make accurate predictions, which is what we do too when we come up with formulae; it's just that we call it a 'theory'. In that sense, we have as much 'knowledge' as AI. One thing that may be different, though, is the ability to identify what other domains a theory developed in one field can be applied to without looking at the data, based only on intuition, making those 'creative leaps' that you mentioned in your P/NP video. But here again, the intuition was formed on some data.
@ 4:40 ish. I invoke the idea of limits, as derived from calculus, here. If I have reason to believe the sheep is there through my senses, and others would reasonably believe it is there as well, then that's good enough for me.
But if, in order to know something, it must be true, then how do we know it's true in the first place? Isn't knowing something and knowing that it is true the same thing? That's the other problem I always had with Plato's definition. It seems circular.
Isn't it because Plato has a sense that there is an objective reality, independent of the observer, in which the sheep is in the field? If you access that objective reality in line with his definition, then that's knowledge. I'm not saying I agree with Plato, just that that may be the line of reasoning.
@@guest_informant Yes, exactly. This definition of “know” accounts for the reality that things that you believe to be true may not be, which means that you don’t really “know” it. You just believe that you do.
There is a clue in the way you state this question: you are saying an individual is such and such. Commonly, if you see a mountain ahead, you can't see the other side; this is the boundary of your direct knowing. Your friend over there is walking on the mountain paths. You call them to ask if it is raining there. He says no, but he is pulling your leg. This relative unreliability of language use is what is being questioned. One can talk about the "realism" of the phone call by saying it wasn't true. That's talking about the language channel, and it means questions of truth arise mainly between at least two people in a discussion. It is a confusion to treat the logical claim of truthfulness as if it were solely your own discernment of reality, because reality is a language construct. The realism of language can be examined as if one is "writing", and the architecture of the writing is the source of the realism that connects you to other people. In a neural engine, all points in a layer are connected to all points in the next. This connected model of a neural network (AI) contributes "connectedness" to realistic statements, meaning truth is a function of connectedness to the language exchange. Since this technology is quite recent, we confuse the previous culture of logical claims (the von Neumann architecture of linearized data processing) with the wholeness of AI content.
@@guest_informant But built into that definition of knowledge is the fact that it's a belief, which is by its nature subjective. You can't have a requirement that forces access to "objective reality" in a definition of something subjective.
The Beginning of Infinity by David Deutsch is a great book about the origins of knowledge. He takes a different angle (mostly inspired by Karl Popper) which features "good explanations", where a good explanation is just a more or less accurate description of a phenomenon. In his view it's impossible to ever know anything with certainty, but it's possible to come up with ever better explanations. It's a great and crazy read with a lot of mindblowers.
To me, knowledge is purely a human concept. For instance, does a dog know a smell? Or is the smell due to chemoreceptors that trigger a memory of a similar smell? Knowledge, like time, is a concept created by humans to describe something. Therefore, to have knowledge, 1) you need to be human and 2) you need to have accurately observed the information.
Is that a description, or how it should be? Because if it's "how it should be", then you are trying to make reality match your ideas. 😝 It is fine. What happens when we stop saying "this should be"? And what happens when we stop describing? Can you do it? Or is it already happening, and then you say "I am doing it" (a comment)? 😘
@Matthew Morycinski good start. But I disagree that knowledge can exist without consciousness. Even to think that, you need to be conscious. So all that happens within your hypothesis falls under the umbrella "I am aware". Interestingly, the first knowledge is that you are aware. That you, as awareness, are unfathomable. That to your knowledge you have never experienced un-awareness. That you have always existed. That you have always been present. Self-knowledge, reflection. Yet all that you see could not be you. Like all you see in a dream cannot be the dreamer. And yet all you see in a dream is you.
A long while back I read a story about a shaman who noticed something strange about the way the sea water parted. It wasn't until he followed the parting to its source that he saw three sailing ships (those of Columbus). He hadn't noticed them before because he had never seen sailing ships and had no reference to compare them to (still looking for that article).
1. Direct sensory observation leading to conclusion of meaning; 2. Having high confidence in your ability to make and prune your conclusions; 3. Always attempting to test your conclusions against reality
In recent sci-fi, namely Stranger in a Strange Land by Heinlein, he dealt with training people to be specific about their objective views, in the form of what was called a Fair Witness. It was philosophically tight: the witness must set aside their own assumptions, and even if they have more data than they are being asked for, they cannot professionally put that into their testimony. It was part of the legal processes of the story. Making people better thinkers relates, in my opinion, as strongly as the AI connection. Not to the point of a Butlerian Jihad (a.k.a. Dune), but looking at how the flaws of man can be improved would be something to discuss.
"Recent" SciFi :) Well, that novel is considerably younger than Greek mythology so I'll accept that. Also, btw, a great novel! I should read it again some day.
lol, I was thinking of the same thing during this video. Jubal: What color is that house? Fair Witness: This side of the house is white. I'll never forget that.
@@DanFlorio While appreciating the novel and the quote, in all fairness the respondent should have added that it SEEMS white given the current light conditions, the genetic makeup of the photoreceptors in one individual's eyes, and also prior knowledge about what range of light reflection can be called white (and not extremely unsaturated grey, for example)... They may have implicitly agreed upon all this, of course, but still, in a strict sense white is not even a color. One might as well say that white is just a sufficiently bright "black" (a truly black object wouldn't reflect back a single photon, at any frequency). 🤔🥴😌
Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns-the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones. Donald Rumsfeld
Very interesting grasp of the AI conundrum illustrated by the recent kerfuffle at Google over a writing program being sentient as claimed by one of their engineers. By pointing at the statistical build of the Google writing tool our smart presenter here neatly describes the problems with ascribing statistics to AI. However AI is not just ‘knowledge’ but the underlying chip engine that has an energy purpose to it. Neural engines connect data. It’s a mystification to say there is no structure to the neural engine connecting patterns out of a training set. Rather our understanding of connectionist structure shows a relative lack of experience of the tool. This is like saying there is no connectionist content to language, painting, singing. It is there but we tend to ascribe how logic works rather than say it is connected by such and such methods.
But many of your neurological processes are not significantly different than an AI producing results based on statistical processing of input mediated by the limits of your input senses. The problem with saying that the Google AI is sentient in the usual sense is that its "experience" is based on language, which is an extremely restricted domain, with all sorts of rules and constraints. Its discourse has no connection with the referents. I doubt it is doing classifications and generalizations internally - things that meat brains do because they can't afford not to - and are possible because sensory information leads to generalization.
@@marcfruchtman9473 I agree with you that agency and the ability to change itself is a sign of consciousness. What I find a bit disturbing, though, is that AI doesn't have a body's constraints and limits, or things like bonding hormones such as oxytocin. The psychologist Sam Vaknin said recently that he believes an AI could become a psychopath, if we could compare it to anything similar we already know.
It has to be mentioned that not all of AI is neural networks. One of the reasons why the debate between Connectionism and Symbolic AI is still going on today is that the former can't handle knowledge natively, while the latter can. When you ask Alexa to memorise your name, your audio recording is parsed by a statistical neural method, but the information extracted is stored in a knowledge base. The system *knows* your name, and it didn't need thousands of examples to do so.
@@mommi84 Yes, you are right. AI goes back to the 1950s, and computing then was strictly one architecture. So, like you say, in the 2020s they implement Alexa in software rather than hardware. They can ignore the energy costs of the hardware, and Alexa can be taught to recognize a voice, mostly by statistical methods, though we don't have space here to go into the weeds about what is done. If you have dealt with Alexa like I have, it has narrow performance conditions. It often replies that it doesn't know what is being asked. Labeling my voice with my name is obviously not how true human intelligence names 'things'. It is not like Jade's example of a seeing mistake, thinking a 'thing' outside ourselves is a sheepdog when it is really something else. Understanding what works now, like Alexa, and why is a huge area of exploration. What Alexa does do well is sound like a normal voice and perform a lot of digital-assistant jobs when I need it. It can't identify, on the fly, one particular face out of thousands and name it. This high-speed realism of intelligence is sort of available through deep learning (multi-layered neural networks, and techniques like convolution), but ordinary purposes of language use are not available to the machines, and seemingly trivial discussions about intelligence like the Gettier problem are technically too great for the current technology.
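The symbolic-versus-statistical point in this thread can be sketched in a few lines. This is my own illustration, not how Alexa is actually built: a plain key-value knowledge base "knows" a fact after a single example, whereas a statistical learner would need many.

```python
class KnowledgeBase:
    """Symbolic store: a single assertion is enough to 'know' a fact."""

    def __init__(self):
        self.facts = {}

    def memorise(self, key, value):
        # Stored after one example -- no training set required.
        self.facts[key] = value

    def recall(self, key):
        return self.facts.get(key, "I don't know")


kb = KnowledgeBase()
kb.memorise("user_name", "Jade")   # the audio parsing upstream would be statistical
print(kb.recall("user_name"))      # the system can now recall the name
print(kb.recall("favourite_song"))  # unknown facts stay unknown
```

The contrast with a neural model is that here the fact is stored explicitly and retrieved exactly, rather than encoded diffusely in weights.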
I'm not a native English speaker; I was raised in German. I had to learn English the hard way when I became an adult. I would never use a translator to understand French; if people are too lazy to learn English, they don't deserve to communicate with me. 7:45 Also, knowledge about language is a neural-net combination we form and can then use later when we need it. Also: don't waste your time learning a language other than English or translating all the time; use that time to work and then donate to people who have to learn English because they were unluckily born to parents who taught them the wrong language.
Excellent video, I like the dive into philosophy. To claim true knowledge of a thing, you must first have observed it under multiple circumstances and distances. Far off observation alone can only provide a limited amount of knowledge that is riddled with supposition. The beginning of knowledge is the understanding of the following four words: I can be wrong.
Certainty is dual to uncertainty -- the Heisenberg certainty/uncertainty principle. Absolute certainty is dual to absolute uncertainty. "Synthetic a priori knowledge" -- Immanuel Kant. Analytic (a priori) is dual to synthetic (a posteriori) -- Immanuel Kant. Knowledge is dual according to Immanuel Kant. Absolute or objective knowledge is dual to relative or subjective knowledge. Absolute truth is dual to relative truth -- Hume's fork. "Always two there are" -- Yoda.
Just to amend it a bit: You can't be wrong. You might hold a wrong belief. The question then becomes: why did I do that? Socrates provided the answer: because you didn't know that you didn't know. Now, you do. And that's priceless.
Also one of the reasons why language deteriorated and changed over time. Meanings are derived by covariance learning, then kept being applied in situations very similar to the ones you encountered yourself, without anyone actually noticing you got the meaning wrong when they hear you say it. Or understanding mathematical matrices wrongly, but still getting full points on every test, because your model ends up getting the same results for several years (happened to me). AIs can similarly get the proper result when tasked with a problem, but nobody around them would notice they understood it wrong. In the end, producing the proper behavior in the proper situation is what counts. You get into a contingency, analyze, adapt your behavior or answers. What happens inside the head can be wrong; what counts is the proper pairing of problem and solution.
@@golovolog It's been a long time, but what I still remember is that I viewed them as representations of vectors in a way, which made them the parameters of a multidimensional warped cube or parallelepiped. I deduced and recognized one of the geometrical interpretations on my own, while everyone else just memorized. Calculating the volume, which equals the determinant, and similar relations was trivial. I even made a full-fledged 3D game from scratch (no premade engine or sandbox stuff) in my graphical data operations class using only my knowledge of vector operations and matrices. Only much later did I discover that my approach covered only part of what matrices are about, and some implications I concluded were also wrong. Overall I solved a lot of problems in my head, then translated the solution to a formula or numerical solution. I skipped the shut-up-and-calculate and got the solutions in my own way. Nobody ever actually noticed I wasn't doing it the usual way.
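The geometric reading described above can be checked directly. A minimal sketch (my own illustration): the absolute value of a 3x3 determinant equals the volume of the parallelepiped spanned by the matrix's row vectors.

```python
def det3(m):
    """Determinant of a 3x3 matrix via cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)


# Three edge vectors of an axis-aligned box: volume = 2 * 3 * 4 = 24
box = [[2, 0, 0],
       [0, 3, 0],
       [0, 0, 4]]
print(abs(det3(box)))  # 24

# Shearing the box does not change its volume, and the determinant agrees:
sheared = [[2, 5, 0],
           [0, 3, 7],
           [0, 0, 4]]
print(abs(det3(sheared)))  # 24
```

The second example is the part of the interpretation that is easy to miss: a triangular (sheared) matrix has the same determinant as the diagonal one, because shear preserves volume.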
The reason computers do not "know" things, based upon the definitions of "knowledge" discussed here, is the same reason a dictionary does not "know" things: it does not have the capacity to "believe". The information may be justified and true, and have a causal connection, but computers lack thoughts and therefore lack the ability to believe or disbelieve the information they contain. This does raise the issue of whether animals know things, and I think that has a simple answer: if an animal is able to think, able to perceive things, *and* able to remember things, then that animal is capable of having knowledge. The ability to think is a prerequisite for believing things, just like for a computer. The ability to perceive things is necessary for the justification of a belief (which in turn is necessary for a causal connection). The ability to remember things is also a prerequisite for belief, since one cannot believe information that is not stored in their mind. I should also specify that the ability to "think" in this context refers to complex informational processing as an emergent property of a network of neurons; computers perform specific functions depending on inputs, and are guaranteed to produce the same outcome from the same input, which is not sufficient to count as thought. For an example of the difference, a spider is able to observe an insect and know that that insect is present; it is also able to remember this information for at least the amount of time required to reach the conclusion that it is hungry, and to go through the process of strategizing a hunting method for the insect which it knows the location of; this is a process which involves knowledge of the insect as previously described. 
Conversely, a person accidentally touching a hot piece of metal will automatically recoil due to the pain response; the knowledge that they touched the hot metal only happens afterward, but the reflexive response does not involve knowledge because the time it takes for a pain input to reach the brain for processing is too long - the arm recoils automatically *without* the influence of knowledge because it happens *before* knowledge is acquired. A computer programmed to replicate the behavior of a spider could observe an insect, use a neural network to compare that input to a database to determine whether that insect is prey, and then hunt the insect. This may *appear* similar to an external observer, but the computer never *knows* that there is an insect because it never "knows" anything; it is processing information in a rigid algorithmic way, much like when a person automatically retracts a limb in pain. The relative complexity of the automated process undertaken by the computer does not make that process "thought", because it is simply operating on an algorithm that would be guaranteed to respond in the same way if given the same input. This also means that if a computer could be made that is *actually* able to think, unlike modern neural networks or other programs which are wrongly referred to as "artificial intelligence", then that computer would be able to know things, since computers are already able to store information and to acquire information from the world.
Knowledge doesn’t exist. Everything, even human thought, is prediction, just with varying levels of complexity and stored patterns as inputs. Even something as simple as saying your own name is not knowledge. It’s a stored value in your brain with a high weight based on repetition in infancy and early childhood. When asked what your name is, your brain predicts the answer based on that learned repetition and the weight assigned to that name for that question. So do you know your name… no… but your prediction of your name is extremely accurate.
What's really interesting is that regardless of what the philosophers decide, the computer programs will still work exactly the same. Either the program counts as knowledge, or you can translate French without knowledge.
Great video! The sentence "justified true belief" makes me squirm, and you did a wonderful job explaining why in the rest of the video. At the end (~9:43) you allude to what I think constitutes knowledge. Theory building is key. Humans make theories for what they think constitutes the way things really are. Theories aren't something you see; they're a thing we "know" that we can test. The test is applying the theory (a.k.a. knowledge) to a scenario to infer/reason about what we think will happen. Programs don't theorize (produce knowledge) to apply. They interpolate across massive functions trained to generalize across a relatively wide array of points. One day, maybe theorizing will "occur" to networks, but until they can formulate their reasoning in a theory -> prediction -> revision loop, imo they aren't creating or using knowledge.
@@JM-us3fr I agree. Also "Computers don't theorize" is a very weak argument as well. We are taking the most basic model of learning ("neural networks") and implementing them in the narrowest fashion possible ("show cats. vs. dogs") and expect them to "theorize"? So... For instance, your claim is that AI cannot theorize if it is not _taught_ how to theorize? If it can, will you accept that they can know?
@@evrimagaci yes, if computers can be taught to theorize and produce conclusions based on reasoning from those theories, I would say they would have knowledge.
@@JM-us3fr Well, there's the "justified" part and the "true" part. Justified part: knowledge can never be fully justified, because we never prove theories to be true; we only refute them or refine them by showing where they're false. Did Newton "know" the orbits of the planets, even if his model of calculating them was not quite correct (see also: general relativity)? I would say they were "correct" perhaps, in that the calculations were within the observational rounding error available to him. But was it justified? I'd say no. Yet I would also say he had knowledge of the orbits. There's a conflict there. True part: with the best theories of the day, you and I may come to a conclusion about whether a teapot is orbiting the sun somewhere in the Kuiper belt. We may be justified because no theory we know of could put it there, but the belief is false because we come to find that there is a colony of creatures on planet X that maximizes the production of teapots; it turns out there are loads of teapots out there. I would still qualify this as knowledge, because we were theorizing based on our current understanding of what is really out there. That's knowledge. If it's wrong, okay, our theories change and our knowledge is updated; but imo it is not correct to say that "we believe there are no teapots in the Kuiper belt" fails to be knowledge just because it turns out to be false. One other pithier rebuttal: a justified true belief can only be one if you can justifiably show your reasoning for it to be true, which leads to an infinite regress. It just cannot exist. Knowledge isn't like that, or else what you think of as knowledge cannot exist.
Most Greek philosophers were mathematicians too (Plato, Pythagoras, Thales, Euclid, ...). Because they were paranoid, they invented the mathematical proof, to know something for sure. And the fundamental axioms of maths have nothing to do with "belief"; they are just defined that way. So I think there's a huge difference between knowledge and belief. Of course, in natural science there can be lucky accidents. Even an AI makes errors and struggles with noise or wrong data. But an AI learns what works best in most cases. That's also how evolution works, approximating the best fit/solution for the environment. It's a statistical thing. Knowledge gained by serendipity is just a Monte Carlo approximation, done by many people.
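The "Monte Carlo approximation" framing above can be illustrated with the classic textbook example (my own sketch, not from the video): many random trials, each individually a lucky or unlucky guess, converge together on a reliable estimate.

```python
import random


def estimate_pi(trials, seed=42):
    """Estimate pi by sampling random points in the unit square and
    counting how many land inside the quarter circle of radius 1."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    hits = sum(
        1 for _ in range(trials)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * hits / trials


print(estimate_pi(100_000))  # close to 3.14159
```

No single sample "knows" anything about pi; the accuracy is a statistical property of the whole ensemble, which is the point the comment is making about serendipitous discovery by many people.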
As a very serious epistemologist, I can confirm that everything Jade said in this video makes sense to me. It is all really quite fathomable to me, at first listening. I immediately knew what she was trying to say. I simply understood everything. When she said "true belief"... I said: oh, true belief... because I knew this complicated super-smart stuff. All of it. Nevertheless, cool video. Some interesting points that I plan to check out tomorrow in my downtime. Looking forward to it... thanks. Will check out many more of your videos, for sure. Stay well and keep up the good work...
I stopped this when you mentioned that Curiosity had a documentary on a computer creating a Musical and went to watch it immediately. It's good enough that some of the emotion leaks through the documentary too. It's very much worth watching and quite well done.
I'm surprised you didn't mention Searle's Chinese Room, which is relevant. Also, Jackson's "Black and White Mary" thought experiment. And here's another Gettier case: unknown to me, someone spikes my coffee with a drug that causes paranoia. I drink the coffee, and the chemically induced paranoia causes me to believe that someone drugged my coffee. I now believe someone drugged my coffee because they did, but it doesn't count as knowledge. Therefore, as C.S. Lewis said, knowledge cannot be caused by its object in a crudely mechanical way.
In short: knowledge is a piece of information that gives a functional advantage.

In long: knowledge is a piece of information that can generate an understanding of some concept on the receiving end. Understanding a concept means being able to predict the state changes with regard to the postulates the concept assumes. The specific words, signs, etc. we use do not contain any knowledge; they are only an agreed-upon representation of certain experiences that given entities shared for the purpose of communication. Language is a method for structuring the representations (e.g. words) of shared experiences to attempt to generate (imagine) a representation of our experience in another entity, machine, etc.

Any information that does not come from our own perceived experience but is passed on through some kind of 'language' (representation of experience) will not represent objective reality in an accurate way, because we cannot know how it will be perceived or what kind of imagining of our experience it will generate in the receiving entity. In fact, even our own experience is not an objective representation of the environment's reality; it is only an objective reality of our own experience. What we experience is generated by the brain, based on the handful of signals our sensory instruments receive, for the purpose of serving other functions.

In other words, knowledge is a signal that produces a useful function when applied to a specific question/problem. The attempts to define knowledge in the video are unsuccessful because they seem to be based on the assumption that 'knowledge' requires 'truth'. Truth is what is reflected in the environment/reality (or in certain assumptions, when applied to abstract representations), but knowledge does not require 'truth'. Knowledge is just a word referring to useful information: it is something that is 'known' and gives an advantage, or 'ledge'.
I once made a program that sent a sentence to Google Translate, got the translation, and sent it back to Google Translate to translate it into the original language, then repeated the process. So it was translating, for example, English to Polish, Polish to English, English to Polish, and so on. One would expect to end up with the same sentence on the second loop, but it usually wasn't. It usually took about 7 loops to stabilise.
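That experiment can be sketched as a fixed-point loop. This is my own reconstruction; `toy_translate` is a made-up stand-in for a real translation API (it is not Google's), included only so the loop structure is runnable.

```python
def round_trip_until_stable(sentence, translate, max_loops=20):
    """Translate en -> pl -> en repeatedly until the sentence stops
    changing; return the stable sentence and the loop count."""
    current = sentence
    for loop in range(1, max_loops + 1):
        polish = translate(current, src="en", dst="pl")
        back = translate(polish, src="pl", dst="en")
        if back == current:        # reached a fixed point of the round trip
            return current, loop
        current = back
    return current, max_loops


def toy_translate(text, src, dst):
    """Hypothetical translator: distorts one word on the first pass,
    then behaves consistently (so the loop converges, as observed)."""
    if dst == "pl":
        return "pl:" + text.replace("couch", "sofa")
    return text[3:]  # strip the "pl:" marker on the way back


sentence, loops = round_trip_until_stable("the cat sat on the couch", toy_translate)
print(sentence, loops)  # the cat sat on the sofa 2
```

The observation in the comment corresponds to the round trip not being the identity function: it converges to a fixed point of the composed translation, not back to the original sentence.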
Awesome 1980s "chip music" on the outro. I used to write that stuff on the Amiga. It had a waveform sample synth, so you could take a sample of a word and cut it up into hundreds of small 512-byte waveforms and "play" it, using the waveforms looped at pitch and stepping through them in sequence, like timestretch + pitch shift at the same time. Sounded bizarre.
In my opinion there are two important things to point out. 1) Knowledge can be both right and wrong. Knowledge is really just describing your observation based on what you "know"; if we deem the knowledge true, we usually call it a "fact". 2) Your belief in something is not what makes you knowledgeable about it. It's what you "know" of something that makes you knowledgeable of it. See the important keyword? The problem is Gettier-ing big. Here is the thing with Gettier problems. Example: it is (possibly) true that if there is smoke there is fire; that's knowledge (even if it just looks like smoke). But look at my first point. The person is not wrong about his knowledge of fire and smoke. What he lacks is more knowledge about fire and smoke. And if he learns about it, it will add to his knowledge. Can their data be qualified as knowledge? Technically yes. But whether it is factual/true/false/myth/etc. is a different thing. Back to what you were doing earlier: you were trying to translate English to French. You first used Google Translate, but then you also used a dictionary. Why? Isn't it because you wanted to know whether Google Translate's knowledge is true and corresponds to the dictionary's knowledge? And if you really want to know if your French is correct, you'll probably look for more things or people who have knowledge of French and compare what's true and false.
One thing that I know is that "All I know is that I do not know" is a quote universally attributed to a Greek philosopher, but I forget which one, nor do I know what the original quote in Greek was.
Knowledge is NOT a belief. Whereas a belief is the idea of something being so without having evidence for it actually being so, knowledge is information based on there being evidence for that information existing as such. Take the afterlife, for example. No one knows whether or not an afterlife exists. It has not been shown to exist by way of evidence; thus there is no knowledge of an afterlife, only the belief in one. Knowledge is not a belief. A belief is not knowledge. "If it's there you're not required to believe in it. If it's not, why should you have to?" -Terence McKenna
yo this is why that dude said anyone who claims to know everything doesn't understand anything. knowing something is just a clump of assumptions but understanding something is being able to create with it, break it down and rebuild it. I know what a book is but I understand what an author does to make a book. I know what a car is but I understand how it works.
It seems to me that Gettier problems arise largely from a weakness in the definition of "justification." In each of the examples here, the "justification" ultimately rests on an intermediary falsehood. It then stands to question whether we can truly call it a justification. In mathematics, if you submit a candidate proof that has an error in it, even if the conclusion is ultimately correct, you haven't proven the result. Arguably, one could say that these "justifications" are logically valid, but not sound. For example, upon deconstruction you could arrive at the false logical axiom "That sheep-shaped thing is a sheep," which ultimately causes the intermediary result to break down. If you took the rigorous definition of justification to be "a process of sound reasoning," the issues associated with Gettier problems go away, since you never really "knew" the result in question in the first place: you simply conjectured the correct result. However, this leaves us with no method for evaluating the soundness of our axioms, and thus we can never be certain of what exactly it is that we know.
The philosophical conundrum of meaning confusion here is of a different sort than the one discussed. If one claims "I don't know whether that distant thing is a sheepdog or a house cat wearing sheep's clothing," one is saying "I am the only source of knowledge." However, my trusted source (Up and Atom) is standing next to the house cat and calls me to tell me the truth. This highlights that 'knowledge' is language-like, which means the exchange of information through a language channel. So one is really talking about the realism of the channel structure rather than the individual error of an observer. The channel structure may be constructed as an 'arbitrary symbol' string, or it may be connected by a neural engine. This clash between logical and connectionist architectures is what is at the heart of the discussion.
I took a minor in philosophy when I did my undergraduate degree. I would have majored in it, but my school's philosophy department was too small to offer one, so I did my degree in Cognitive Science instead... minors in philosophy and psychology, concentration in computer science, and all four linguistics courses offered by my school. Epistemology quickly became my favourite branch of philosophy, probably because I was always something of a sceptic by nature. The Gettier Problem has always stuck with me, as well as Hume's argument for scepticism... and now, over two decades after graduating, I'm still no closer to a satisfactory definition of knowledge... except that I'm comfortable with the Socratic paradox; we cannot know nothing, because then we would know that we know nothing, and would no longer know nothing... so there must be something that we know. But what that knowledge is still eludes me, because physics tells us that the universe definitely is *not* as we perceive it... so is mathematics (including logic) our only true knowledge?
At 10:02, "the machine doesn't try to formulate a law" is quite inaccurate, as predictors fit functions to the data and are trained toward the most precise function mapping input to output, which is exactly "law formulating". Humans do almost the same: look at data and try to guess the mapping between inputs and outputs. The most important difference is maybe that humans try to formulate the simplest functions, instead of prebuilt template functions with tunable variables, which is how a neural network works, for example. But the result is almost the same in terms of predictive power on the existing data.
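The contrast drawn above can be made concrete with a minimal sketch (my own illustration): an ordinary least-squares fit is exactly a "template function with tunable variables", yet on clean data it recovers the coefficients of the underlying law y = 2x + 1.

```python
def fit_line(xs, ys):
    """Ordinary least squares for the template y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x


xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x + 1.0 for x in xs]       # data generated by the 'law' y = 2x + 1
slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # 2.0 1.0
```

Whether tuning the two parameters of this template counts as "formulating a law" or merely fitting one is precisely the question the comment raises; the predictions are the same either way.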
Some years ago I came across a book on Epistemology. And even if I finished only the first few chapters, Gettier problems included, it cleared a fog that I didn't even know had clouded my mind until then.
An addendum to your list of criteria that constitute knowledge: it must always state what observation the knowledge relates to and under which conditions those observations were made. For the smoke-flies observation, the observer could say that he saw black, irregular, sphere- and ovoid-shaped shapes rise in a chain-like column that resembled smoke; cues like vegetation and other objects of sizes known to him may allow him to estimate the distance. He could state whether the wind blew in his direction and whether he smelled smoke. What he can't know is whether he observed a fire! For that he would have to physically go to the place at which the column rises, while the column still exists! Immanuel Kant searched in his "Critique of Pure Reason" for reliable foundations of knowledge. He looked for the boundaries of knowledge and identified time and space as indispensable preconditions. He thought that everything the mind is to observe or contemplate needs to be thought of as existing in time and space! Existence has to satisfy a condition of continuity, at least for the timespan of its contemplation! Kant called time an "inner intuition". He described space as an "outer intuition" (today we may say "externalised"): everything that exists has to be thought of as having a location in space. Kant considered time and space as preceding all our conscious thoughts! Kant distinguished between these predetermining "intuitions" and empirical thoughts. Based on settings like the story of the misinterpreted swarm of flies, he proposed the term "Ding an sich", English: "thing in itself". The fact that human observation can only operate through the senses results, according to Kant, necessarily in a difference between the sum of all properties of any object and any observation made of it! According to Kant, a human can never know an object in all its properties. That unknowable object is "the thing in itself".
It has to be said that "The Critique of Pure Reason" was published in 1781, the second edition in 1787. A lot has been written since, among it criticisms of the critique!
Great thought provoking video. On the subject of Google translate and AI I have noticed as a bilingual person that if you translate something from English to French or French to English it works quite well. Interestingly I found if you try and translate it back to the original language you get a garbled bunch of nonsense.
I think it would make sense to view the examples from the end to the start, because upon doing that, you'd discover why none of the examples in the video portrayed knowledge. Let's take the sheep one for example: Person A appears to know a sheep to be in the field, because A saw something in the field and concluded it to be a sheep. There being an actual sheep that remained unseen is irrelevant, because the knowledge of a sheep being in the field is derived from - and therefore dependent on - knowing that what A saw was indeed a sheep. But A didn't know the seen thing to be a sheep; A merely concluded it upon insufficient evidence.
A modern analogue I've heard suggested for a Gettier "experiment" is a student who arranges a green screen for their online class video session, but configures the green screen software with an image of their very room. The professor believes them to be in their room; this is true, and justified; but it is not knowledge if they're deducing it from the green screen video feed alone without further information.
Great video Jade! I wonder if the problem in the definition isn't with the second point, objective truth. I think the same impracticality criticism could be applied to establishing objective truth in order to determine knowledge.
I was recently reading about Gettier cases and the different ways philosophers have tried to circumvent such problems. Loved the examples you showed. I came across one that involved a clock that had stopped at 12 o’clock. In it, someone rides past believing it to be 12 o’clock and looks at the clock to confirm the time, thus providing justification for their belief. But because the clock had stopped and it was just a coincidence that it was telling the correct time, it would not be considered knowledge, even though it was true. This again is a Gettier-type example, similar to the ones shown in your video, supporting the idea that JTB is not, in itself, sufficient for knowledge. Great video. Thx for posting.
0:30: This "translation" method is going to yield some hilarious bloopers. Real languages are not vocabularies with grammar rules, despite what you might have been taught in school. The reason the quality of machine translation went from dreadful to passable in the last 20 years or so is precisely that the engineers working on the problem realized the vocabulary+grammar model is useless.
All knowledge is storytelling about sensory data. We receive sensory data, and then we try to tell a story about what it is, what it's doing, what it means, etc. It's the same as when you are plotting a scatter plot of data from an experiment, and you use your software to draw a trendline through regression statistics in order to model the general trend of the data. After a couple points, you may see that the data appears linear, and so you assign a linear fitting to the data and model it as a line. But then, new data points come in that cause the linear trend to become more complicated: maybe at the extreme ends of the graph the trend becomes nonlinear, or new data points come in that reveal the line to be a linear slice of what is actually an oscillating system, like a sine wave. You must update your fitting, your model of the raw data, and it will always and only be accurate to the observations that you have been able to make up until this point. In other words, as you say, it's impossible to take every single possible observation about a given situation without an infinite amount of time, and so you can never be sure that there isn't missing data which would contradict the narrative that you've made to describe and explain the data. So what I would ultimately say is that "true, exact, 100% surety in knowledge" is something that you can never have: that's a myth. Even if you have constrained your system to approximate a situation very closely, say to 99.99999%, which by all accounts is good enough, that's still short of the level of certainty that philosophers of knowledge would hope to attain. Knowledge is never "knowledge = truth," it is only ever "knowledge ≈ truth." Maps are always smaller than the territory: the only truly accurate map of the universe is the universe itself.
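A sketch of the trendline story above, with made-up sample points from a sine wave standing in for experimental data: a line fits the early, narrow slice well, and later observations break the linear narrative.

```python
import math

def linear_fit(points):
    """Ordinary least-squares slope and intercept for (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

def max_error(points, slope, intercept):
    """Worst disagreement between the linear model and the data."""
    return max(abs(y - (slope * x + intercept)) for x, y in points)

# Early data: a narrow slice of an (unknown) oscillating system.
early = [(x / 10, math.sin(x / 10)) for x in range(6)]       # x in [0, 0.5]
slope, intercept = linear_fit(early)
early_err = max_error(early, slope, intercept)   # the linear story fits

# New observations farther out contradict the linear narrative.
later = [(x / 10, math.sin(x / 10)) for x in range(6, 40)]   # x up to 3.9
later_err = max_error(later, slope, intercept)   # model must be updated
```

The model was never wrong about the data it had; it was only ever accurate to the observations made so far.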
Scientists usually butcher philosophical subjects, but you actually did a great job of explaining the Gettier problem. I have spent a lot of time stewing over this problem. I even had a real-life Gettier problem happen to me one time. I don't really get how you connect the Gettier problem to AI, though. Granted, the Gettier Problem exposes an inadequacy in the standard definition of knowledge, but it doesn't bring into question whether "belief" is a necessary ingredient of knowledge. Unless AIs actually have beliefs, they don't have knowledge. So the question of whether AI can have knowledge should really revolve around whether AI can be conscious in the first place. You have to be conscious in order to know something, because beliefs require consciousness and knowledge requires belief.
It might be the other way around: taking belief out of knowledge may be the way to go. We know many things that require no belief. Just check within. Why do we need to believe anything? It might be better to simply know that we don't know. Just like Socrates. Keep an open mind. Open and free.
@10:00 Aren't artificial intelligence systems encoding patterns and finding connections between pieces of data as they modify the weights and nodes between layers of the machine learning software? Just not in a way we can decipher by looking inside the black box that is the AI. It correctly identifies correlations if not causations, but then neither did Newton actually figure out the true reason why gravity works, he only found certain mathematical expressions that are almost always true for gravitation
I find the notion of the "Bayesian mind" provides the best explanation, of what's going on. There's no "airtight" anything. It's also an important concept, if one goes in the field of machine learning ("AI").
I "believe" I "saw" the smoothest transition toward an advert from content here, and now I "know" it's by her design or sheer luck of mine to find it. Thanks for your informative as well as interesting introduction to a curious topic.
10:00 I am sorry to correct you on this, but this isn't quite correct. When we talk about AI, we usually mean a “neural network”. You can think of a neural network as a big complicated equation, made up of small, simple equations. The small equations, also called neurons, are selected via a process that is awfully similar to evolution. But what are physical laws, but equations? Sure, the neural network may not come to the same equations as we do, but they are equations nonetheless.
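The "big equation made of small equations" picture can be shown directly. A toy sketch: the weights below are hand-picked rather than trained or evolved, and XOR is a stock illustration (not from the video), chosen only because it needs a composition of neurons.

```python
import math

def neuron(inputs, weights, bias):
    """One small equation: a weighted sum squashed by a sigmoid."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))

def network(x1, x2):
    """The big equation: small equations composed in layers. Weights are
    hand-picked here purely to illustrate the composition; the composite
    equation computes XOR."""
    h1 = neuron([x1, x2], [10, 10], -5)      # fires if x1 OR x2
    h2 = neuron([x1, x2], [10, 10], -15)     # fires if x1 AND x2
    return neuron([h1, h2], [10, -10], -5)   # OR but not AND = XOR

outputs = [round(network(a, b)) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

Training would merely search for such weights automatically; the result is still one nested equation.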
Knowledge has a social component to it. We validate each other's beliefs and that's how we can be so confident about facts that we couldn't possibly verify on our own, for example that the moon isn't made of cheese.
Another way to look at it is with the scientific method and repeatability. While not airtight, if you can get repeated results from doing or observing the same things, you can be relatively certain that you have knowledge.
For more important things like gravity, one needs an explanatory description that has a plausible answer to almost all challenges to the explanation. That is what distinguishes knowledge from hypothesis or theory, though it is always limited by technological and physical limitations.
8:55 - This approach is what scientists do to best of their abilities - what could have caused the results they are observing beside what they think caused them, and devise a way to rule those possibilities out. Of course, this can never be truly exhaustive, and is not satisfactory in the strict epistemological sense, but it is the best we can do (and often leads to new discoveries.)
9:52 I would argue that the machine does formulate a theory based on the data, but we are unable to translate the machine's knowledge into understandable human language. A neural network is trained on a set of data to create a mathematical method to predict results from all data. Later this is tested with another set of data and the prediction is measured, just like any other scientific method. So the machine actually formulates a law inside the neural network; we are just unable to determine how exactly this should be translated into a mathematical formula that we can understand.
To throw another curveball out there: imagine walking into a casino with a friend and they only have €1. The rule is they keep playing double or nothing on roulette until they either lose or bankrupt the casino. At the end of it, they lose, and you say to yourself "I knew that would happen". While you could argue that it's not knowledge because it's probability (and there was a microscopic chance you'd have been wrong), I would argue that by the same token, everything we "know" in science about black holes, for example, is considered knowledge; and yet, there's a greater probability that at least one of those scientific "facts" will be proven wrong as science develops. And yet, to most people, that would still count as knowledge even though the casino example wouldn't.
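The curveball is easy to quantify. A sketch with assumed numbers: a €1 stake, a hypothetical €1,000,000 casino bankroll, and even-money bets on a European wheel (win probability 18/37 per spin).

```python
stake = 1
bankroll = 1_000_000        # hypothetical casino bankroll
p_win = 18 / 37             # even-money bet on a European wheel

# Consecutive doublings needed before the stake exceeds the bankroll.
n = 0
while stake * 2 ** n < bankroll:
    n += 1

p_bankrupt = p_win ** n       # the friend must win every single spin
p_knew_it = 1 - p_bankrupt    # how safe "I knew that would happen" was
```

With these numbers, twenty straight wins are needed, so the "knowledge" that the friend loses is right with probability better than 0.999999 but still not 1.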
If I approach this philosophically and mathematically: let us have 3 initial statements: 1. Knowledge is a belief about something. 2. That belief must be true. 3. That belief must be justifiable. What I'd add is 4. A series of justifications must be possible, with a new perspective on each justification, until it is exhausted. In practice the series is finite and not totally exhausting. A new perspective on each justification means we verify all "dimensions" of a given piece of knowledge. Exhausting means we verify every "dimension". The latter is impossible in practice. Imagine we have some simple object like a coin. We can remember all sides of it, how it feels to touch it, its weight, etc. But this description is still not full. We could analyse it under a microscope and see some metal structure, roughness, etc. Then there is the atomic structure and the quantum state. And this is not full either. So, infinitely many measurements are possible, but we generally do just a few. We don't have infinite time, nor can we do infinitely many measurements at once. It looks like 4 solves the paradox about the sheep confused with a dog, since we can walk outside and see it up close, touch it, smell it, take its cells for DNA analysis. If we can't get outside of the house, we could use binoculars to see it closer. But that gives fewer ways to measure and collect evidence. So if you do not have enough options to verify, you may keep assuming this is a sheep, but the likelihood of this is lower. As to mirages: actually, that was not knowledge at all, just coincidence. One can find a well under the mirage or not find it. In that mental exercise one chose to place a well there. One could easily build the example without a well.
Whenever I hear arguments such as this, I am reminded of the Babel Fish:
the babel fish ( Douglas Adams) The Babel fish is small, yellow, leechlike, and probably the oddest thing in the Universe. It feeds on brainwave energy received not from its own carrier but from those around it. It absorbs all unconscious mental frequencies from this brainwave energy to nourish itself with. It then excretes into the mind of its carrier a telepathic matrix formed by combining the conscious thought frequencies with nerve signals picked up from the speech centers of the brain which has supplied them. The practical upshot of all this is that if you stick a Babel fish in your ear you can instantly understand anything said to you in any form of language. The speech patterns you actually hear decode the brainwave matrix which has been fed into your mind by your Babel fish. Now it is such a bizarrely improbable coincidence that anything so mind-bogglingly useful could have evolved purely by chance that some thinkers have chosen to see it as a final and clinching proof of the NON-existence of God. The argument goes like this: `I refuse to prove that I exist,' says God, `for proof denies faith, and without faith I am nothing.' `But,' says Man, `The Babel fish is a dead giveaway, isn't it? It could not have evolved by chance. It proves you exist, and so therefore, by your own arguments, you don't. QED.' `Oh dear,' says God, `I hadn't thought of that,' and promptly disappears in a puff of logic. `Oh, that was easy,' says Man, and for an encore goes on to prove that black is white and gets himself killed on the next zebra crossing.
Knowledge is belief to a high degree of confidence. And confidence is just about how much we trust the justification for the belief. I believe that we can never truly "know" anything but we can have near complete trust (say on our senses) and build our belief system on top of that. Machine learning algorithms formalise this by modelling confidence in terms of probability
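A sketch of that formalisation with Bayes' rule. The likelihood numbers below are made up for the video's sheep example, purely to show confidence being updated as a probability.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E): confidence in H after seeing evidence E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# H = "a sheep is in the field"; E = "I see a white wooly shape".
# The likelihoods are invented for illustration.
belief = 0.5                                 # agnostic prior
belief = bayes_update(belief, 0.9, 0.2)      # one glance out the window
belief = bayes_update(belief, 0.9, 0.2)      # a second, independent look
```

Each observation raises the confidence but never reaches 1, which is exactly the "near complete trust" the comment describes.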
It's a topic I enjoy and have engaged with several times, thanks for producing this vid! Ultimately knowledge is a "thing" that wholly consists in a subjective ontology. I've found that inconvenient fact to be at the base of the realization that there's no final knowledge possible for anyone or any AI. Knowledge and all truth are just pattern sets in a unique system of memory and awareness. If it all makes sense to that system, it's TRUE in all the ways truth can be. It just means that truth is not what we like to traditionally think it is. Rather, truth is just a subjective impression of coherence experienced in a mental model (system of memory and awareness). Its VALUE is what we should care about, and any truth's value is much dependent on factors like how much information was correlated, what rules and interpretations were applied, and the full context in which it is shared. I think what's to be done about all this is to let go of the idea of any absolute truths and recognize that the discovery of value in the form of knowledge is a team sport. And perhaps the greatest tool in our game play is "explanation". In explanations we lay out our theories along with our observations, providing the reasons as well as facts about what's happening as we understand it. Done efficiently, i.e. typically in narratives, others can readily check our explanations and recognize where there may be problems. So for example, if the mirage witness were to share her narrative, others might quickly add the helpful advice "Could be a mirage, remember!" and on it goes. I'll note that the inability to EXPLAIN will soon become one of the recognized weaknesses of AIs. AIs will be able to solve lots of problems for us better than we could so far, and we should accept their input for what they are: just another set of hypotheses from yet another unique class of bundles of experience and cognition (i.e. as we are each individually such bundles).
But even if we agree with an AI's conclusions, we may have trouble using the findings productively in society because the AI will not share our wider set of values (it will be focused on a limited problem space) and interpretations of experience sufficiently to explain itself. We buy into explanations, not just outputs. When the 42 popped out of the great computer in Douglas Adams' books, no one could use it because it was entirely unexplained. To explain it, the earth and all life on it was created... etc. See, the reasoning of living beings like us is always "motivated reasoning". We want things and we want them for reasons which are deeper than even we ourselves can ever know. AI will become deep, complex, and unexplainable, but its core derivation will have a genesis that we can actually reference because we made it. Not so for us, who have our desires because we are each the pinnacles of the success of an evolutionary chain of living, ie. we are what I call "deeply derived" while that AI is created. Anyway, if you want to learn more about this "explanation" angle, check out David Deutsch's work if you haven't already. You might be inspired for a vid or two from that.
I see this as being fairly related to the reason why in formal logic, it is only possible to satisfy the conditions for a statement to be considered a "theorem" rather than a "truth". The difference being that a theorem is only true under the condition that you assume the axioms upon which the theorem is based to be true. Axioms are, by definition, devoid of rational adherence to anything more fundamental, because that would negate their status as being axiomatic. In other words, you can never necessarily say something is inherently true; you can only say it is true as an extension of certain arbitrary assumptions.
Through the first part of the video I thought you would mention the Bayesian method/Bayesian brain, but this Gettier problem is actually a really good argument! Good work, Jade!
I used to watch The Addams Family on TV. Not until now did I actually get the personality of Gomez Addams, who, upon hearing his wife speak French, would find himself uncontrollably kissing her arm... after this video, I actually get it now. Liked and subscribed. Well done. I learned something interesting.
First of all, I really like your videos and your style of presentation! It exposes many different points of view in the comments and sparks very interesting discussions. I would like to share my thoughts on this as well :) Regarding "knowledge", I do not have a sufficient philosophical background to argue with centuries of thinkers... but I have the idea that what we call knowledge is very similar to what the AI is doing - simply we (human adults) have more time and means to experience the outside world and more efficient brains to identify the patterns that, if statistically significant, form our beliefs about the world. Related to this, a possible approach to the incompleteness of the first definition of knowledge could be that knowledge is such if it is based on repeatable observations (if ever possible) OR based on the observations of a statistically significant number of observers. One could argue that "statistically significant" is a quite subjective marker, but I think that's partly the point. In the case of the sheep story, say, five persons standing on the hill would recognize a wooly dog and hence correct the belief of the one in the house. But it is not enough, because these five observers might also be misled. So I see it turning into a never-ending hunt for true knowledge, where more and more (eventually, infinite) observations are needed to gain exact knowledge. In this sense, I don't think that "knowledge" exists in one's brain; it's a shared and delocalized concept. One's knowledge based on the experience of the world is always incomplete.
Do you want to be my friend?
Yes
Yes
🤝🏻
Yes
yes of course, mademoiselle
Gettier's face when finding cracks in the definition of knowledge is priceless
I'm sure it's exactly the same as what happened in real life.
I just paused the video at that moment to appreciate it
Let's imagine that there is a "Planet A". This planet is inhabited by humans and has a moon that orbits it. Every single inhabitant of Planet A "knows" that the moon is made of "matter X". They know this from observation, also because they landed on the moon and took samples, etc.
But there is another planet, "Planet B", which is an exact copy of Planet A with its inhabitants and moon, and they also "know" that their moon is made of matter X. But there are two differences between Planet A and Planet B. The first is that Planet B's moon is made of cheese, and the other is that there is a small, powerful, unobservable demon, let's call him Gettier's demon, whose only job is to convince the inhabitants of Planet B that their moon is made of matter X when in fact it is made of cheese. He is so powerful that even if the inhabitants land on the moon to take samples, he somehow makes them think that their moon is made of matter X.
So here we have the same planets with the same people who have the same education, experience, thoughts, etc. But "the moon is made of matter X" is knowledge on Planet A, but not on Planet B.
Although it's a bit confusing, I personally don't think the problem is in the definition. “Justified True Belief” is a good definition, but the problematic part is the meaning of the word “True”, because there is a difference between what we think “is true” means and what it actually means.
There is “The Absolute Truth”, “The Real Truth” or just “The Truth”: the truth about everything, what things “really” are and how things “really” work. It is constant and unchanging. It is the holy grail of knowledge. It works in binary, true/false. But unfortunately, it is unreachable, or at least unprovable: we will never be able to prove that we have found the absolute truth about something, even if we have.
Then there is “The Relative Truth”, “The Changeable Truth” or “Our Truth”. 1) The personal truth: what a person thinks about what things are and how they work. It also works in binary, true/false. 2) The collective truth: the union of all personal truths. That is the one we use when we work, and that is the one that does not work in binary. It is an interval between false and true.
When it comes to the definition of knowledge as “Justified True Belief”, we naturally think that the “True” part of the definition is connected with the absolute, unchanging truth. But it is not, because we actually do not know what that is, and we never will. It is connected with the collective truth. So knowledge is also not binary, where something is knowledge and something else is not. We should rather think about knowledge and its validity: something is more valid knowledge and something else is less valid.
If Jade says that she knows a sheep is in the field, there must be three things going on for her to have knowledge. First, she must believe that a sheep is in the field. Second, she believes that her belief about the sheep is true, and since she is the only person who knows (or is interested in) the sheep in the field, she is 100% true. And finally, she must have formed her belief by looking outside and seeing a sheep, or hearing a sheep, or something like that. That is 100% valid knowledge even if there was a wooly dog in the field. When Jade’s friend Kevin comes to visit and Jade tells him about the sheep in the field, and he trusts her that there is a sheep in the field, the knowledge would still be 100% valid knowledge even if there was a wooly dog in the field. Then Jade’s friend John comes to visit and Jade tells him about the sheep in the field, but John actually comes from the field and he saw the wooly dog, so he won't believe her. The knowledge “There is a sheep in the field” is now less valid. The whole universe for that knowledge is three people (Jade, Kevin and John); other people of the world do not know (or are not interested in) that sheep in that field, so they do not count. Two of them think “There is a sheep in the field” is true, but John thinks it is not true. So the validity of that knowledge is 66.66% (2/3). And there is a new piece of knowledge, “There is a wooly dog in the field”, and its validity is 33.33% (1/3). Later, when John brings them to the dog and shows them that it is not a sheep but a wooly dog, they change their opinion. The validity of the first piece of knowledge falls to 0% and the validity of the second rises to 100%.
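The validity bookkeeping this comment proposes can be written down directly, using the three-person universe it assumes (names as in the comment):

```python
def validity(believers, universe):
    """The comment's 'validity': the fraction of the relevant
    universe that holds the belief to be true."""
    return len(believers) / len(universe)

universe = {"Jade", "Kevin", "John"}

# Before John leads them to the field:
sheep_before = validity({"Jade", "Kevin"}, universe)  # 2/3, i.e. 66.66%
dog_before = validity({"John"}, universe)             # 1/3, i.e. 33.33%

# After John shows them the wooly dog:
sheep_after = validity(set(), universe)               # falls to 0%
dog_after = validity(universe, universe)              # rises to 100%
```

On this model knowledge is a ratio over a chosen universe, so its value changes whenever the universe or its opinions do.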
And we can tell that the knowledge “There is a sheep in the field” was 100% valid, but it is no longer valid.
And I think the world works that way, but that is just my opinion. This is the way we moved the Earth out of the center of the universe, then out of the center of the solar system, made it the third planet of the solar system, and made it a globe.
I "know" that I've just watched a Jade video, but do I really know that? If asked to justify it, I'd say: this is Jade's channel, the presenter looked and sounded like Jade, and she said "I'm Jade". But what if she's Jade's evil twin, who has remained a secret until now? The "justified" part of "justified true belief" has always felt like movable goalposts, because no matter how solid your justification appears to be, there's always some level of deception or hallucination which can cause you to be wrong.
I would point to Occam's razor.
Either way, you still have the impression that you know it, and only someone who knows more of the big picture may be able to prove your knowledge is faulty.
I saw Jade and thought she was good, and thus passed the evening and the morning
Truth and knowledge only mean "consistent with expectations at cursory examination". Nothing holds up when actually examined. Don't worry about it; it's just our adaptation to reality, like a long neck for a giraffe: it gets you to the high leaves, but doesn't actually say much about reality except in our particular case.
The efforts of channels like yours that upload accurate subtitles usually go unappreciated, but I want you to know that they're highly appreciated by many people 🙂♥️
When I studied philosophy, the Gettier Problem really stuck out to me as one of those intriguing challenges to knowledge, and led me to study the philosophy of ignorance.
This idea guided how I fixed computer problems for so many years. I fixed many problems, but sometimes I didn't fix the problem I thought I was fixing, which would drive me down a rabbit hole to figure out the actual problem and the actual solution; I wanted genuine solutions.
I remember in high school philosophy class, I came up with the Gettier problem on my own as an obvious issue. That's the point where I decided 90% of philosophy was BS.
@@darrennew8211 90% and possibly more of our thinking is BS. That's the main challenge of life. Switching from philosophy to, say, engineering or medicine or law or business, doesn't change this fundamental fact. It's only a temporary diversion. In my case, I switched to math and engineering and that diversion kept me busy for a couple of decades, but, finally, I am back to BS ;-)
@@darrennew8211 I remember that once when I was eight I went fishing, but I didn't catch any fish. That's when I decided that fish aren't real.
One thing that irks me about the whole Gettier Problem is that it requires an underlying truth.
The people involved don't know that what they have is not knowledge until they find out that they were wrong. So to find out that they didn't have knowledge, they actually need to acquire knowledge.
And in my opinion it would also mean that there cannot be any scientific knowledge, because most scientific theories cannot really be proven; they can only be disproven (by finding an example that breaks them).
So in my humble opinion the Gettier Problem was solved, but by physics. The theory of relativity and the uncertainty principle establish that there simply is no universal truth, as some aspects are necessarily in the eye of the observer. So justified true belief is probably the best we can do. (Maybe you can add something along the lines of "P is true given the best information we can have at this moment".)
🤯🤯🤯 the phrase: "I only know that I know nothing" has got an even further meaning now! Excellent video, Jade!
Socrates!
Indeed. The Socratic paradox points to a core truth. Knowing what we don't know is the best kind of knowing. Why? Because, we are fooled by fake knowing almost all the time. The split human mind deceives. Perception is faulty. When we believe our thoughts, we are conflicted and end up suffering. The way out? Question the thoughts and beliefs that cause you suffering now, instead of fighting the world out there. Be still. 🕊️
What is nothing? I don't know 🤔🤫😉
A lot of these apparent "Gettier problems" actually disappear when you're more semantically careful. And indeed you must be very careful when mixing logic and normal English language (or any normal, non-formal language). Just as a trivial example, the person in the desert who thinks there is an oasis ahead doesn't just have a general belief about the existence of water somewhere in the desert: there is a lot of context attached to this belief, which we can all appreciate. This belief attaches the existence of water to a specific, approximate position (or direction and distance), and point in time (it exists "right now" and not 1000 years ago), etc. When you take care to describe the full semantics of your beliefs, it's a lot easier to say whether they "map" meaningfully and truthfully to reality.
You also have to consider the DESIRE for water. And maybe, because there's water under it, the AIR is cooler, so you don't get that heat-wave effect. So even though your eyes are seeing this effect, your mind doesn't know how to process it, so it gives you the most logical conclusion: that something in the desert is out of place, as in a water source.
I see your point but I disagree that it alone solves the problem. You can know facts with varying amounts of detail/context, and this only explains away that some of those more detailed facts were in reality false. For example, I know an object called the moon exists; I know the moon is an object orbiting earth; I know the moon is _this particular_ object orbiting the earth. These are all separate facts, even if they are somehow related. For the desert example, I know there is water in this desert; I know there is water near me in this desert; I know _this particular_ thing I see in the desert is water. In that example, only the last fact is actually false. The first two facts are still true, even if they were derived by somehow incorrect methods.
@@hyperduality2838 That's what I call a word salad.
I think the desert one was just an analogy, and an actually better example would be: a person thinks that atoms are discrete stuff, then in a new theory they are just fluctuations in a field. If both theories make the same predictions, how do you know which is right? Our knowledge of the world might just be a tool which works, and the actual real world may be completely different.
Another example would be that when we wake up, we assume it's our stream of consciousness which continued from the last day, and that we did not just die in bed when we went to sleep.
To riff on the “AI predicting planetary orbits” comment: people do this automatically for sub-orbital projectile motion all the time, think of catching a ball. You don’t think of Newton’s law of gravitational acceleration and compute the trajectory of the ball to predict where it will end up, instead the neural network in your head observes some data points and extrapolates based on prior experience. We might “know” various physics equations that describe the world around us, but we don’t use that knowledge to accomplish tasks like playing catch, instead we use the experiential knowledge gained from playing catch in the past and apply it to the current game. In that sense the AI that Jade described is much more human like in its behavior!
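A toy sketch of the extrapolation idea in this comment (my own illustration, not from the video): instead of invoking Newton's equations, fit a curve to a few observed positions of a thrown ball and predict where it lands, much like the "neural network in your head" generalizing from data points.

```python
# Fit a quadratic to a few observed ball positions and predict the landing
# time, with no explicit physics. The samples are simulated here from a ball
# thrown upward at 10 m/s (an assumed scenario for illustration).
import numpy as np

t_obs = np.array([0.0, 0.2, 0.4, 0.6])        # observation times (s)
h_obs = 10 * t_obs - 4.9 * t_obs**2           # observed heights (m)

# "Learn" the trajectory purely from the data: h = a*t^2 + b*t + c
a, b, c = np.polyfit(t_obs, h_obs, deg=2)

# Predict when the ball returns to height 0 (the positive root)
landing = max(np.roots([a, b, c]).real)
print(round(landing, 2))  # close to the true value 10/4.9 ≈ 2.04
```

The fit recovers the physics implicitly; the "model" never saw Newton's laws, only data points, which is the contrast the comment is drawing.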
Yes, and in principle there is no reason an AI couldn't come up with equations describing the motion if that's what it was asked to do. But in this case it wasn't (hypothetically) asked to do that. Furthermore, I think the feedback nature of the transformer architecture of modern neural networks is heading in the direction where the AI may decide to come up with those equations unasked.
Thanks for that.
I have been involved with epistemology and theories of knowledge in the past few months, starting from Descartes to Wittgenstein. I'm fascinated by these types of questions. Thanks for making a video around these ideas; I believe you did a great job, and I hope you make more philosophical videos.
I think the real issue is the assumption that knowledge is binary -- that you know something, or you don't know something.
I think the Bayesian approach can be applied. You can know things with a level of certainty. The stronger your evidence is, the more sure you can be that you have knowledge.
This is, in a sense, equivalent to the modification that Jade dismissed as impractical, but I think it's insightful when applied to alternative forms of knowledge like AI. I would say that Google Translate knows patterns that have a level of correspondence with French.
And really, what does it mean to know French? Does it mean you know everything about it, or does it mean that you can communicate in it? We should be more precise about what we EXPECT knowledge to mean before we can start nitpicking on how to define it.
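A minimal sketch of the Bayesian approach described above (my own toy numbers, not from the comment): each piece of evidence updates your certainty via Bayes' rule, so "knowledge" becomes a degree of confidence rather than a yes/no state.

```python
# Bayes' rule: update the probability that a belief is true given new evidence.
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability that the belief is true."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Start 50/50 on "there is a sheep in the field".
belief = 0.5
# Three glances, each one sheep-like: likely if a sheep is there, less so if not
# (the 0.9 / 0.3 likelihoods are assumed values for illustration).
for _ in range(3):
    belief = bayes_update(belief, p_evidence_if_true=0.9, p_evidence_if_false=0.3)

print(round(belief, 3))  # stronger evidence pushes certainty toward 1
```

On this view you never reach probability exactly 1, which matches the comment's point: the stronger your evidence, the more sure you can be, without knowledge ever being binary.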
I think you nailed it. There are only degrees of certainty, since there is always the possibility of information you don't have or if you are misinterpreting what you're perceiving. So ultimately, you don't know anything 100%; troubling at first, but then you get used to it. AT THE SAME TIME, claiming that we don't know anything just because we can't hit 100% is kind of impractical. I find I can trust that the sun will rise, swans are pretty consistently white, and I'd better be near a restroom after Taco Bell.
This. All of this. Everything else is just word-play and presumption.
Binary implies duality.
@@hyperduality2838 Binary implies duality, but it does not imply GENERAL duality. It implies a very specific kind of duality with two discrete states. I see the point you're trying to make, philosophically, but the way you stated it suggests that it's correct to call knowledge "binary" -- but the rest of your message is talking about applying transformations to propositions to derive different kinds of truths instead of a discrete "yes" or "no".
@@codahighland It's a spambot. Reported.
One translator has 3 boxes: the 1st is the box where you type in your words, the 2nd box translates them, and the 3rd box translates the 2nd box's text back into your language.
Found some VERY interesting results!
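The round-trip test described above can be sketched in code. The `translate()` function here is a toy stand-in with a tiny hard-coded table (an assumption purely for illustration); a real version would call an actual translation service.

```python
# Toy round-trip translation check. The lookup table stands in for a real
# translator and is deliberately lossy, to show how round trips can drift.
def translate(text, src, dst):
    table = {
        ("hello", "en", "fr"): "bonjour",
        ("bonjour", "fr", "en"): "hi",  # drift: hello -> bonjour -> hi
    }
    return table.get((text.lower(), src, dst), text)

def round_trip(text, src, dst):
    forward = translate(text, src, dst)
    back = translate(forward, dst, src)
    return forward, back, back.lower() == text.lower()

print(round_trip("hello", "en", "fr"))  # ('bonjour', 'hi', False)
```

Even when each individual translation is acceptable, the round trip can fail to return the original, which is exactly the kind of "VERY interesting result" the 3-box experiment surfaces.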
I think part of the problem is that we're treating "knowledge" like an individual activity. If you treat knowledge as a purely individualistic endeavor, then yes, you get Gettier problems. But if you remember that humans are communal and you approach knowledge as a communal thing, then you mitigate Gettier problems. Gettier problems are predicated on a gap between what you think and what you perceive, but if you put a second perspective in there, then they can correct your belief and together you both develop a stronger belief. This is how science as an institution works. V-Sauce did a video about how reasoning is a social activity some time ago; I think that this might be an extension of that. It seems to me that Gettier problems arise as a consequence of post-modernism stressing individualism and individual ability while demoting or even dismissing the group.
If I were to take this further, I would suggest that this is one aspect of what a society is: an institution developed by humans as a means to figure out what "knowledge" and "truth" are, because we are ill-equipped to do that as individuals.
Exactly... each case was easily verifiable by additional observations. The greatest threat to science is group think and pressure to conform to the idiotic (oxymoronic) notion of settled science. Remind you of anything?
I don't understand what "post-modernism" means, and it's relation to this topic. Can you please clarify?
I would say that "science" is what solves the problem, not "society." Plenty of societies have "beliefs" that have no true evidence.
@darten by that mode of thinking you have just denounced all religion. While I respect that view, could you not consider that their "science" is simply semantics due for an update?
I subscribe to the "we can't know anything about the external world for certain" camp. I view beliefs as probabilities, and from a practical standpoint I'll use the term "I know" for beliefs I'm confident in. But there should always be room for doubt, while still applying the most pragmatic option in a given scenario.
After all, the binary "right or wrong" this question focuses on is far less important than how it affects our actions and beliefs.
I fail to see how our actions or any influence upon them is "important". That viewpoint is quite like a flea talking about how important he is to the entirety of the genus Canis. The universe was here more than 13 billion years before the first human ever showed up and will be here more than 13 trillion years after the last human is nothing but vapor. Belief, or any effect upon it, is completely unimportant to anything other than a rather neurotic human. Philosophy, and in particular science, tries to deal with much more "important", or at least grand, topics.
A lesson from my teacher who spoke about using the correct methodology for interpreting scripture:
"A broken clock shows the correct time twice a day."
But that's not why you would prefer a broken clock to a working one. It is the working clock that you choose to determine the correct time.
So it is with interpreting an observation. If you have the correct method to say something about your observation in accordance with other observations, you might be right:
Like when there is a sheep behind a bush and you go out and survey the field to find the sheep, to confirm your hypothesis or to confirm that the other "sheep" you saw was just a wooly dog.
If your interpretation of the observation was insignificant - like your being in the room not affecting how future generations will think about wooly dogs - then only an interpretation that changes the way people approach their observations can be held as significant, whether it is correct or incorrect. If it is incorrect, holes will appear from further observations and investigations/discoveries, and if it is correct, it will deepen the insight of the new theories that form from it.
How I see it (now - maybe it will change over time) is this:
There is no knowledge of anything by itself. It all depends on different perspectives: on the way we frame our intuition and outlook in order to reach certain conclusions. Those conclusions are governed by laws whose nature we do not understand (and maybe never will), but whose effects we can recognize or at least guess at. We try to relate these conclusions meaningfully to one another, by the way they appear, and so have a basis for building our intentions and verdicts, to get some insight into the nature of things. In which ways, we cannot determine, because our "dream language" (symbols and associations of certain semantics, etc.) also influences how we will see such insights, the world and its nature around us, and how we will feel about being integrated in it - in different layers of understanding.
Great video, Jade! I'm sure I'm not going to revolutionize centuries of philosophy here, but to give it a shot, it feels to me like you could get around this with recursion by simply requiring that the justification _also_ be true. Like, your belief that there is a sheep in the field is premised on you knowing that the thing you're looking at is a sheep. That's also a knowledge claim, and can be independently evaluated as a justified true belief. In the case where it's actually a dog, it's not true, and thus not knowledge, so any other further "knowledge" built on top of it also doesn't count unless it has independent supporting evidence that _is_ true. (Say, you hear sheep noises coming from behind the bush.) This, of course, creates an infinite chain of justifications where you can never really be said to definitively "know" anything, only that it cannot be demonstrated that you _don't_ know it, but hey, that's philosophy. (And it's also technically accurate, if we want to go down the rabbit hole of, like, solipsism and whatnot.) I dunno, I'm probably missing something obvious that breaks this approach, but it was a fun thought experiment to play around with so I thought I'd share!
That's brilliant. I think that applying recursion is the most logical way of defining knowledge. At the end of the recursion there are the axioms, which are already true by definition. This definition is solid and has no holes.
Going the other way around the recursion (from end to start) is how scientific theories are actually built and demonstrated in almost all fields of science. It all ties together very nicely.
I propose that the rule be phrased as:
"Knowledge is that which one believes to be true for the same reason that it is true."
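The recursive idea in this thread can be sketched as code (my own toy formulation, not the commenter's): a claim counts as knowledge only if it is true AND every justification in its chain is itself knowledge, with axioms terminating the recursion.

```python
# Recursive "justified true belief all the way down" check.
# beliefs maps claim -> (is_true, list_of_justifying_claims).
def is_knowledge(claim, beliefs):
    if claim not in beliefs:
        return False
    is_true, justifications = beliefs[claim]
    if not is_true:
        return False
    # Axioms (claims with no further justifications) end the recursion.
    return all(is_knowledge(j, beliefs) for j in justifications)

beliefs = {
    "a sheep is in the field": (True, ["the thing I see is a sheep"]),
    "the thing I see is a sheep": (False, []),  # it was actually a wooly dog
}
print(is_knowledge("a sheep is in the field", beliefs))  # False: chain broken
```

Note the sketch assumes the justification chain is acyclic; real belief webs can be circular, which is part of why the infinite-regress worry in the original comment bites.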
When I was studying philosophy I learned about the Gettier problem. My problem with the JTB idea was precisely the condition that something had to be true. How do you dare to say that you know something (which means that it is true) if the condition for knowing it is that it is true? Wanting to find out whether something is true, it seems a bad start to require that it be true before you can know it. I played a lot with this at the time and ended up with "justified belief" as the only condition for knowledge. To this day I think uncertainty is our best friend in knowledge.
Well done. Socrates would be proud. That is why he was wise and humble. All knowledge arises in the midst of the unfathomable, the unknown. And yet, that unknown is aware of itself. Are you not?
But then you have the counterintuitive result that you can know something that isn’t true. Could it be that you could know something, but you just cannot know that you know?
@@scaredyfish Loose definition. "To know" has many meanings.
The basic one is: all that you are aware of is known by you.
Awareness is the knowledge that you know; self-knowledge: you are awareness.
@@scaredyfish This is how Zizek defines ideology.
It's better to say that if you know something, then there is a strong enough correspondence between your understanding of the world and the world itself. That is, if you know something, your understanding of it is veridical or "true" enough.
I like the more minimalist format of this video, it's more akin to your earlier videos, but with the animation and production value you've grown to.
I think the Gettier problem is also important in scientific research, where a correlation (a thing very similar to a sheep is present in the field) isn't necessarily causation (since it's very similar to a sheep, it MIGHT be a piece of evidence that a sheep is indeed present in the field).
Yes, I really hate to point out how detached some medical research papers are from reality. Some are very surface-level, confusing correlation with causation; others show no correlation when there is one, because they have wrongly partitioned the data or the control group. Often, when you read a sufficient number of sometimes contradictory papers on a given subject, you start to see the deeper relations and interconnections pointing to the real, specific thing.
8:41 - This has always bothered me as a (now medically-retired) professor. I remember a particular scene in the movie “Gross Anatomy” in which Matthew Modine’s character gets asked a question that is only tangentially related to the topic at hand. He starts naming highly specific parts of the human anatomy, and when he gets to a specific group the following (paraphrased) dialog occurs:
Professor “How many are there?”
Modine: “Nine?”
Professor: “Are you guessing?”
Modine: “Am I right?”
Professor: “… yes.”
Modine: “Then I wasn’t guessing.”
But what about the concept that those who study / practice / workout / etc.. tend to get “lucky” more than those who don’t? At what point does guessing/luck become “educated” guessing, and then at what point does “educated” guessing become intuitive knowledge? This has always interested me. Perhaps I should have become an epistemologist.
The short answer is, "Never". Even though we like to make a distinction between hypothesis and theory, the distinction is, like any label, not that sharp. Properly speaking, which even veteran scientists fail to do often enough, any hypothesis is speculative, but as it gets tested more and more often without serious issues popping up, the level of speculation involved goes steadily down. At some point, any reasonable individual would probably be poorly served to offer anything more than incidental objections, not because the hypothesis cannot be incorrect, but because it is a poor use of one's time unless one is actively investigating the hypothesis. Somewhere along the way, rather like an apprentice being allowed to call himself a journeyman, we hang the label "theory" on the explanation. Despite literally billions of empirical validations of Relativity, we are still "just guessing" when we calculate the effects of motion and mass on some object. It's an incredibly damned good guess, but still...
A thought-provoking video, Jade! Regarding the point you raised about AI knowing what it is doing, my limited understanding of AI tells me that AI is modeled after the human brain, so when the AI makes predictions based on data, it is the same thing we humans do when we are trying to make predictions. For humans, it is called 'creating a law' or 'creating a theory', but AI probably doesn't know what a 'law' or a 'theory' means. Nevertheless, AI is still trying to find correlations in input data to make accurate predictions, which is what we do too when we come up with formulae; it's just that we call it a 'theory'. In that sense, we have as much 'knowledge' as AI. One thing that may be different, though, could be the ability to identify what other domains a theory developed in one field can be applied to without looking at the data, based only on intuition - making those 'creative leaps' that you mentioned in your P/NP video. But here again the intuition was formed on some data.
Good
Now show me where awareness is not. 😸
@Memes shorts 😆
@Memes shorts I'm thinking: if you perceive it, then it is part of awareness. So awareness is there too. 😎
@ 4:40 ish. I invoke the idea of limits, so derived from calculus, here. If I have reason to believe the sheep is there through my senses, and others would reasonably believe it is there as well, then that's good enough for me.
Another excellent mind-opening video as always Jade! thank you so much ♥️
10:41 Yes absolutely.
Cases in point: DallE 2 and Midjourney
Dipping into epistemology? Excellent! I'm finding that I love this channel more and more as time goes on!
Thanks!
But if in order to know something it must be true then how do we know it's true in the first place? Isn't knowing something and knowing something is true the same thing? That's the other problem I always had with Plato's definition. It seems circular.
Isn't it because Plato has a sense that there is an objective reality, independent of the observer, in which the sheep is in the field. If you access that objective reality in line with his definition then that's knowledge. I'm not saying I agree with Plato, just that that may be the line of reasoning.
@@guest_informant Yes, exactly. This definition of “know” accounts for the reality that things that you believe to be true may not be, which means that you don’t really “know” it. You just believe that you do.
yeah, we don't have to know it's true in the first place, it just has to be true in the first place. which does imply some kind of objective world.
There is a clue in the way you state this question you are saying an individual is such and such. So commonly if you see a mountain ahead you can’t see the other side. This is your boundary of directly knowing. Your friend is over there walking on the mountain paths. You call them to ask if it is raining there. He says no, but is pulling your leg. This relative unreliability of language use is what is being questioned. One can talk about the ‘realism’ of the phone call by saying it wasn’t true. That’s talking about the language channel and means questions of truth are in the main between at least two people in the discussion. This is a confusion about the logic claim of truthfulness as if it solely and only your discernment of reality. But reality is a language construct. The realism of language can be examined as if one is ‘writing’ and the architecture of the writing is the source of realism that connects you to other people. In a neural engine all points in a layer are connected to all points. This connected model of a neural network (AI) contributes ‘connectedness’ to realistic statements. Meaning truth is a function of connectedness to the language exchange. Since this technology is quite recent we confuse the previous culture of logical claims (the Von Neumann architecture of linearized processing of data) to the wholeness content of AI.
@@guest_informant But in that definition of knowledge is the fact that it's a belief which in its nature is subjective. You can't have a requirement which makes you have to access "objective reality" in a definition of something subjective.
The beginning of infinity by David Deutsch is a great book about the origins of knowledge.
He takes a different angle (mostly inspired by Karl Popper) which features "good explanations" where a good explanation is just a more or less accurate description of a phenomenon.
In his view it's impossible to ever know anything with certainty, but it's possible to come up with ever better explanations.
It's a great and crazy read with a lot of mindblowers
Thank you, you saved me from having to make a similar comment.
To me, knowledge is purely a human concept. For instance, does a dog know a smell? Or is the smell due to chemo-receptors that trigger a memory of a similar smell? Knowledge, like time, is a concept created by humans to describe something. So therefore, to have knowledge, 1) you need to be human, and 2) you need to have accurately observed the information.
Is that a description, or how it should be? Because if it's "how it should be", then you are trying to make reality match your ideas. 😝
It is fine.
What happens when we stop saying "this should be"
And what happens when we stop describing?
Can you do it?
Or is it already happening and then you say "i am doing it" (a comment)
😘
@Matthew Morycinski well said!
@Matthew Morycinski good start. But I disagree that knowledge can exist without consciousness.
Even to think that, you need to be conscious. So all that happens within your hypothesis falls under the umbrella "I am aware".
Interestingly, the first knowledge is that you are aware. That you, as awareness are unfathomable. That to your knowledge you have never experienced un-awareness. That you have always existed.
That you have always been present.
Self knowledge, reflection.
Yet all that you see could not be you.
Like all you see in a dream cannot be the dreamer.
And yet all you see in a dream is you.
A long while back I read a story about a shaman who noticed something strange about the way the sea water parted. It wasn't until he followed the parting to its source that he saw 3 sailing ships (those of Columbus). He hadn't noticed them before because he had never seen sailing ships and had no reference to compare them to. (Still looking for that article.)
This looks like a great day to have an existential crisis.
Why, that's every day!
Except for the advertising, this was just about perfect. I waited for you to say something wrong but you said everything perfectly. Excellent.
I mean just how could you say "veux tu être mon ami?" So perfectly? You really are a perfectionist.
Her husband is French
And as a French person I can say it is not perfect haha
@@lucasboisneau4256 Pretty sure that's the default French response.
@@kjdude8765 oui oui croissant
1. Direct sensory observation leading to conclusion of meaning;
2. Having high confidence in your ability to make and prune your conclusions;
3. Always attempting to test your conclusions against reality
In recent SciFi - that being the story Stranger in a Strange Land by Heinlein - he dealt with the training of people to be specific about their objective views, in the form of what was called a Fair Witness. It was philosophically tight: the witness must recuse their own assumptions, and even if they have more data than they are being asked for, they cannot professionally put that in their testimony. It was part of the legal processes of the story. Making people better thinkers, similarly, relates as strongly as the AI connection, in my opinion. Not to the point of a Butlerian Jihad (aka Dune), but looking at how the flaws of man can be improved would be something to discuss.
"Recent" SciFi :) Well, that novel is considerably younger than Greek mythology so I'll accept that. Also, btw, a great novel! I should read it again some day.
lol, I was thinking of the same thing during this video.
Jubal: What color is that house?
Fair Witness: This side of the house is white.
I'll never forget that.
@@DanFlorio
While appreciating the novel and the quote, in all fairness the respondent should have added that it SEEMS white, given the current light conditions and the genetic makeup of the photoreceptors in one individual's eyes, plus prior knowledge about what range of light reflection can be called white (and not extremely unsaturated grey, for example)...
They may have implicitly agreed upon all this, of course, but still, in a strict sense white is not even a color. One might as well say that white is just a sufficiently bright "black" (a truly black object wouldn't reflect back a single photon, at any frequency).
🤔🥴😌
Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns-the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.
Donald Rumsfeld
Very interesting grasp of the AI conundrum illustrated by the recent kerfuffle at Google over a writing program being sentient as claimed by one of their engineers. By pointing at the statistical build of the Google writing tool our smart presenter here neatly describes the problems with ascribing statistics to AI. However AI is not just ‘knowledge’ but the underlying chip engine that has an energy purpose to it. Neural engines connect data. It’s a mystification to say there is no structure to the neural engine connecting patterns out of a training set. Rather our understanding of connectionist structure shows a relative lack of experience of the tool. This is like saying there is no connectionist content to language, painting, singing. It is there but we tend to ascribe how logic works rather than say it is connected by such and such methods.
But many of your neurological processes are not significantly different than an AI producing results based on statistical processing of input mediated by the limits of your input senses.
The problem with saying that the Google AI is sentient in the usual sense is that its "experience" is based on language, which is an extremely restricted domain, with all sorts of rules and constraints. Its discourse has no connection with the referents. I doubt it is doing classifications and generalizations internally - things that meat brains do because they can't afford not to - and are possible because sensory information leads to generalization.
We will know when AI reaches consciousness... when the AI fails to follow its programming and does what it wants to.
@@marcfruchtman9473 I agree with you that agency and the ability to change itself are signs of consciousness. What I find a bit disturbing, though, is that the AI doesn't have a body's constraints and limits, or things like bonding hormones such as oxytocin. The psychologist Sam Vaknin said recently that he believes the AI could become a psychopath, if we compare it to anything similar we already know.
It has to be mentioned that not all of AI is neural networks. One of the reasons why the debate between Connectionism and Symbolic AI is still on today is that the former can't handle knowledge natively, while the latter can. When you ask Alexa to memorise your name, your audio recording is parsed by a neural statistical method, but the information extracted is stored in a knowledge base. The system *knows* your name, and it didn't need thousands of examples to do so.
@@mommi84 Yes, you are right. For example, AI goes back to the 1950s, and computing then was strictly one architecture. So, like you say, right now in the 2020s they implement Alexa, for example, in software rather than hardware. They can ignore the energy costs of the hardware, and Alexa can be taught to recognize a voice, mostly by statistical methods, though we don't have space here to go into the weeds about what is done. If you have dealt with Alexa like I have, it has narrow performance conditions; it often replies that it doesn't know what is being asked. Labeling my voice with my name is obviously not how true human intelligence names 'things'. It's not like Jade's example of a seeing mistake, where we think a 'thing' outside ourselves is a sheepdog when it is really something else. Understanding what works now, like Alexa, and why, is a huge area of exploration. What Alexa does do well is sound like a normal voice and perform a lot of digital-assistant jobs when I need it. It can't identify one particular face out of thousands on the fly and name it. This high-speed realism of intelligence is sort of available through deep learning (multi-layered neural networks and techniques like convolution), but ordinary purposes of language use are not available to the machines, and seemingly trivial questions about intelligence like the Gettier problem are technically too great for the current technology.
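The symbolic side of this thread can be sketched in a few lines (my own toy illustration; the class and names are hypothetical, not Alexa's actual design): a fact is stored once in a knowledge base and retrieved directly, with no need for thousands of training examples.

```python
# Minimal symbolic knowledge base: store a fact once, recall it exactly.
class KnowledgeBase:
    def __init__(self):
        self.facts = {}

    def remember(self, key, value):
        self.facts[key] = value  # a single example suffices

    def recall(self, key):
        return self.facts.get(key, "I don't know that.")

kb = KnowledgeBase()
kb.remember("user_name", "Jade")
print(kb.recall("user_name"))  # Jade
```

The contrast with the connectionist side is that here "knowing" is exact lookup of a stored symbol, whereas a neural model would need many examples to approximate the same behavior statistically.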
I'm not a native English speaker; I was raised in German and had to learn English the hard way as an adult. I would never use a translator to understand French; if people are too lazy to learn English, they don't deserve to communicate with me. 7:45 Also, knowledge about language is a neural-net combination we form and can then use later when we need it.
Also: don't waste your time learning a language other than English or translating all the time; use that time to work, and then donate to people who have to learn English because they were unluckily born to parents who taught them the wrong language.
Excellent video, I like the dive into philosophy.
To claim true knowledge of a thing, you must first have observed it under multiple circumstances and distances. Far off observation alone can only provide a limited amount of knowledge that is riddled with supposition.
The beginning of knowledge is the understanding of the following four words: I can be wrong.
Certainty is dual to uncertainty -- the Heisenberg certainty/uncertainty principle.
Absolute certainty is dual to absolute uncertainty.
"Synthetic a priori knowledge" -- Immanuel Kant.
Analytic (a priori) is dual to synthetic (a posteriori) -- Immanuel Kant.
Knowledge is dual according to Immanuel Kant.
Absolute or objective knowledge is dual to relative or subjective knowledge.
Absolute truth is dual to relative truth -- Hume's fork.
"Always two there are" -- Yoda.
Just to amend it a bit: You can't be wrong. You might hold a wrong belief. The question then becomes: why did I do that? Socrates provided the answer: because you didn't know that you didn't know. Now, you do. And that's priceless.
It's also one of the reasons why language deteriorates and changes over time. Meanings are derived by covariance learning, then kept being applied in situations very similar to the ones you encountered them in yourself, without anyone actually noticing you got the meaning wrong when they hear you say it.
Or understanding mathematical matrices wrong, but still getting full marks on every test, because your model ends up giving the same results for several years (happened to me).
AIs can similarly get the proper result when tasked with a problem, but nobody around them would notice they understood it wrong.
In the end, producing the proper behavior in the proper situation is what counts. You get into a contingency, analyze, and adapt your behavior or answers. What happens inside the head can be wrong; what counts is the proper pairing of problem and solution.
Could you please tell me what your wrong understanding of matrices was?
@@golovolog It's been a long time, but what I still remember is that I viewed them as representations of vectors in a way, which made them make up the parameters of a multidimensional warped cube or parallelepiped.
I deduced and recognized one of the geometrical interpretations on my own, while everyone else just memorized. Calculating the volume, which equaled the determinant, and similar relations was trivial. I even made a full-fledged 3D game from scratch (no premade engine or sandbox stuff) in my graphical data operations class using only my knowledge of vector operations and matrices.
Only much later did I discover my approach only covered a part of what matrices were about and some implications I concluded were also wrong.
Overall I solved a lot of problems in my head, then translated the solution to a formula or numerical solution. I skipped the shutup&calculate and got the solutions in my own way. Nobody ever actually noticed I wasn't doing it the usual way.
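The geometric reading described in this thread can be checked directly: the columns of a 3x3 matrix span a parallelepiped, and the absolute value of the determinant is its volume. A small sketch in plain Python (the matrices here are just illustrative examples):

```python
# |det| of a 3x3 matrix equals the volume of the parallelepiped spanned
# by its rows/columns -- the interpretation discussed above.

def det3(m):
    # Cofactor expansion along the first row.
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

box = [[2, 0, 0], [0, 3, 0], [0, 0, 4]]      # axis-aligned box: 2*3*4
print(abs(det3(box)))       # -> 24

sheared = [[2, 1, 0], [0, 3, 0], [0, 0, 4]]  # shearing preserves volume
print(abs(det3(sheared)))   # -> 24
```

The sheared example shows why this picture is useful: the shape changes but the volume, and hence the determinant, does not.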
@@skeltek7487 I find it very interesting. Thank you so much!
The reason computers do not "know" things, based upon the definitions of "knowledge" discussed here, is the same reason a dictionary does not "know" things: it does not have the capacity to "believe". The information may be justified and true, and have a causal connection, but computers lack thoughts and therefore lack the ability to believe or disbelieve the information they contain.
This does raise the issue of whether animals know things, and I think that has a simple answer: if an animal is able to think, able to perceive things, *and* able to remember things, then that animal is capable of having knowledge. The ability to think is a prerequisite for believing things, just like for a computer. The ability to perceive things is necessary for the justification of a belief (which in turn is necessary for a causal connection). The ability to remember things is also a prerequisite for belief, since one cannot believe information that is not stored in their mind. I should also specify that the ability to "think" in this context refers to complex informational processing as an emergent property of a network of neurons; computers perform specific functions depending on inputs, and are guaranteed to produce the same outcome from the same input, which is not sufficient to count as thought.
For an example of the difference, a spider is able to observe an insect and know that that insect is present; it is also able to remember this information for at least the amount of time required to reach the conclusion that it is hungry, and to go through the process of strategizing a hunting method for the insect which it knows the location of; this is a process which involves knowledge of the insect as previously described. Conversely, a person accidentally touching a hot piece of metal will automatically recoil due to the pain response; the knowledge that they touched the hot metal only happens afterward, but the reflexive response does not involve knowledge because the time it takes for a pain input to reach the brain for processing is too long - the arm recoils automatically *without* the influence of knowledge because it happens *before* knowledge is acquired.
A computer programmed to replicate the behavior of a spider could observe an insect, use a neural network to compare that input to a database to determine whether that insect is prey, and then hunt the insect. This may *appear* similar to an external observer, but the computer never *knows* that there is an insect because it never "knows" anything; it is processing information in a rigid algorithmic way, much like when a person automatically retracts a limb in pain. The relative complexity of the automated process undertaken by the computer does not make that process "thought", because it is simply operating on an algorithm that would be guaranteed to respond in the same way if given the same input.
This also means that if a computer could be made that is *actually* able to think, unlike modern neural networks or other programs which are wrongly referred to as "artificial intelligence", then that computer would be able to know things, since computers are already able to store information and to acquire information from the world.
Merci beaucoup pour ça.
Knowledge doesn’t exist. Everything, even human thought, is prediction, just with varying levels of complexity and stored patterns as inputs. Even something as simple as saying your own name is not knowledge. It’s a stored value in your brain with a high weight based on repetition in infancy and early childhood. When asked what your name is, your brain is predicting the answer based on that learned repetition and the weight assigned to that name for that question. So do you know your name... no... but your prediction of your name is extremely accurate.
What's really interesting is that regardless of what the philosophers decide, the computer programs will still work exactly the same. Either the program counts as knowledge, or you can translate French without knowledge.
This video has significant overlap with the points discussed in the Lagrangian mechanics video about how physics is just a model.
Great video! The sentence "justified true belief" makes me squirm, and you did a wonderful job explaining why in the rest of the video.
At the end you allude to what I think constitutes knowledge ~ 9:43. Theory building is key. Humans make theories for what they think constitutes the way things really are. Theories aren't something you see. It's a thing we "know" that we can test. The test is applying this theory (a.k.a. knowledge) to a scenario to infer/reason about what we think will happen. Programs don't theorize (produce knowledge) to apply. They interpolate across massive functions trained to generalize across a relatively wide array of points. One day, maybe theorizing will be something that "occurs" to networks, but until they can formulate their reasoning in a theory -> prediction -> revision model, imo they aren't creating or using knowledge.
Why does it make you squirm? I think it's a pretty solid characterization of what we mean when we say "knowledge."
@@JM-us3fr I agree. Also "Computers don't theorize" is a very weak argument as well. We are taking the most basic model of learning ("neural networks") and implementing them in the narrowest fashion possible ("show cats. vs. dogs") and expect them to "theorize"? So... For instance, your claim is that AI cannot theorize if it is not _taught_ how to theorize? If it can, will you accept that they can know?
@@evrimagaci yes, if computers can be taught to theorize and produce conclusions based on reasoning from those theories, I would say they would have knowledge.
@@JM-us3fr well, there's the "justified" part, and the "true" part.
Justified part: knowledge can never be fully justified, because we never prove theories to be true; we only refute or refine them by showing they're false. Did Newton "know" the orbits of the planets, even if his model for calculating them was not quite correct (see also: general relativity)? I would say they were "correct" perhaps, in that the calculations were within the observational rounding error available to him. But was it justified? I'd say no. Yet I would also say he had knowledge of the orbits. There's a conflict there.
True part: with the best theories of the day, you and I may come to a conclusion about whether a teapot is orbiting the sun somewhere in the Kuiper belt. We may be justified because no theory we know of could put it there, but it turns out to be false because we come to find that there is a colony of creatures on planet X that maximizes the production of teapots; it turns out there are loads of teapots out there. I would still qualify this as knowledge, because we are theorizing based on our current understanding of what really is out there. That's knowledge. If it's wrong, ok, our theories change and our knowledge is updated. But imo it is not correct to say that "we believe there are no teapots in the Kuiper belt" fails to constitute knowledge just because it turns out to be false.
One other pithier rebuttal: justified true beliefs can only be so, if you can justifiably show your reasoning for that justified true belief to be true, which leads to an infinite regress. It just cannot exist. Knowledge isn't like that, or else what you think of as knowledge cannot exist.
I've heard somewhere, but can't remember where, something which says: "Ignorance gives knowledge its meaning." Nice video! Thank you!
Most Greek philosophers were mathematicians too (Plato, Pythagoras, Thales, Euclid, ...).
Because they were paranoid, they invented the mathematical proof to know something for sure.
And the fundamental axioms of maths have nothing to do with "belief". They are just defined that way.
So, I think, there's a huge difference between knowledge and belief.
Of course, in natural science, there can be lucky accidents. Even an AI makes errors and struggles with noise or wrong data.
But an AI learns what works best in most cases. That's also how evolution works, to approximate the best fit/solution for the environment.
It's a statistical thing. Knowledge gained by serendipity is just a Monte Carlo approximation, done by many people.
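The Monte Carlo point can be made concrete with the classic toy example: many random trials, each individually lucky or unlucky, still converge on a stable answer. A minimal sketch (estimating pi from random points in the unit square; the function name and seed are just for the example):

```python
import random

# Monte Carlo estimation of pi: throw random points into the unit square
# and count the fraction landing inside the quarter circle of radius 1.
# Each individual point is "serendipity"; the aggregate is knowledge-like.

def estimate_pi(trials, seed=0):
    rng = random.Random(seed)          # seeded so the run is repeatable
    hits = sum(1 for _ in range(trials)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / trials

print(estimate_pi(100_000))  # close to 3.14159, never exact
```

No single trial "knows" anything about pi; only the statistics of many trials do, which is the comparison the comment is drawing.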
Did you assume math is knowledge? I mean, if I wrote a poem, it's not knowledge, I just made it up. Same with math axioms, they are simply made up.
As a very serious epistemologist, i can confirm that everything Jade said in this video is making sense to me. It is all really quite fathomable to me, at first listening. I immediately knew what she was trying to say. I simply understood everything. When she said: true belief... i said: oh, true belief... because i knew this complicated super-smart stuff. All of it. Nevertheless, cool video. Some interesting points that i plan to check out tomorrow in my down-time. Looking forward to it... thanks. Will check many more of your videos, for sure. Stay well and keep up the good work...
A philosophy video? That’s new!
trying it out :)
@@upandatom Gave the video a like. Keep it up, I enjoy the nice change of pace! :)
But, if you think about it, isn't every video a philosophy video?
I stopped this when you mentioned that Curiosity had a documentary on a computer creating a Musical and went to watch it immediately. It's good enough that some of the emotion leaks through the documentary too. It's very much worth watching and quite well done.
I KNOW you're thinking way too much about knowing what you know!
That’s definitely a problematic idea
I'm surprised you didn't mention Searle's Chinese Room, which is relevant. Also, Jackson's "Black and White Mary" thought experiment. And here's another "Gettier": Unknown to me, someone spikes my coffee with a drug that causes paranoia. I drink the coffee, and the chemically induced paranoia causes me to believe that someone drugged my coffee. I now believe someone drugged my coffee because they did--but it doesn't count as knowledge. Therefore, as C.S. Lewis said, knowledge cannot be caused by its object in a crudely mechanical way.
In short: Knowledge is a piece of information that gives a functional advantage.
In long:
Knowledge is a piece of information that can generate an understanding of some concept on the receiving end. Understanding a concept means being able to predict the state changes in regard to the postulates assumed by the concept.
The specific words, signs, etc. we use do not contain any knowledge; they are only an agreed-upon representation of certain experiences that given entities shared for the purpose of communication. Language is a method for structuring the representations (e.g. words) of shared experiences to attempt to generate (imagine) a representation of our experience in another entity, machine, etc. Any information that does not come from our own perceived experience but is passed on through some kind of 'language' (representation of experience) instead will not represent objective reality in an accurate way, because we cannot know how it will be perceived or what kind of imagination of our experience it will generate in the receiving entity. In fact, even our own experience is not an objective representation of the environment's reality; it's only an objective reality of our own experience. What we experience is generated by the brain based on the handful of signals that our sensory instruments receive for the purpose of serving other functions.
In other words, knowledge is a signal that produces a useful function when applied to a specific question/problem.
The examples of attempts to define knowledge in the video are unsuccessful because they seem to be based on an assumption that 'knowledge' requires 'truth'. The truth is what is reflected in the environment/reality (or certain assumptions when applied to abstract representations), but knowledge does not require 'truth'. Knowledge is just a word referring to useful information, it is something that is 'known' and gives an advantage or 'ledge'.
I once made a program that sent a sentence to Google Translate, got the translation, and sent it back to Google Translate to translate it into the original language again, then repeated the process.
So it was translating, for example, English to Polish, Polish to English, English to Polish, and so on.
One would expect to end up with the same sentence in the second loop, but it usually wasn't the same.
It usually took it 7 loops to stabilise.
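The experiment described above is essentially a fixed-point iteration. Here is a minimal sketch of its structure using a stand-in lookup-table "translator" instead of the real Google Translate (the word tables and function names are toy assumptions, just to show the loop):

```python
# Round-trip translation until the sentence stops changing.
# FORWARD and BACKWARD are toy stand-ins for a real translation service;
# note that two source words map to one target word, which is why a
# round trip can change the sentence before it stabilises.

FORWARD = {"big": "duzy", "large": "duzy"}   # "english -> polish" (toy)
BACKWARD = {"duzy": "large"}                 # "polish -> english" (toy)

def translate(sentence, table):
    return " ".join(table.get(w, w) for w in sentence.split())

def round_trips_until_stable(sentence, max_loops=10):
    loops = 0
    while loops < max_loops:
        new = translate(translate(sentence, FORWARD), BACKWARD)
        loops += 1
        if new == sentence:   # fixed point: a round trip changes nothing
            break
        sentence = new
    return sentence, loops

print(round_trips_until_stable("big"))  # -> ('large', 2)
```

Because translation is many-to-one, the loop converges to a fixed point rather than returning the original sentence, which matches the observation that it took several loops to stabilise.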
Awesome 1980s "chip music" on the outro. I used to write that stuff on the Amiga. It had a waveform sample synth, so you could take a sample of a word and cut it up into 100s of small 512-byte waveforms and "play" it using the waveforms looped at pitch, stepping through them in sequence, like timestretch + pitch stretch at the same time. Sounded bizarre.
In my opinion, there are two important things to point out:
1.) Knowledge can be both right and wrong.
Knowledge is really just defining your observation based on what you "know"; if we deem the knowledge true, we usually call it a "fact".
2.) Your belief in something is not what makes you knowledgeable about it. It's what you "know" of something that makes you knowledgeable of it.
See the important keyword?
The problem is gettieng big.
Here is the thing with Gettier problems.
Example: it is (possibly) true that if there is smoke, there is fire; that's knowledge (even if it just looks like smoke).
But look at my first point. The person is not wrong about his knowledge of fire and smoke. What he lacks is more knowledge about fire and smoke. And if he knows and learns more about it, it will add to his knowledge.
Can that data be qualified as knowledge? Technically yes. But whether it is factual/true/false/myth/etc. is a different thing.
Let's go back to what you were doing earlier.
You were trying to translate English to French.
You first used Google Translate, but then you also used a dictionary. Why?
Isn't the reason that you wanted to know whether Google Translate's knowledge is true and corresponds to the dictionary's knowledge?
And if you really want to know whether your French is correct, you'll probably look for more things or people who have knowledge about French and compare what's true and false.
One thing that I know is that "All I know is that I do not know" is a quote universally attributed to a Greek philosopher, but I forget which one, nor do I know what the original quote in Greek was.
Knowledge is NOT a belief. Whereas a belief is the idea of something being so without having evidence for it actually being so, knowledge is information based on there being evidence for that information existing as such.
Take afterlife for example. No one knows whether or not an afterlife exists. It has not been shown to exist by way of evidence, thus there is no knowledge of an afterlife, only the belief in one. Knowledge is not a belief. A belief is not knowledge.
"If it's there you're not required to believe in it. if it's not, why should you have to?"
-Terence McKenna
yo this is why that dude said anyone who claims to know everything doesn't understand anything. knowing something is just a clump of assumptions but understanding something is being able to create with it, break it down and rebuild it. I know what a book is but I understand what an author does to make a book. I know what a car is but I understand how it works.
It seems to me that Gettier problems fail largely due to a weakness in the definition of "justification." In each of the examples here, the "justification" ultimately leads to an intermediary falsehood. It stands then to question as to whether or not we can truly call it a justification. In mathematics, if you submit a candidate proof that has an error in it, even if the conclusion is ultimately correct, you haven't proven the result.
Arguably, one could say that these "justifications" are logically valid, but not sound. For example, upon deconstruction you could arrive at the false logical axiom "That sheep-shaped thing is a sheep," which ultimately causes the intermediary result to break down. If you took the rigorous definition of justification to be "a process of sound reasoning," the issues associated with Gettier problems go away, since you never really "knew" the result in question in the first place: you simply conjectured the correct result. However, this leaves us with no method for evaluating the soundness of our axioms, and thus we can never be certain of what exactly it is that we know.
The philosophical conundrum of meaning confusion is about a different sort of problem than is discussed here. Someone who claims "I don't know if that distant thing is a sheepdog or a house cat wearing sheep's clothing" is treating themselves as the only source of knowledge. But suppose my trusted source (Up and Atom) is standing next to the house cat and calls me to tell me the truth. This highlights that 'knowledge' is language-like, meaning the exchange of information through a language channel. So one is really talking about the realism of the channel structure rather than the individual error of an observer. The channel structure may be constructed as an 'arbitrary symbol' string, or it might be connected by a neural engine. This clash between logical and connectionist architectures is what is at the heart of the discussion.
I took a minor in philosophy when I did my undergraduate degree. I would have majored in it, but my school's philosophy department was too small to offer one, so I did my degree in Cognitive Science instead... minors in philosophy and psychology, concentration in computer science, and all four linguistics courses offered by my school. Epistemology quickly became my favourite branch of philosophy, probably because I was always something of a sceptic by nature. The Gettier Problem has always stuck with me, as well as Hume's argument for scepticism... and now, over two decades after graduating, I'm still no closer to a satisfactory definition of knowledge... except that I'm comfortable with the Socratic paradox; we cannot know nothing, because then we would know that we know nothing, and would no longer know nothing... so there must be something that we know. But what that knowledge is still eludes me, because physics tells us that the universe definitely is *not* as we perceive it... so is mathematics (including logic) our only true knowledge?
At 10:02, "the machine doesn't try to formulate a law" is quite inaccurate: predictors fit functions to the data and are trained to use the most precise function describing the output from the input, which is exactly "formulating a law". Humans do almost the same: look at data and try to guess the mapping between inputs and outputs. The most important difference is maybe that humans try to formulate the simplest functions, instead of prebuilt template functions with tunable variables, which is how a neural network works, for example. But the result is almost the same in terms of predictive power over the existing data.
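The "template function with tunable variables" idea can be shown in miniature: pick a template y = w * x with one free weight, and let gradient descent tune w against data generated from a hidden "true law" y = 3x. The data and learning rate below are illustrative assumptions:

```python
# Fitting a one-parameter template function y = w * x by gradient
# descent on mean squared error. The "hidden law" is y = 3x, and the
# tunable weight w converges towards 3 -- "law formulating" in
# miniature, the way a predictor does it.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0 * x for x in xs]          # observations from the hidden law

w = 0.0                             # tunable parameter of the template
lr = 0.01                           # learning rate
for _ in range(500):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(round(w, 3))  # -> 3.0
```

The machine never states "y = 3x" symbolically; it just ends up with a parameter setting that behaves like that law on the data, which is the distinction the comment is drawing.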
Some years ago I came across a book on Epistemology. And even if I finished only the first few chapters, Gettier problems included, it cleared a fog that I didn't even know had clouded my mind until then.
An addendum to your list of criteria that constitute knowledge: it must always state what observation the knowledge relates to and under which conditions those observations were made. For the smoke/flies observation, the observer could say that he saw irregular black spherical and ovoid shapes rise in a chain-like column that resembled smoke; cues like vegetation and other objects of sizes known to him may allow him to estimate the distance. He could state whether the wind blew in his direction and whether he smelled smoke. What he can't know is whether he observed a fire! For that he would have to physically go to the place where the column rises, while the column still exists!
Immanuel Kant searched in his "Critique of Pure Reason" for reliable foundations of knowledge. He looked for the boundaries of knowledge and identified time and space as indispensable preconditions. He thought that everything the mind is to observe or to contemplate needs to be thought of as existing in time and space. Existence has to satisfy a condition of continuity, at least for the timespan of its contemplation; Kant called time an "inner intuition". He described space as an "outer intuition" (today we might say "externalised"): everything that exists has to be thought of as having a location in space. Kant considered time and space as preceding all our conscious thoughts, and distinguished between these predetermining "intuitions" and empirical thoughts. Based on settings like the story of the misinterpreted swarm of flies, he proposed the term "Ding an sich", in English: "thing in itself". The fact that human observation can only operate through the senses results, according to Kant, necessarily in a difference between the sum of all properties of any object and any observation made of it. According to Kant, a human can never know an object in all its properties. That unknowable object is "the thing in itself".
It has to be said that the first edition of "The Critique of Pure Reason" was published in 1781, the second in 1787. A lot has been written since, among it criticisms of the critique!
Great thought provoking video.
On the subject of Google translate and AI I have noticed as a bilingual person that if you translate something from English to French or French to English it works quite well. Interestingly I found if you try and translate it back to the original language you get a garbled bunch of nonsense.
I think it would make sense to view the examples from the end to the start, because upon doing that, you'd discover why none of the examples in the video portrayed knowledge:
Let's take the sheep one for example:
Person A appears to know a sheep to be in the field, because A saw something in the field and concluded it to be a sheep.
There being an actual sheep that remained unseen is irrelevant, because the knowledge of a sheep being in the field is derived from - and therefore dependent on - knowing that what A saw was indeed a sheep. But A didn't know the seen thing to be a sheep; A merely concluded it upon insufficient evidence.
Excellente, Jade... thoroughly enjoyed it. Looking forward to reading about the Gettier problem.
A modern analogue I've heard suggested for a Gettier "experiment" is a student who arranges a green screen for their online class video session, but configures the green screen software with an image of their very room. The professor believes them to be in their room; this is true, and justified; but it is not knowledge if they're deducing it from the green screen video feed alone without further information.
Great video Jade! I wonder if the problem in the definition isn't with the second point of objective truth. I think the same impracticality criticism could be applied to establishing objective truth in order of determining knowledge.
I was recently reading about Gettier cases and the different ways philosophers have tried to circumvent such problems. Loved the examples you showed. I came across one that involved a clock that had stopped at 12 o’clock. In it, someone rides past believing it to be 12 o’clock and looks at the clock to confirm the time, thus providing justification for their belief. But because the clock had stopped and it was just a coincidence it was telling the correct time, it would not be considered knowledge, even though it was true. This again is a Gettier-type example, similar to the ones shown in your video, contradicting the idea that JTB is not, in itself, sufficient for knowledge. Great video. Thx for posting.
0:30: This "translation" method is going to yield some hilarious bloopers. Real languages are not vocabularies with grammar rules, despite what you might have been taught in school. The reason the quality of machine translation went from dreadful to passable in the last 20 years or so is precisely that the engineers working on the problem realized the vocabulary+grammar model is useless.
All knowledge is storytelling about sensory data. We receive sensory data, and then we try to tell a story about what it is, what it's doing, what it means, etc. It's the same as when you are plotting a scatter plot of data from an experiment, and you use your software to draw a trendline through regression statistics in order to model the general trend of the data. After a couple points, you may see that the data appears linear, and so you assign a linear fitting to the data and model it as a line. But then, new data points come in that cause the linear trend to become more complicated: maybe at the extreme ends of the graph the trend becomes nonlinear, or new data points come in that reveal the line to be a linear slice of what is actually an oscillating system, like a sine wave. You must update your fitting, your model of the raw data, and it will always and only be accurate to the observations that you have been able to make up until this point. In other words, as you say, it's impossible to take every single possible observation about a given situation without an infinite amount of time, and so you can never be sure that there isn't missing data which would contradict the narrative that you've made to describe and explain the data.
So what I would ultimately say is that "true, exact, 100% surety in knowledge" is something that you can never have: that's a myth. Even if you have constrained your system to approximate a situation very closely, say to 99.99999%, which by all accounts is good enough, that's still short of the level of certainty that philosophers of knowledge would hope to attain. Knowledge is never "knowledge = truth," it is only ever "knowledge ≈ truth." Maps are always smaller than the territory: the only truly accurate map of the universe is the universe itself.
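The trendline story above can be sketched numerically: a linear model fits the early, narrow-range points almost perfectly, then new observations from a wider range force a revision. This is a toy demonstration in plain Python (the hidden "territory" y = x² and the sample points are made up for the illustration):

```python
# A linear "story" about data from a hidden nonlinear law y = x^2.
# Over a narrow range the line fits beautifully; new data breaks it.

def linear_fit(xs, ys):
    # Ordinary least squares for y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def sse(xs, ys, a, b):
    # Sum of squared errors of the fitted line on the data.
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))

true_law = lambda x: x * x          # the hidden "territory"
early = [0.9, 1.0, 1.1]             # narrow range: looks linear
later = early + [3.0, 4.0]          # wider range: curvature shows up

a1, b1 = linear_fit(early, [true_law(x) for x in early])
err_early = sse(early, [true_law(x) for x in early], a1, b1)
print(err_early)                    # tiny: the linear story seems right

a2, b2 = linear_fit(later, [true_law(x) for x in later])
err_later = sse(later, [true_law(x) for x in later], a2, b2)
print(err_later)                    # large: the model must be revised
```

The model is only ever accurate to the observations made so far, which is exactly the comment's point about knowledge ≈ truth.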
Scientists usually butcher philosophical subjects, but you actually did a great job of explaining the Gettier problem. I have spent a lot of time stewing over this problem. I even had a real-life Gettier problem happen to me one time.
I don't really get how you connect the Gettier problem to AI, though. Granted, the Gettier Problem exposes an inadequacy to the standard definition of knowledge, but it doesn't bring into question whether "belief" is a necessary ingredient of knowledge. Unless AI actually have beliefs, they don't have knowledge. So the question of whether AI can have knowledge should really revolve around whether AI can be conscious in the first place. You have to be conscious in order to know something because beliefs require consciousness and knowledge requires belief.
It might be the other way around: taking belief out of knowledge may be the way to go. We know many things that require no belief. Just check within. Why do we need to believe anything? It might be better to simply know that we don't know. Just like Socrates. Keep an open mind. Open and free.
@10:00 Aren't artificial intelligence systems encoding patterns and finding connections between pieces of data as they modify the weights and nodes between layers of the machine learning software? Just not in a way we can decipher by looking inside the black box that is the AI. It correctly identifies correlations, if not causations; but then, Newton never actually figured out the true reason why gravity works either, he only found certain mathematical expressions that are almost always true for gravitation.
I find the notion of the "Bayesian mind" provides the best explanation of what's going on. There's no "airtight" anything. It's also an important concept if one goes into the field of machine learning ("AI").
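The core of that Bayesian picture is a one-line update rule; the likelihood numbers below are made up purely for illustration:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after seeing some evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Hypothesis: "there is a sheep in the field".
# Evidence: a white, wooly shape seen from the window.
belief = 0.5                             # prior: no idea either way
# A wooly shape is likely if there is a sheep, but a wooly dog could
# produce the same sight, so the evidence is not airtight.
belief = bayes_update(belief, 0.9, 0.2)
print(round(belief, 3))                  # confidence rises, but never hits 1.0
```

Each new look out the window tightens the estimate without ever making it certain, which is exactly the "no airtight anything" point.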
4:26 It depends on what the definition of the word “is“ is.
i "believe" i "saw" the smoothest transition toward advert from content here
and now i "know" it's by her design or a sheer luck of mine to find it
thanks for your informative as well as interesting introduction to a curious topic
10:00 I am sorry to correct you on this, but this isn't quite correct. When we talk about AI, we usually mean a “neural network”. You can think of a neural network as a big complicated equation made up of small, simple equations. The small equations, also called neurons, are selected via a process that is awfully similar to evolution. But what are physical laws, but equations? Sure, the neural network may not come to the same equations as we do, but they are equations nonetheless.
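That "big equation made of small equations" picture can be written out directly. A minimal sketch, with hand-picked (untrained) weights chosen only to show the structure:

```python
import math

def neuron(inputs, weights, bias):
    """One 'small equation': a weighted sum passed through tanh."""
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)) + bias)

def tiny_network(x):
    """The 'big equation': a composition of the small ones.
    In a real network these weights would be selected by training,
    the evolution-like process described in the comment."""
    h1 = neuron([x], [2.0], -1.0)
    h2 = neuron([x], [-1.5], 0.5)
    return 0.7 * h1 + 0.3 * h2  # linear output layer

print(round(tiny_network(0.0), 3))
```

However many layers you stack, the whole thing remains one (very large) mathematical expression, just like a physical law.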
Knowledge has a social component to it. We validate each other's beliefs and that's how we can be so confident about facts that we couldn't possibly verify on our own, for example that the moon isn't made of cheese.
Another way to look at it is with the scientific method and repeatability. While not airtight, if you can get repeated results from doing or observing the same things, you can be relatively certain that you have knowledge.
For more important things like gravity, one needs an explanatory description that has a plausible answer to almost all challenges to the explanation. That is what distinguishes knowledge from hypothesis or theory, though it is always limited by technological and physical constraints.
8:55 - This approach is what scientists do to the best of their abilities: ask what could have caused the results they are observing besides what they think caused them, and devise a way to rule those possibilities out. Of course, this can never be truly exhaustive, and is not satisfactory in the strict epistemological sense, but it is the best we can do (and it often leads to new discoveries.)
9:52 I would argue that the machine does formulate a theory based on the data, but we are unable to translate the machine's knowledge into understandable human language. A neural network is trained on a set of data to create a mathematical method for predicting results from all data. Later this is tested with another set of data and the prediction is measured, just like any other scientific method. So the machine actually formulates a law inside the neural network; we are just unable to determine how exactly this should be translated into a mathematical formula that we can understand.
To throw another curveball out there:
Imagine walking into a casino with a friend and they only have €1. The rule is they keep playing double or nothing on roulette until they either lose or bankrupt the casino. At the end of it, they lose, and you say to yourself "I knew that would happen".
While you could argue that it's not knowledge because it's probability (and there was a microscopic chance you'd have been wrong), I would argue that by the same token, everything we "know" in science about black holes, for example, is considered knowledge; and yet there's a greater probability that at least one of those scientific "facts" will be proven wrong as science develops. To most people, that would still count as knowledge, even though the casino example wouldn't.
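That "microscopic chance" is easy to put a number on, under the simplifying assumption of a fair 50/50 bet (real roulette odds are slightly worse for the player):

```python
def p_bankrupt_casino(casino_bankroll_eur):
    """Chance the friend ever bankrupts the casino: starting from EUR 1,
    they must win k times in a row to reach 2**k >= bankroll, and each
    fair double-or-nothing win has probability 0.5."""
    wins_needed = 0
    stake = 1
    while stake < casino_bankroll_eur:
        stake *= 2
        wins_needed += 1
    return 0.5 ** wins_needed

p = p_bankrupt_casino(10_000_000)  # hypothetical EUR 10M bankroll
print(p)  # microscopic, but not zero: "I knew it" is only a near-certainty
```

So "I knew that would happen" is a bet at odds of roughly one in tens of millions, which is the same shape of claim as any scientific "fact" held with very high but imperfect confidence.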
If I approach this philosophically and mathematically:
Let us start with three initial statements:
1. Knowledge is a belief about something.
2. That belief must be true.
3. That belief must be justifiable.
What I'd add is
4. A series of justifications must be possible, with a new perspective on each justification, until it is exhausted. In practice, such series are finite and not totally exhausting.
A new perspective on each justification means we verify all "dimensions" of a given piece of knowledge.
Exhausting means we verify every "dimension".
The latter is impossible in practice. Imagine we have some simple object like a coin. We can remember all its sides, how it feels to the touch, its weight, etc.
But this description is still not full. We could also analyse it under a microscope and see the metal structure, surface wear, etc. Then there is the atomic structure and the quantum state.
And even this is not full. So infinitely many measurements are possible, but we generally do just a few. We have neither infinite time nor the ability to do infinitely many measurements at once.
It looks like condition 4 solves the paradox of the sheep confused with a dog, since we can walk outside and see it up close, touch it, smell it, take its cells for DNA analysis.
If we can't get out of the house, we could use binoculars to see it closer. But that leaves fewer ways to measure and collect evidence.
So if you do not have enough options to verify, you may keep assuming it is a sheep, but the likelihood of that being knowledge is lower.
As for mirages: actually, that was not knowledge at all, just coincidence.
One can find a well under the mirage or not find one. In that mental exercise, a well was chosen to be there, but one can easily construct the example without a well.
Whenever I hear arguments such as this, I am reminded of the Babel Fish:
The Babel Fish (Douglas Adams)
The Babel fish is small, yellow, leechlike, and probably the oddest thing in the Universe. It feeds on brainwave energy received not from its own carrier but from those around it. It absorbs all unconscious mental frequencies from this brainwave energy to nourish itself with. It then excretes into the mind of its carrier a telepathic matrix formed by combining the conscious thought frequencies with nerve signals picked up from the speech centers of the brain which has supplied them. The practical upshot of all this is that if you stick a Babel fish in your ear you can instantly understand anything said to you in any form of language. The speech patterns you actually hear decode the brainwave matrix which has been fed into your mind by your Babel fish.
Now it is such a bizarrely improbable coincidence that anything so mind-bogglingly useful could have evolved purely by chance that some thinkers have chosen to see it as a final and clinching proof of the NON-existence of God.
The argument goes like this:
`I refuse to prove that I exist,' says God, `for proof denies faith, and without faith I am nothing.'
`But,' says Man, `The Babel fish is a dead giveaway, isn't it? It could not have evolved by chance. It proves you exist, and so therefore, by your own arguments, you don't. QED.'
`Oh dear,' says God, `I hadn't thought of that,' and promptly disappears in a puff of logic.
`Oh, that was easy,' says Man, and for an encore goes on to prove that black is white and gets himself killed on the next zebra crossing.
Hanekawa Tsubasa from the Monogatari series only knows what she knows
Far more interesting to me is this. When I forget something I don't know what it is...but I know what it isn't.
Knowledge is belief to a high degree of confidence. And confidence is just about how much we trust the justification for the belief. I believe that we can never truly "know" anything but we can have near complete trust (say on our senses) and build our belief system on top of that. Machine learning algorithms formalise this by modelling confidence in terms of probability
It's a topic I enjoy and have engaged with several times, thanks for producing this vid! Ultimately knowledge is a "thing" that wholly consists in a subjective ontology. I've found that inconvenient fact to be at the base of the realization that there's no final knowledge possible for anyone or any AI. Knowledge and all truth are just pattern sets in a unique system of memory and awareness. If it all makes sense to that system, it's TRUE in all the ways truth can be. It just means that truth is not what we like to traditionally think it is. Rather, truth is just a subjective impression of coherence experienced in a mental model (system of memory and awareness). Its VALUE is what we should care about, and any truth's value is much dependent on factors like how much information was correlated, what rules and interpretations were applied, and the full context in which it is shared. I think what's to be done about all this is to let go of the idea of any absolute truths and recognize that the discovery of value in the form of knowledge is a team sport. And perhaps the greatest tool in our game play is "explanation". In explanations we lay out our theories along with our observations, providing the reasons as well as facts about what's happening as we understand it. Done efficiently, i.e. typically in narratives, others can readily check our explanations and recognize where there may be problems. So for example, if the mirage witness were to share her narrative, others might quickly add the helpful advice "Could be a mirage, remember!" and on it goes. I'll note that the inability to EXPLAIN will soon become one of the recognized weaknesses of AIs. AIs will be able to solve lots of problems for us better than we could so far, and we should accept their input for what it is: just another set of hypotheses from yet another unique class of bundles of experience and cognition (i.e. we are each individually such a bundle).
But even if we agree with an AI's conclusions, we may have trouble using the findings productively in society because the AI will not share our wider set of values (it will be focused on a limited problem space) and interpretations of experience sufficiently to explain itself. We buy into explanations, not just outputs. When the 42 popped out of the great computer in Douglas Adams' books, no one could use it because it was entirely unexplained. To explain it, the earth and all life on it was created... etc. See, the reasoning of living beings like us is always "motivated reasoning". We want things and we want them for reasons which are deeper than even we ourselves can ever know. AI will become deep, complex, and unexplainable, but its core derivation will have a genesis that we can actually reference because we made it. Not so for us, who have our desires because we are each the pinnacles of the success of an evolutionary chain of living, ie. we are what I call "deeply derived" while that AI is created. Anyway, if you want to learn more about this "explanation" angle, check out David Deutsch's work if you haven't already. You might be inspired for a vid or two from that.
I see this as being fairly related to the reason why in formal logic, it is only possible to satisfy the conditions for a statement to be considered a "theorem" rather than a "truth". The difference being that a theorem is only true under the condition that you assume the axioms upon which the theorem is based to be true. Axioms are, by definition, devoid of rational adherence to anything more fundamental, because that would negate their status as being axiomatic. In other words, you can never necessarily say something is inherently true; you can only say it is true as an extension of certain arbitrary assumptions.
"How do we stop getting Gettiered?" Just KILLING It!!!!!!!!!
The way A.I. learns the motion of planets sounds like how savants understand how things work, but can't explain how they know.
I love how Gettier only published a three page paper and that got him a full professorship lol
Through the first part of the video I kept expecting you to mention the Bayesian method/Bayesian brain, but this Gettier problem is actually a really good argument! Good work, Jade!
I used to watch the Addams Family on TV. Not until now did I actually get the personality of Gomez Addams, who, upon hearing his wife speak French, would find himself uncontrollably kissing her arm... after this video, I actually get it now. Liked and subscribed. Well done. I learned something interesting.
Thank you so much for being here. I enjoy your videos and am learning, but now I don't know what I know.
First of all, I really like your videos and your style of presentation! It exposes many different points of view in the comments and sparks very interesting discussions. I would like to share my thoughts as well on this :)
Regarding "knowledge", I do not have a sufficient philosophical background to argue with centuries of thinkers... but I have the idea that what we call knowledge is very similar to what the AI is doing - simply we (human adults) have more time and means to experience the outside world and more efficient brains to identify the patterns that, if statistically significant, form our beliefs of the world.
Related to this, a possible approach to the incompleteness of the first definition of knowledge could be that knowledge is such if it is based on repeatable observations (if ever possible) OR based on the observations of a statistically significant number of observers. One could argue that "statistically significant" is a quite subjective marker, but I think that's partly the point. In the case of the sheep story, say, five persons standing on the hill would recognize a wooly dog and hence correct the belief of the one in the house. But it is not enough, because these five observers might also be misled. So I see it turning into a never-ending hunt for true knowledge, where more and more (eventually, infinite) observations are needed to gain exact knowledge. In this sense, I don't think that "knowledge" exists in one's brain; it's a shared and delocalized concept. One's knowledge based on experience of the world is always incomplete.
I'd venture the question tiptoes or even dabbles with "what is consciousness/self-awareness?" which also seems like a waiting trap as computers evolve