This video series is great! To me it seems that a lot of this epistemological debate is around the definition of knowledge. It seems like there is an important dichotomy being missed. Knowing something is different from "thinking that you know something". I think that knowledge must essentially be connected to reality. To the extent that someone's knowledge is not connected to reality, they never had the knowledge, they only thought that they had knowledge. I don't see the value in claiming that people can have knowledge of things that are false. To me, it seems like the claim to knowledge is an expression of our confidence that a proposition is true. Different people have different means of deriving confidence. To the extent that someone has confidence in false facts and proclaims knowledge of false facts, they never truly had knowledge (since knowledge requires truth). I prefer this retroactive definition. One can rationally claim to have knowledge of false things, but one cannot have knowledge of false things. That is the key distinction.
So true!
> the claim to knowledge is an expression of our confidence that a proposition is true
would this be called knowledge non-cognitivism or something lol
Love this video series. Thank you Jennifer Nagel.
I appreciate you professor Nagel. Wish I could attend a class of yours.
It’s hard to isolate a mechanism, because there isn’t a single mechanism by which we form our beliefs. We can form them one way and confirm them another. The problem with Gettier problems is that they have the person relying on a single faulty warrant. 7:21
In order to mitigate this issue, one must take the perspective that all information is provisional. In this way, assumptions and expectations, even when made, are always subject to change, even when they are made without sufficient evidence (which should be discouraged under such a perspective). So when Henry observed a barn, instead of making assumptions or having expectations about what the barn is or the type of area he's in, all he can truly say is that he observed a barn, or even several, and without further investigation he cannot truly deduce more information. I should also say it's not so much that Henry cannot make assumptions or have expectations, but he should realise they aren't concrete, so in a way they should be treated as hypotheses, making them subject to change.
This subject is so goddamn slippery; it feels like I am trying to draw a cloud with no boundaries. >__<
Hello, a Turkish translation was added to the first video, but there are no translations in the next videos. Could you please add Turkish translations? Thank you.
Translations are done by other viewers, usually not the video uploader...
So touching, and an excellent video.
Can you do a video on Merleau Ponty??
There is an element missing from reliabilism: semantics.
Reliabilism is also predicated on the idea that one has a complete understanding of the sensory faculty and its possible faults (which is possible, but it is acquired knowledge, not intrinsic knowledge). I personally like the idea of reliabilism, or as I came to discover it, discernment of the senses in coherence with reality.
Now, the semantics.
Barn: a 3D object adhering to this set of factors.
Faults of Perception:
Determining that an object is 3D is possible due to a shift in perspective (if anyone knows of a way to tease out whether an object is 3D or not from a single reference point, then do let me know; otherwise I will work under the assumption that a minimum of two perspective points is required for such an evaluation).
Set of factors matched, but whether it is a 3D object: inconclusive from sensory perception alone (aka NEI: Not Enough Information).
Solution:
Identify possibilities based on current information.
Tease out the truth using techniques of truth (deduction, inference, reliabilism, etc.).
If the truth cannot be teased out, seek more information.
If more information cannot be sought, then accept uncertainty of knowledge (well... either there is a barn here or there is a 2D representation of a barn here; which is coherent with reality? I don't know based on all current information). A rough sketch of this procedure appears below.
Finally, the questions posed at the end of the video in relation to intuition simply hinge on the situational fact that he did shift perspective (but whether the shift in perspective was big enough to discern/notice the actual reality is what leads to the difference in opinion).
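Just to make the procedure above concrete, here is a minimal sketch in Python. It is purely my own illustration, not anything from the video, and every name in it (evaluate_claim, enumerate_possibilities, seek_information, and so on) is a hypothetical placeholder rather than an established algorithm.

```python
# Hypothetical sketch of the decision procedure described above.
# Every name here is an invented placeholder, not an established algorithm.

def enumerate_possibilities(observations):
    # Placeholder: in the barn case, the live options might be these two.
    return ["a real barn is here", "a 2D representation of a barn is here"]

def evaluate_claim(observations, techniques, seek_information):
    """Return a conclusion, or an explicit admission of uncertainty."""
    while True:
        # 1. Identify possibilities based on current information.
        possibilities = enumerate_possibilities(observations)

        # 2. Tease out the truth using techniques of truth
        #    (deduction, inference, reliabilist checks, etc.).
        for technique in techniques:
            verdict = technique(possibilities, observations)
            if verdict is not None:
                return verdict

        # 3. If nothing is conclusive, seek more information.
        new_info = seek_information()
        if new_info:
            observations = observations + new_info
            continue

        # 4. If no more information is available, accept uncertainty.
        return "uncertain: " + " OR ".join(possibilities)

# Example: no technique applies and no further information can be gathered.
print(evaluate_claim(["barn-shaped structure seen from the road"],
                     techniques=[],
                     seek_information=lambda: []))
```

With no applicable techniques and nothing more to observe, it returns the honest "uncertain" answer, which is the whole point of treating all information as provisional.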
Did they ever make that video on Internalism vs. Externalism?
I would say Henry knows that he is looking at a barn; he just doesn't know if it's a real barn.
As well, I would like to raise another question, not directly based on the topic of this video, but also linked to scepticism:
If you close your eyes and hold your hand in front of your face, how can you be sure, until you open your eyes, that your hand is actually where you believe it is?
I hope I made my thoughts halfway understandable.
Reliable empirical predictions... This reminds me of Plato's allegory of the cave. While one may think that the only thing that matters is the ability to predict one's experiences, this is ultimately of lesser importance than knowing, for example, what is good and right.
Does anyone know any books to get started with philosophy?
Philosophy For Dummies or something.
+Navyanto Arasma Here's a really great list of recommended readings in a bunch of different areas of philosophy: www.reddit.com/r/philosophy/wiki/readinglist.
If you're thinking of starting out with a general introductory text (the list provides a few), I'd also consider Thomas Nagel's "What Does It All Mean?" This is a great little book that's often used in Intro to Philosophy classes. Hope this helps!
+Wireless Philosophy Whoa, thank you very much. Keep up the good work in making great videos ;)
Any time! Thanks!
+Navyanto Arasma Sophie's World is good.
***** Yeah, I've read it already. Thanks btw.
I'd, perhaps, still say that 'knowing' is relative. What then am I looking for in terms of arguments and their premise? What then am I looking for in terms of truth?
Interesting that I've seen these videos before, yet somehow they take on a new light, as if I've never seen them before, ever since I felt that abduction, induction, validity, soundness, syllogisms, and discrete mathematics are not necessarily what I'm looking for.
"What then am I looking for in terms of truth?
":
Answer: Truth is a state where beliefs correspond accurately with reality. This is determined largely by coherence and internal consistency of the available evidence, as well as conformity of our beliefs to laws of nature, including statistical likelihood. We can never fully acquire "truth", but we can move in the direction of truth, and approach it incrementally. It is irrational to accept as "true", or even plausible, an explanation that is statistically vanishingly unlikely. Thus, "truth" is unavoidably a statistical phenomenon. No way around that.
Some version of reliabilism is obviously sound. Attempts at over-reduction are inherently problematic. It seems we cannot always categorize a knowledge-gaining process (such as an 'appropriate causal connection') as reliable or unreliable, but only in degrees (in that case, in degrees of appropriateness). So how do we define something like a 'threshold of reasonable reliability'? Well, that's the question here, to me. Attempting to place exact odds is likewise sometimes problematic. Knowledge of the nature of probability is sometimes relevant, of course, especially because of the way interpretations of probability can so easily mislead us, even when considering the topic here. 'Lottery knowledge' can't be used as a comparison against certain kinds of observational knowledge, of course.

I view reliabilism as a generic, non-proprietary, self-evident basis for analyzing the knowledge process. I think it may require either slightly different versions for each context, or some conditional wording, etc. But the conditional wording itself might sneak back in. So maybe there is no shortcut to 'custom-made' theorems for essentially different scenarios, but maybe there's a finite set of those, I don't know.

For example, knowledge of the moon landing can become more reliable through additional inference. You can have a level of reliability based on the way it was described in the video. But that process alone is not as reliable as then adding in inference about how reliable the knowledge was in terms of its origination and dissemination. Meaning, all the sociological inferences that 'reasonably falsify' the hypothesis that the moon landing was fake (you've then 'falsified a potential falsification of knowledge'). That inference increases reliability. So was it reliable enough before the inference? After? I'm not trying to settle that now, just to offer that reliability might sometimes be an irreducible fabric of processes, not always just a thread. You can probably safely assume that an additional input of observation or inference can sometimes make the process more reliable, because those inputs may inform us of something rightly presumed to be highly unlikely. I mean to ask: how can we possibly distill a process in all cases to a binary 'reliable' or 'unreliable'?

The question of context can be irrelevant to reliability. If he saw a barn, as in a complete barn, then in that scenario context is immaterial to reliability. So there are really two distinct scenarios here; I think that was expressed, I don't know. If he only saw the face of the barn, context of course must be more heavily weighted when attempting to quantify reliability. For some reason I'm fascinated with the idea of a 'test' for 'reasonable reliability', given a set of knowledge originators/justifications, etc., and a specific entirety of context (a rough sketch of what such a test might look like appears below). It sounds pretty fuzzy so far, lol. I don't know if there already is one, maybe for very simple things, but I'd like to learn more.
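Reading the comment above, one very rough way to make a 'test of reasonable reliability' concrete is to treat a belief-forming process as something whose truth-ratio can be estimated in a given context and then compared against a threshold. The sketch below is entirely my own illustration: the 0.90 threshold, the function names, and all of the frequencies are invented, and the simulation only dramatizes the point that one and the same process can pass in one context and fail in another.

```python
import random

# Hypothetical 'test of reasonable reliability': estimate a process's
# truth-ratio in a given context and compare it to a threshold.
# The threshold and all frequencies below are invented for illustration.

REASONABLE_RELIABILITY = 0.90  # arbitrary illustrative cut-off

def looks_like_a_barn(situation):
    """A toy belief-forming process: believe 'barn' whenever it looks like one."""
    return situation["looks_like_barn"]

def estimate_reliability(process, context, trials=10_000):
    """Fraction of trials in which the process yields a true belief."""
    hits = 0
    for _ in range(trials):
        situation = context()
        belief = process(situation)
        hits += belief == situation["is_barn"]
    return hits / trials

def ordinary_county():
    # Almost everything barn-shaped here really is a barn.
    return {"is_barn": random.random() < 0.99, "looks_like_barn": True}

def fake_barn_county():
    # Most barn-shaped things here are facades.
    return {"is_barn": random.random() < 0.10, "looks_like_barn": True}

for name, context in [("ordinary county", ordinary_county),
                      ("fake barn county", fake_barn_county)]:
    r = estimate_reliability(looks_like_a_barn, context)
    verdict = "reasonably reliable" if r >= REASONABLE_RELIABILITY else "not reliable enough"
    print(f"{name}: estimated reliability {r:.2f} -> {verdict}")
```

The same 'looks like a barn' process comes out around 0.99 in the ordinary county and passes, but around 0.10 in fake barn county and fails, which is exactly the context-sensitivity the comment is worried about; it doesn't settle where the threshold should sit, only shows what testing against one would involve.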
longest 10 min of my life
Why?
where can I get the script?
Why is the hallucinating traveler's belief always expanded from simply "hey, there's water!" (meaning, this specific puddle of [what seems to him to be] water) to the generalized "there's water... *_somewhere_* in the valley ahead"? It always has to be expanded to allow for his being luckily "correct" (there's water _elsewhere_ in the valley, etc.). In my view, if he's wrong about the location of where the true water is, he's as wrong as wrong can be, even if the real water is just a few meters away from where he's hallucinating water to be. It might as well be on the other side of the planet.
In fact, as long as we're characterizing his belief, why stop at the valley or even the planet? You might as well just generalize it as "hey, there's water somewhere in the universe". _Well golly be! Turns out he's actually correct, there IS water somewhere in the universe, yet he doesn't seem to have knowledge in this case!_ Well, yeah, that's because it's an _hallucination._ You're just describing hallucination, but with extra steps. The very idea of an hallucination implies a local misapprehension or misperception ("water exists right THERE") of a more general, universal belief ("water exists"). But the generalized belief is not the belief under scrutiny here. His current belief is _local,_ and it ends where the perceived puddle ends. His belief is not true, so it's not JTB.
In my opinion, anything beyond the specific puddle (purporting to show that such a hallucinating traveler is an example of "non-knowledge JTB", based on the aforementioned generalizing) is playing fast and loose with both the "T" and the "B" components of JTB (and probably the "J", but I'm too tired to delve into it at the moment...).
After seeing these videos, it's more like the way we understand the word "knowledge" is broken.
It's not enough to say "I know this"; it's just not able to cover the complexity of understanding, perception and belief.
Ohhh, my god, does anyone understand "no false lemmas"? I am really confused now. Besides that, I have read a book that talked about "internal and external epistemic conditions", which is also really hard to understand.
No false lemmas says that you can't have knowledge if it is based on a false belief. For example: Jones is going for a job interview, and the person interviewing him tells him that Smith is going to get the job. Later on, Jones sees Smith counting 10 coins and then putting them in his pocket, and from this he forms the belief that the man with 10 coins in his pocket will get the job. In the end Jones gets the job, and it turns out he also had 10 coins in his pocket, so his belief was true and justified. However, the no false lemmas theory would say he didn't have knowledge, because he based his belief on the idea that Smith would get the job, which turned out to be false.
Hopefully that makes sense, I didn't explain it very well hahah
SILVA DAY What you said is really helpful! Thank you :)
I really like these videos. They are full of simple and understandable examples. You refer to a video on internalism and externalism. Can you provide a link to that? I would also like to see a video from you on whether an ontology of knowledge is possible.
In regard to the topic you are covering in these videos, is there any literature you would recommend reading? Especially on causal theory and reliabilism.
Goldman's classic work "Epistemology and Cognition" dives into his arguments for reliabilism and how it relates to our cognitive faculties. Alvin Plantinga continues this train in the theistic realm by expanding Goldman's hypothesis and theory into what we now call "proper function". His books form a set of three: 1. Warrant: The Current Debate, 2. Warrant and Proper Function, and 3. Warranted Christian Belief.
You are a genius❤👌🏼
The use of Armstrong's example as something we know, just like I know I am watching a video, is quite comical, since there are theories that the landing on the moon was a mystification; and if that's true, we cannot count it as knowledge. Following Linda Zagzebski, I guess we can conclude that only some of the things we think we know are in fact true, therefore only some would count as knowledge; but if it's just luck that decides whether something we believe is true, then we can conclude that nothing we believe is real knowledge.
solid
Crazy stuff
Goldman's own counterexample was with the mountains, not the barns.
Reliabilism just seems to push the problem back a step without resolving it. Now we have to find a criterion for "reliable belief-forming process" in place of one for knowledge. We now need to decide how one can know that he knows. The problem with positions such as Millikan's seems to be that it suggests the whole problem can be changed just by focusing on one thing at a time. Both the barn example and the lottery drag in probability and seem to simply move toward a Bayesian approach not just as an account of how beliefs are actually formed, but as a criterion for how they should be formed. I can't feel compelled by any account of "knowledge" which is going to be based on probability or notions of the epistemological status of beliefs about other things, such as the fake barns surrounding the real barn.
I honestly do NOT understand why epistemology remains wedded to the notion that "truth" is attainable. We've been running this "thought experiment" for ~2,500 years now. I think the empirical evidence is pretty compelling that knowledge is a direction, not a destination.
NOTE: I am not advocating relativism, nor postmodernism. The accuracy of beliefs is measurable. Beliefs can be ranked according to their likelihood of being true. When we get to the point that there are no reasonable alternatives available, and belief appears to conform closely with reality, then belief may approximate knowledge.
Bayesianism has both analytical and normative aspects... but the normative aspect, that we SHOULD all think like Bayesians, is unrealistic. We don't. It IS, however, a strong analytical method that CAN be used by knowledgeable (s.l.) persons to attempt to make their beliefs as objective and accurate as possible.
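To illustrate only the analytical side of that (no claim about how anyone actually thinks), here is a single Bayes' theorem update in Python. The priors and likelihoods are invented numbers, and the barn framing is just borrowed from the video for convenience.

```python
# Illustrative only: one Bayesian update, P(H | E) = P(E | H) * P(H) / P(E).
# The numbers are made up to show the mechanics, not to describe any real case.

def bayes_update(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    """Posterior probability of hypothesis H after observing evidence E."""
    p_e = likelihood_e_given_h * prior_h + likelihood_e_given_not_h * (1 - prior_h)
    return likelihood_e_given_h * prior_h / p_e

# H = "that structure is a real barn", E = "it looks like a barn from the road".
prior = 0.50                                                   # no strong opinion beforehand
posterior_ordinary = bayes_update(prior, 0.95, 0.01)           # few facades around
posterior_fake_barn_county = bayes_update(prior, 0.95, 0.90)   # facades everywhere

print(f"ordinary county:  P(barn | looks like one) = {posterior_ordinary:.2f}")
print(f"fake barn county: P(barn | looks like one) = {posterior_fake_barn_county:.2f}")
```

The same appearance yields a posterior near 0.99 in an ordinary county but only about 0.51 in fake barn county, which is one way of cashing out the earlier claim that beliefs can be ranked by their likelihood of being true.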
I believe Goldman is nitpicking and concentrating on superficial logical aspects and is voluntarily being dense and technical.
I believe that it has very little to add to the debate.
Actually, what he's doing is describing how humans actually think.
Some primates do seem to have a natural instinct for nit-picking, and it has become an important part of their culture and socialization.... but I'm referring to chimpanzees, not H. sapiens.
Doesn't reliabilism fall prey to the original Gettier problem? Surely, the mechanism used to form that original belief was reliable - after all, we wouldn't call it justified if it was unreliable. If we do accept reliabilism, we should also accept causalism. A belief must be caused by its own truth by a reliable method to be knowledge.
The idea is that the mechanism used in the original Gettier cases is inference with a false premise, which can lead to justified beliefs but very rarely leads to true beliefs. To be reliable is not to cause many justified beliefs, but to cause many true beliefs. And since inferences involving false premises do not usually lead to true beliefs, that mechanism is not reliable. So reliabilism is safe from the original cases.
This whole mess of epistemology seems to be leading to an inevitable conclusion: you can't know anything, unless you know everything. So, short of omniscience, you can only believe a thing.
Since no one could know everything, no one knows anything. So even if Humankind managed to gather all data about everything, no single human could know that data is true, and anything he proclaims as a result of that data is ultimately proclamations of faith. Frustrating.
perhaps knowing is feeling as though it really (emphasis) is best to know nothing(?).
It doesn't matter that we don't "know" everything but rather only believe it or find it likely, because we can still use those things to accomplish our goals, and Gettier problems and such are merely abstract theories. Common sense, you know!
I agree epistemology is one big mess. I think we don't need all these headaches of finding knowledge of invisible worlds, whether I've reconstructed appropriately via a proper causal chain, or through reliable process given x, y, and z; I support common-sense theories.
If we've seen this appendage attached to our body for many years, it does what a hand appendage should do, and the hand appendage has never been anything other than a hand appendage, then it's an existing hand (at least in my reality). If I'm a brain in a vat, then I can never know, so it makes no sense to try to know.
These are fun mental games --> but very misleading. They are instances of the map vs. territory fallacy. Or maybe the "use/mention" problem. Things precede words. Words are made up to refer to things. There are many things without words, but no words without things. We must perceive up and down for the words "up" and "down" to have meaning. We can't define them without reference to perceptions. The failure to come up with air-tight definitions for words does not mean that those words fail to refer to things. Knowledge is a word that is used by different people to refer to different constructs --> that doesn't mean that those constructs don't exist. They do. Each construct is a mental model used by brains to organize a pattern of ideas. Our "knowledge" problem is not a "knowledge problem" - it's a social problem of disagreement over the use of the word knowledge. And this is the great futility for philosophers --> dissatisfying to them because the game of philosophy is played in language. But that's that.
This is not correct, even though it fits many other topics of a similar kind. At its core, the discussion of knowledge is about a safe statement, and the problem shows that there is no safety with Gettier cases in regard to the definition we use.
@das.gegenmittel can you explain what you mean in more detail?
I think the idea that testimony is a good guide for knowledge is questionable. It might lead you to find good sources of knowledge, but given that people misrepresent things, even by accident, it seems like that can't be a good way to form knowledge.
we got reeeal fake doors here.
It's due to the ants in my eyes problem postulated by Johnson. This problem was countered by the personal space dilemma.
Am I the only one who needed to pause the video when she talks about the "Fake Barn County?" These examples are so ridiculous! xD
You're right. Epistemology has been hamstrung for years by focusing more on what is POSSIBLE than on what is likely and reasonable. How many times do I need to hear that I might be a brain in a vat, or that Fake Barn County might be a real place? The point of both is that "appearances can be deceiving", if you want to put it in simple vernacular terms. Philosophers just like to take an extreme "case" to illustrate what is a reasonable concept.
Fake Barn County is ultimately no different from the Brain Vat County. In neither place can you fully trust what your eyes are telling you. Science has developed methods for circumventing this sort of issue, and it's the best we can do.
Fake Barn County? 🙄 OMG I'm done with this 😂🤣😂
Beliefs can be untrue, but knowledge only requires you to hold a belief.
The proposition is true.
I hold the belief that it's true.
I have justification for holding that belief.
Notice you can have bad justification, or believe for the wrong reasons, and you still have knowledge then.
How you get there is irrelevant to its truth. If state A is true, then it is.
SOO much mental masturbation!!!
Not you, Professor; your dissertation was awesome and instructive. Love your videos :)
Reliability doesn't equal accuracy.
I'm sure the content of your videos is good, but I can't stand watching a hand write and draw things very slowly.
waHter
BORING
Seems so unnecessarily convoluted; it looks like a desperate attempt to claim truth is knowable. Fallibilism resolves the conflict simply by acknowledging that you can't prove what is true, only disprove what is false. All these JTB counterexamples seem to fit just fine within that framework.
Please use real-world examples, and possibly historical examples... these hypothetical examples do not do a good job of illustrating the usefulness of philosophy.