The Swamping Problem

  • Published Aug 21, 2024

Comments • 65

  • @KaneB
    @KaneB  1 year ago +12

    In this video, I pretend to believe in a thing called "justification". For an alternative view see: th-cam.com/video/h_Uvs4YNs1o/w-d-xo.html

  • @aaronchipp-miller9608
    @aaronchipp-miller9608 1 year ago +16

    What's special about JUSTIFIED true beliefs?
    What's special about TRUE belief?
    What's special about BELIEF?
    What's SPECIAL?
    WHAT?

    • @KaneB
      @KaneB  1 year ago +10

      This but unironically.

    • @real_pattern
      @real_pattern 1 year ago +1

      great job! you really demonstrated the core insights of philosophical inquiry.

  • @justus4684
    @justus4684 1 year ago +17

    * Richard Rorty intensifies *

  • @otavioraposo6163
    @otavioraposo6163 1 year ago +25

    It's always intrinsically valuable to watch your videos, Kane.

    • @KaneB
      @KaneB  1 year ago +1

      Thanks!

  • @allusionsxp2606
    @allusionsxp2606 1 year ago +6

    This is the first time I have come across this problem. But I think there is a greater sense of security with knowledge than with mere true belief. True, a true belief and knowledge get you the same result, but I think knowledge provides a more satisfying sense of security. There does seem to be an intuitive appeal to knowledge over true belief, which could explain why it is valued more.
    Simply put, it reduces insecurity. Not sure how good this answer is, but it is one that I came up with.

    • @KaneB
      @KaneB  1 year ago +2

      That was basically Plato's answer if I recall correctly. If you have a mere true belief that this is the road to Larissa, then that belief can be displaced very easily, such as if you encounter somebody who tells you that it's not the road to Larissa. On the other hand, if you know that this is the road to Larissa, then your true belief will survive such challenges; you will be able to adequately respond to the person who says that it's not the road to Larissa. Of course, on this view, the value of justification is merely instrumental. True belief is all that really matters, and justification is good because it's a means of ensuring that, when you have a true belief, you retain it.

    • @ericvulgate
      @ericvulgate 1 year ago

      Everybody thinks their 'beliefs' are acquired from knowledge.

  • @xiutecuhtli15
    @xiutecuhtli15 1 year ago +5

    i like justification more than truth i think. when i play a puzzle game, it's really unsatisfying to accidentally win. i kind of want to like solve the puzzle by thinking about it. figuring stuff out in the real world is similarly like a game to me i think

  • @martinbennett2228
    @martinbennett2228 1 year ago +3

    Imagine a thinker in the 5th century BC puzzling over change and permanence, arriving at a belief that at some point nature is indivisible and can be rearranged to make something different. This turns out to be not only a true belief but an understanding with the potential to advance knowledge very rapidly; however, the justification is rather thin, more of a hypothetical possibility than something justified in a way that would likely convince sceptics.
    More particularly, others who believe that whatever 'stuff' is, it must necessarily have soulful intent and purpose, turn out to be much more influential (thereby retarding the development of understanding by almost two millennia).
    We can see that true belief is not enough: it is not systematically, publicly available in the way that justification is. With a lack of justification there is no way to adjudicate between different beliefs; in fact, it is worse than this, because there are many more ways for a belief not to be true than for it to be true, so true beliefs are inevitably swamped or outnumbered by untrue beliefs. Moreover, without justification, how could there even be concepts of true and untrue beliefs?

  • @JulioCSilvaPhilosophy
    @JulioCSilvaPhilosophy 1 year ago +3

    Nice video! I have two comments:
    a) Maybe someone could argue that, from a logical point of view, you can deduce anything (truths or falsehoods) from a falsehood. If that is so, then an agent who holds false beliefs can infer any other beliefs, and in that sense there is no security. In comparison, from a logical point of view, from truths you can infer only truths, and this is a more secure way to avoid believing just any statement.
    b) From the point of view of virtue epistemology, the performance (careful analysis, critical thinking, open-mindedness, etc.) of agents is very important for acquiring knowledge. If an agent doesn't have those virtues, that could have some undesirable consequences. For example, if an agent is closed-minded, then he will not consider different views in a rational discussion.
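Point (a) above is the classical principle of ex falso quodlibet. As a sketch (my own illustration, not from the comment or the video), it is a one-liner in a proof assistant:

```lean
-- Ex falso quodlibet: from a contradiction (P and ¬P), any proposition Q follows.
example (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
```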

  • @juliohernandez3509
    @juliohernandez3509 1 year ago +3

    I loved the ending. "Why have a belief at all"? Perfect.

    • @evinnra2779
      @evinnra2779 1 year ago

      Yeah, sounds like what a true philosopher would say. (NOT)

    • @juliohernandez3509
      @juliohernandez3509 1 year ago

      @@evinnra2779 I am not interested in drawing boundaries between "true" and "untrue" philosophers. I like interesting arguments and ideas. That is good enough for me.
      Also do the Churchlands count as "true" philosophers for you? Because not only would they deny that we need beliefs, they would also deny that we have them. Hahahahaha.

    • @davidarvingumazon5024
      @davidarvingumazon5024 1 year ago

      @@juliohernandez3509 I'm confused with your 2nd paragraph. Do you imply that you need and already have them?

    • @juliohernandez3509
      @juliohernandez3509 1 year ago

      @@davidarvingumazon5024 I was pointing out that there are serious philosophers that would say that beliefs don't exist. I was responding to @evinnra2779. He seemed to imply that no serious philosopher would claim that we don't need beliefs. That is clearly not the case.

  • @norabelrose198
    @norabelrose198 1 year ago +4

    Sounds a hell of a lot like what Richard Rorty said on this issue 😊

  • @ferdia748
    @ferdia748 1 year ago +1

    Knowledge is better than mere true belief because the processes which give us knowledge are more likely to give us true beliefs. So by striving for knowledge we increase our stock of true beliefs. True belief is not inherently inferior to knowledge, but the processes involved in the attainment of knowledge are superior to other processes, so we must exercise them, and exercising them involves attaining knowledge.

  • @aarantheartist
    @aarantheartist 1 year ago +2

    I think the value of knowledge is clear if you define it in a very old fashioned way. On one old definition of knowledge, knowledge requires true belief where the justification makes absolutely certain that the belief is true. Let’s call that “CKnowledge”. Most philosophers had assumed that true belief was intrinsically valuable, and if you assumed that, it was easy to understand what is valuable about CKnowledge over mere true belief - CKnowledge is true belief that you can “see” is true. You have a special truth guaranteeing justification which lets you grasp the truth from your perspective. So, the value of CKnowledge is that of seeing that your belief has the intrinsically valuable “true” status.
    I guess that won’t wash if you don’t see truth as intrinsically valuable of course! Great video as ever.

    • @ashkenassassin7219
      @ashkenassassin7219 1 year ago

      You said "On one old definition of knowledge, knowledge requires true belief where the justification makes absolutely certain that the belief is true." That is simply justified true belief, but requiring infallible justification rather than the fallible kind. Also, presumably, if knowledge is simply true belief, you will know your belief has the intrinsically valuable "true" status. So with "CKnowledge" it's unclear why that would be valuable over true belief.

    • @aarantheartist
      @aarantheartist 1 year ago

      @@ashkenassassin7219 Indeed, it is the old JTB definition with an infallible J condition. But a belief being true does not entail that I know it to be true. Hence I can have mere true belief without knowing that I do. I can be right just by guessing. CKnowledge is supposed to be better because the special kind of infallible J that people like Descartes sought would give you a conscious awareness of the truth - you would not merely be right, but you would see that you are.

    • @ashkenassassin7219
      @ashkenassassin7219 1 year ago

      @@aarantheartist Right, but if knowledge requires mere true belief, then it doesn't require justification, let alone infallible justification; it wouldn't require anything of that sort. So then you would know it; it would entail that you know it. So, as I said, it's unclear why that would be valuable over true belief.

    • @aarantheartist
      @aarantheartist 1 year ago

      @@ashkenassassin7219 if you define knowledge as mere true belief and nothing else, then having a true belief that P entails knowing that P. And obviously that wouldn’t be more valuable than true belief - it would be the same thing.
      But what’s your point? It would still be better to have the special kind of infallible justification that Cartesian philosophers have sought, wouldn’t it?
      Edit: perhaps you are asking me why it would be better to have the extra Cartesian justification at all? If that’s your question, the answer is that it would allow you not merely to have a true belief, but to see that it is true (note, I didn’t say “to know it is true”. I said to “see it is true”).

    • @ashkenassassin7219
      @ashkenassassin7219 1 year ago

      @@aarantheartist Well, that's the thing: seeing something is a form of knowledge, and if knowledge is just mere true belief, then the true belief of seeing something will just entail that you know it. So if knowledge is true belief, you will be able to see that it is true. So, as usual, it's not clear why "CKnowledge" would be more valuable than true belief, when seeing that it is true is compatible with both. So yeah, it would be the same.

  • @zhaoli4608
    @zhaoli4608 1 year ago +1

    For some people, truth is utility and nothing else. If something is not useful for them, it might as well not exist.

  • @jonathanmitchell8698
    @jonathanmitchell8698 1 year ago +2

    I have a few thoughts - I may just be lacking nuance on the distinctions though:
    1. It seems to me that the concept of "belief" in itself involves justification. I don't think a claim would be considered a belief if it existed without at least the justification of "feeling" true. Surely, belief must involve more than simply saying the words that comprise a proposition. At the very least, it seems that belief should involve a mental model that incorporates the conceptual objects in the proposition into a larger structure. But it seems that we would require something more to call this a belief - an actor can create a mental model of their role in a play and the world they are acting out, while still distinguishing between that model, and the mental models they identify as beliefs. Whatever it is (e.g. the feeling of "trueness") that distinguishes a mental model from a mental model that is identified as a belief, seems to me to be nearly synonymous with "justification" (or maybe, "justification" is the label we put on the thing that evoked the feeling that comprises this distinction).
    2. It seems that everything eventually grounds out in beliefs that we accept on the basis of mere feeling. What is categorically different about: 1. someone feeling strongly that 1+1=2 and accepting that this must be true, or 2. someone feeling strongly that the axioms of set theory are true and the rules of logic are true, and then manipulating these axioms to arrive at the conclusion that 1+1=2?
    3. As I argued before, a belief seems to at least involve a mental model. You can't really have a belief about something without framing it or describing it within the context of other mental objects. It seems like the objects within this model at the very least influence each other. If I believe that it is going to rain tomorrow, I have to have a concept of what "rain" is and what "tomorrow" is, and if I were to be convinced that rain doesn't actually exist, then it seems my belief must change (even if I maintain the proposition, it seems that the underlying model - how the objects relate to one another - must change, assuming I was truly convinced of the alteration to my model). Is this construction of conceptual objects not justification in itself? If the constituent objects affect the belief, are these objects not justification?

    • @jonathanlochridge9462
      @jonathanlochridge9462 1 year ago

      3. I think that you could consider justification that comes wholly from assumptions and definitions that are unconscious to be justified. But I think there is a notable difference between a justification you can actually recall or follow and one you can't, where you make up a different rationalization that doesn't reflect the original reasons.
      I do agree that many beliefs come down to our unconscious definitions. Although, when we explore semantics and definitions further it can help us refine our beliefs by a lot.
      Your particular example is predictive. And to an extent, I think that biases the reasoning towards being more explicit, except perhaps if it comes down to intuition. The phenomenon of "deja vu" is commonly invoked when people think something will happen intuitively, or when a situation feels familiar without their having any idea why. So I think it is definitely possible for us to feel something will happen without having any rational reason why. We could assume that there is some underlying reason, though.
      From what I have read, humans are really good at pattern recognition. Particularly when trained. So, I think part of intuition is pattern recognition. So we see something that fits a pattern we know. So, we believe that the pattern might be completed. If you try to look at why a pattern exists you might be able to find a reason with some thought. In other cases, we might not be able to tell why we feel like there is a pattern.
      I currently believe that intuition mostly works as a form of fast thinking that applies heuristics to quickly give us information about what things are, what is true, and what things mean. Sometimes the heuristics are a good approximation. Sometimes they are a bad one.
      You may have beliefs about what rain is. But your brain instinctively labels things with the easiest and closest matching word. Our perception is affected by our instincts, although we can rationally control and affect our perception. If we observe something more closely, we can think of multiple ways to categorize it. So, to an extent, we have the ability to take more time to figuratively see things from a different perspective.
      I think "feelings" are distinct from but related to emotions, and mostly derived from our senses and the built-in ways we have of making sense of what we see.
      I also think that language is not strictly required for our perception. But, that perception is still deeply tied to language. I have direct experience of observing things and recognizing that it doesn't directly fit into any word I know. Although, I can still use imprecise words to talk about it.
      Now, in terms of definition: "Instinct" vs. "Intuition" are similar but different. Instinct implies that we don't learn it. But, that it is "hard-wired" into us. That when certain conditions are met, we do a particular thing. Sort of like an program with an IF statement. Although, in many cases they may be derived from chemical or biological processes.
      Like sweat uses the physical phenomena of evaporation to cool us down. Since that is grounded in physical mechanisms. It is feasible that the cells responsible for creating sweat need no co-ordination. And that heat results in a chain reaction of processes that ends in sweat being created.
      Instinct seems to imply some sort of thinking or nervous activity. Like you have sensors(nerves around the body), Or genetic triggers, that are activated and result in a coordinated set of responses or that try to push a thinking mind towards doing a particular thing. Which would imply instinct isn't automatic, but it is still condition based.
      From a certain view, any automatic response to a stimuli or physical condition could be seen as an "instinct" but that is a bit vague. And there are a lot of levels something could be happening on if something seems automatic.
      Emotions are also fairly physically grounded. But they can be triggered by a lot. They may affect perception and reasoning, but they aren't specifically about reality or perception the way "feelings" are in my view. Although "feelings" is used as a word for emotions too, particularly as we can sense physical effects of our emotions, like a tightening in the chest or of the muscles under stress. Crying, laughing, facial expressions, etc.
      I don't know if our "feeling that something is true" underpinning a belief is connected to or different from the fundamental processes of perception.
      I think it is feasible that they are. But, I don't know. They could also be connected to what we want to believe.
      To an extent: "I have a feeling this is true" could be seen as a justification. But we don't know how good of a justification that is. (Assuming a good justification is more likely to be true. Or alternatively any justification that only leads to good beliefs is true/good.)
      It might even be different in different areas or for different people. Maybe some people have particularly good intuitions. And some people have particularly bad ones.
      Which then leads to a question of "why". If that is a difference then that is something that can be tested or observed.
      (Although, to an extent the validity of that is dependent on the validity of your own perception)
      Is it possible to have good perception but bad intuition?
      Such as if you were good at matching up what you see to what it is.
      But your intuitive heuristics for doing things with those observations were flawed? Or vice versa?
      Maybe you could have someone who can instinctively apply a logic and be right a lot of the time with the right starting information, but their brain doesn't categorize the information for forming their assumptions very well? So, as long as they think about what they are perceiving, they will come to mostly true conclusions intuitively. But they make bad assumptions or categorizations without thinking?
      Or to generalize it more. What if someone doesn't know the words to describe something they are observing. But they are still able to develop an intuition for it? Is that possible? Or not?
      Or perhaps they have to make new words unique to them to be able to label the concepts well enough to understand it. But, their linguistic model wouldn't be understandable to someone else unless they teach from first principles and observations.
      I hope I didn't overwhelm you.

  • @GodisgudAQW
    @GodisgudAQW 4 months ago

    Just because there are cases where a false belief is a perfectly adequate replacement for a true belief doesn't mean that true belief is always replaceable.
    An example I thought of is belief in God. If one exists, true belief would be really important as it would be the only way to avoid hell. It is precisely your belief that would save you from hell, and if God exists, the only belief regarding the entity's existence that would be useful would be true belief.
    More generally, even if we take truth to have only instrumental value, not all utility can be derived by some false belief.
    I'm tempted to say truth will always help in reaching our goals, and false beliefs usually won't, so that explains why truth is valuable. Of course, I am aware of the problem of subversive truths. I can deal with that in a later post, but my overall sentiment is that false beliefs can sometimes be better than true beliefs, but true beliefs always have utility, whereas the same usually doesn't hold for false beliefs.
    As for the swamping problem itself, I believe knowledge just is true belief. Justification is a tool of persuasion for self and others. The swamping problem is just a demonstration of why our definition of knowledge (true belief) ought to be separated from the means of acquiring it (justification).
    As for why beliefs are useful: even simple organisms have beliefs, but they are basic. They feel hungry, and so they move in a way that they believe will help them acquire more calories. We might say they don't have beliefs, but I think it's more that they have rudimentary goals and don't deliberate much on their beliefs.
    My intuition is that beliefs motivate action. It's not really possible to act toward a goal without belief.

  • @DanuuJl
    @DanuuJl 1 year ago +1

    A justified false belief wouldn't always be as useful as a justified true belief. Indeed, you can transform reality with false beliefs too, but sooner or later you need to revise your old false beliefs in order to move further in your interaction with the world.
    A useful false belief may be sufficient in some cases, but in others it won't be. Remember those courageous medieval people who thought a man could fly exactly like a bird; they just crashed and died in their flying experiments.
    Even if a false belief is as useful as a true one for you and nothing shows you are wrong, it doesn't mean the same will hold in all cases for all of us.

  • @ostihpem
    @ostihpem 1 year ago +1

    Justification is empty as long as it is not directed towards truth.

  • @Michelle_Wellbeck
    @Michelle_Wellbeck 1 year ago +5

    richard rorty has entered the chat

  • @grubfoot5707
    @grubfoot5707 11 months ago

    Just thinking about the Borges story. If we take a fractal then often the perfect representation, the function used to draw the pattern, is going to be simpler than the pattern drawn out in incredible detail.
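The fractal point can be made concrete (a sketch of my own, not from the video): the rewrite rule that generates the Koch curve is eight characters long, yet the string it unfolds into after a few iterations runs to thousands of characters. The perfect representation really is simpler than the pattern it draws.

```python
# Koch curve as an L-system: the generating rule is tiny,
# but the pattern it draws out grows exponentially in detail.

RULE = {"F": "F+F--F+F"}  # each forward step splits into four

def expand(axiom: str, iterations: int) -> str:
    """Apply the rewrite rule `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(RULE.get(ch, ch) for ch in s)
    return s

pattern = expand("F", 5)
print(len(RULE["F"]), len(pattern))  # prints: 8 2388
```

Five iterations turn a one-symbol axiom into a 2388-symbol drawing program; the "map" (the rule) stays eight characters no matter how detailed the "territory" gets.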

  • @italogiardina8183
    @italogiardina8183 1 year ago

    A way to view the swamping problem is through attribution and information processing within a group structure; that is, attributing behaviour to underlying essences of persons. Individuals are then perceptually distinct, but highly prototypical members are the most informative about issues that concern the group. It's plausible that the most prototypical members (management) tend to swamp all members with knowledge claims, by token of being disproportionately influential. A standard member is more likely to listen to someone with a doctorate than a degree, for example, or, by social power, to the CEO who hires or fires subordinates. This leadership, qua function of charisma or social attraction, arguably leads to attribution error.
    The error occurs when knowledge functions through the group. As self-categorisation within a prototype emerges, knowledge anchors to a person who represents the prototype through consensus. So in this sense the most prototypical member within a group ensures compliance with knowledge claims, at least within the in-group, when it comes to general conceptual schemas as opposed to universals like 'tree'. An example is that certain ontological science schemas can be interpreted within the context of a plethora of world religions. A religion may be based on in-group prototypes with terms like Christian Science or Hindutva yoga science, as reduced from the historical canons of the religious prototype. Further evidence of the leader-to-member swamping problem can be observed as certain members who become hypersensitive to this swamping react through introversion, by way of self-isolation, which in a sense is a function of individualism. Personally, travel to some radically distinct culture can be a nice escape from cultural swamping of the mind.

    • @jonathanlochridge9462
      @jonathanlochridge9462 1 year ago

      Interesting point. You use so much technical language and jargon, though, that it is kind of hard to follow and figure out exactly what you are saying.
      The topic and overall flow are still pretty interesting to me.
      I do agree that modelling the swamping problem as arising from group structures is an interesting lens. This seems to be a fairly scientifically dependent question. (Although that is relative to your overall epistemology.)
      I think you could design a social studies experiment to see if these plausible possibilities are actually happening as you think they might.
      Philosophically, it seems you are coming up with a reason or justification for why we adopt the false beliefs of authority figures or engage in particular logical fallacies. Proving your model is the actual reason might be difficult, but it seems feasible to me. You could try to get an idea of how well the existing evidence backs up your proposal if you wanted.

  • @deepfritz225
    @deepfritz225 1 year ago +6

    Based + redpilled

    • @cunjoz
      @cunjoz 1 year ago

      justified + true

  • @ostihpem
    @ostihpem 1 year ago +1

    True belief is useless because from your POV it could just as well be a false belief, since you can't tell the difference yet. You are at a stalemate; it's 50/50. Act on such a premise and you are in for a rough time. If, on the other hand, you have access to the truth of your belief, it gives you reliability and opens your playbook. This is why we strive for justification (access to the truth of a belief).
    Of course, the irony is that we (probably) cannot have any perfect justification, which means we cannot have any justification at all (think of a coin flip, where anything(!) is non-perfectly justified: the probability laws, your perception, the setting, and of course the usual intangibles like the coin flip itself, which would lead back to another 50/50 stalemate). So that's a dead end, and we need to improvise to get at least some useful information about the quality of our beliefs. This is the beginning of fallible knowledge: good reasons, field experts' opinions, repeated experience, pragmatism, science, etc. It is very risky, but so far we are doing fine overall. We have no alternative anyway. Anything is better than mere belief.

  • @masterlikesmargarita
    @masterlikesmargarita 1 year ago

    Dutant, J. (2015). The legend of the justified true belief analysis. Philosophical Perspectives, 29(1), 95-145.
    I would recommend reading this paper. I really do not agree that knowledge is, or ever really was, considered to be justified true belief.
    If I were to argue, I would say that knowing something is just inherently different from believing something; insofar I find it even contradictory to try to define knowledge as any sort of belief.
    That is also why Gettier cases don't work as intended: the victims are always wrong about why something is as they believe, because they believe it out of ambiguity. If I see something structured like a building in the distance, I wouldn't say I know it to be a building; if someone were to argue that it could also just be a fake reconstruction, I would have no way to argue against that just from seeing it from my perspective. I would have to go and check it out.
    And to give an example of knowledge in a sense: in Gettier's first case, Smith knows that the man who gets the job has 10 coins in his pocket insofar as that would have been the case if Jones had gotten the job. In that case it would have been certain, because that's how logic works, haha. But it does not need to be this case. And it would be ridiculous to argue that Smith would in fact only believe, though with 100% justification, that if that had been the case, then the man who got the job would have had 10 coins.
    It is a messy frame to argue from once you entangle both of these actions, knowing and believing that is. The article I cited at the beginning I find to be an interesting historical recollection.
    PS: The value of knowledge, I would argue, simply lies in the fact that once you know, you can argue your case, or rather argue against alternatives ('dialectical efficiency', I think it would be called). So in a sense I agree with you that the value of knowledge just lies in it being knowledge, or in the justification. And I know that one can also argue false beliefs really well without being a scammer, but in those cases we move to another sort of level of statement, a level where I would argue there isn't really much to know. I can know whether something is a building or not, but I will probably never know what the core principles of the universe are... I don't know...

  • @josebolivar4364
    @josebolivar4364 ปีที่แล้ว

    Hey, wake up! Kane B just posted!

  • @jonathanlochridge9462
    @jonathanlochridge9462 1 year ago

    Interesting video.
    At minute 16 you claim that if you thought one of your beliefs was false, you would give it up.
    However, what if you know something is false, but you don't know what alternative belief is true, so you pick a false belief that is instrumentally useful to you?
    You essentially treat it as if it were true, because you don't have a better alternative.
    Arguably, this could apply to many scientific beliefs. Or the use of a model like Newtonian Physics.
    Essentially, metaphorically, you seem to be claiming that the map is not the object it represents, and that if it were, it actually wouldn't be useful.
    To an extent this implies that a falsehood might potentially be more useful than a truth if we can't understand the whole truth, due to a human limit such as time or memory?
    Although, I would argue that if we collectively had a perfect copy in some sense, that would still be very useful. On the one hand, that is kind of impossible because the world is always changing?
    On the other hand, you could just make a model that uses equations to perfectly describe how something changes. But that to an extent implies that a form of determinism exists if you can create a perfect model of everything across all times. Even that possibility might come across physical limits as well. Presumably this perfect model might be able to be encoded and compressed into a form of information. This also seems to imply that a recursion issue might happen. Does the book listing all knowledge also list itself?
    The idea of a digital clone of a particular area of reality is actually gaining traction, alongside a push for greater surveillance of people and objects. It is one of the goals behind the "Internet of Things" and, more broadly, "smart cities": if you have a digital twin of a company, city, or maybe even a nation, then you can simulate how the actions of the powerful would impact that society, or have a real-time record of conditions within a society or company so that problems can be addressed. Long term, the hope is to use AI algorithms to handle this massive amount of data when the model becomes far too complex for a human to handle and analyze. Obviously it will fundamentally be an incomplete clone and will have areas where it is wrong, since simulating absolutely everything is infeasible. However, such a clone would still be very useful to those with power and access to it.
    That also leads to the potential instrumental danger of knowledge differences. To an extent, someone else's knowledge might be used to do harm.
    I like your final claim in some ways. Although I personally view truth itself as also being a good, I accept your proposal that justification to an extent matters more than truth.
    But fundamentally me valuing truth is subjective. It is kind of connected to some other beliefs I have though. Most notably my religious beliefs. But, that would be a much longer topic to go into.
    And I also think that an over-valuing of truth could lead to things that I instrumentally view as bad. Some knowledge can be used to directly do harm. So, if someone who wants to do harm learns that knowledge that can be dangerous. The ethical questions of how to handle that are difficult and complex though.
    Another angle I think is interesting is approaching truth from the perspective of approximation or incompleteness, particularly when we look at science.
    If you have a false model/belief, that means that something about it is in conflict with reality (assuming some form of reality exists).
    If you observe the world/reality and see something that contradicts the model, then you know that at least part of it isn't true.
    However, if you know when the model is correct and when it isn't, then so long as that meta-belief is itself correct, the negative effects of the false belief are mitigated. You could potentially call the model "partially true".
    Now, partial truth doesn't make much sense in a lot of logical frameworks. However, I think you could argue that the meta-knowledge that a model is sometimes correct and sometimes false can itself be viewed as a true belief. Overall, I think a view that includes allowances for this sort of thing is at least somewhat necessary to really trust in science if you aren't a scientist, since the most accurate models we know of are very complex. Believing that "partial truth" in this sense has no value means that learning about a lot of things is pointless for a non-scientist. I am pretty sure there is a better philosophical technical term for this, though. I also know some perspectives take an inherently probabilistic view of epistemology, which in my view kind of leads into instrumental epistemology too.

  • @DeadEndFrog
    @DeadEndFrog 1 year ago

    nice beard, (edit) the rest of the video is good too

    • @KaneB
      @KaneB  1 year ago

      Thanks

  • @ashkenassassin7219
    @ashkenassassin7219 1 year ago

    The problem is, if you were to take all beliefs as false but useful, this is going to involve you identifying what is useful, which involves finding out the truth about what is useful. So someone who holds this view would be in some sort of self-contradiction and/or self-refutation.

  • @realSAPERE_AUDE
    @realSAPERE_AUDE 1 year ago

    @Kane B if you believe *x*, does that mean you believe *x* is true? Is that why you said you’d likely give up a belief if you discovered it’s false?

    • @KaneB
      @KaneB  1 year ago +2

      That's probably the standard view, yeah -- that to believe that P is to take it that P is true. I'm not so sure about this, though; it seems easy enough to me to hold false beliefs.

    • @realSAPERE_AUDE
      @realSAPERE_AUDE 1 year ago

      @@KaneB what does it mean to believe *P* if not to believe that *P* is true? I sort of thought that having a false belief would just mean that you believe *P* to be true yet *P* is in fact not true.

    • @KaneB
      @KaneB  1 year ago +1

      @@realSAPERE_AUDE One way this could work is due to a kind of compartmentalization. So take "the sky is blue". I might believe that the sky is blue, then I reflect on the science and philosophy of colour, and come to the conclusion that it is false that the sky is blue. But then, when I'm not actively reflecting on the challenges to this claim, I find myself believing it again. This is how Hume describes his response to skeptical arguments: in the philosophy seminar room, they are utterly compelling, but then when he leaves and starts playing billiards with his friends, all the beliefs that were challenged by such arguments come flooding back. I might even be in a state where: (1) I believe that the sky is blue and (2) I remember that there are compelling arguments against the belief that the sky is blue but (3) I am not currently running through such arguments so they do not dislodge my belief.

    • @realSAPERE_AUDE
      @realSAPERE_AUDE 1 year ago

      @@KaneB interesting; I’ll have to think about that more. Thanks for answering my questions again!

    • @realSAPERE_AUDE
      @realSAPERE_AUDE 1 year ago +1

      @@KaneB perhaps our awareness is too narrow to actually have overarching beliefs in a meaningful sense, yet we have some illusion that we have some set of stable beliefs and stances when we aren’t directly thinking about them... is that close to what you may be trying to hint at, or am I way off?

  • @TheGlenn8
    @TheGlenn8 1 year ago

    Where did that beard come from!?

  • @evinnra2779
    @evinnra2779 1 year ago

    Have you ever gotten lost because of using Google Maps? This is an AI problem: the impossibility of AI figuring out for itself the difference between justifiable true belief and knowledge. In the Platonic view, knowledge is supposed to be more than true belief of knowing that something is true. Knowledge is more because it is tested and practiced; true beliefs are the things that the mind knows, and so the term for them becomes knowledge. Knowledge is something more, something additional to the information related by a true belief: it is an INNATE sense of an additional something. The problem for AI is that it can only do what it is programmed to do, so Google Maps will suggest the most frequently used routes, or routes that don't seem to have any present traffic problems, but Google Maps has no knowledge of what is the most justifiable true belief in any given traffic situation, simply because there is no innate sense in it with which it could choose. If you ask a robot to clean the room, the robot in its infinite wisdom and with great efficiency may decide to sweep the rubbish under the carpet.

  • @whocares2387
    @whocares2387 1 year ago

    Do a video about how nature can't be categorised into objects, as everything is not mutually exclusive.

    • @KaneB
      @KaneB  1 year ago

      I have a video which defends an antirealist approach to classification, which might be similar to what you're asking for: th-cam.com/video/arDbrM27s4s/w-d-xo.html

  • @ostihpem
    @ostihpem 1 year ago

    Logically, a true belief is superior to a false belief because it reduces complexity. Falsehood leads to ex falso, so it doesn't reduce anything.

  • @mustyHead6
    @mustyHead6 1 year ago

    beard doesn’t suit you

    • @KaneB
      @KaneB  1 year ago +4

      I don't care. It's a product of laziness, not an aesthetic choice.

    • @spongbobsquarepants3922
      @spongbobsquarepants3922 1 year ago

      @@KaneB When you had long hair, you had an especially good look. Long hair is also a product of laziness, so you could also grow that out again and save some money.