Dr. JEFF BECK - The probability approach to AI

  • Published Dec 24, 2024

Comments • 80

  • @jordan13589
    @jordan13589 1 year ago +20

    Wrapping myself in my Markov blanket hoping AGI pursues environmental equilibrium 🤗

  • @SymEof
    @SymEof 1 year ago +16

    One of the most profound discussions about cognition available on YouTube. Truly excellent.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  1 year ago +1

      @@webgpu podcasters.spotify.com/pod/show/machinelearningstreettalk/episodes/DR--JEFF-BECK---THE-BAYESIAN-BRAIN-e2akqa1

  • @Blacky372
    @Blacky372 1 year ago +2

    Thanks! I am grateful that such great content is freely available for everyone to enjoy.

  • @BrianMosleyUK
    @BrianMosleyUK 1 year ago +5

    I've been starved of content from this channel, this is so satisfying!

  • @neurosync_x
    @neurosync_x 8 months ago

    🎯 Key Takeaways for quick navigation:
    00:00 *🧠 Brain's Probabilistic Reasoning*
    - The brain's implementation of probabilistic reasoning is a focal point of computational neuroscience.
    - Bayesian brain hypothesis examines how human and animal behaviors align with Bayesian inference.
    - Neural circuits encode and manipulate probability distributions, reflecting the brain's operations.
    02:19 *📊 Bayesian Analysis and Model Selection*
    - Bayesian analysis provides a principled framework for reasoning under uncertainty.
    - Model selection involves choosing the best-fitting model based on empirical data and considered models.
    - Occam's razor effect aids in selecting the most plausible model among alternatives (see the sketch after this list).
    06:13 *🤖 Active Inference Framework*
    - Active inference involves agents dynamically updating models while interacting with the environment.
    - It incorporates optimal experimental design, guiding agents to seek the most informative data.
    - Contrasts traditional machine learning by incorporating continuous model refinement during interaction.
    09:34 *🌐 Universality of Cognitive Priors*
    - Cognitive priors shape cognitive processes, reflecting evolutionary adaptation and cultural influences.
    - The debate on universal versus situated priors explores the extent to which priors transcend specific contexts.
    - Cognitive priors facilitate rapid inference by providing a foundation for reasoning and decision-making.
    14:20 *💭 Epistemological Considerations*
    - Science prioritizes prediction and data compression over absolute truth, acknowledging inherent uncertainty.
    - Models serve as predictive tools rather than absolute representations of reality, subject to continuous refinement.
    - Probabilistic reasoning emphasizes uncertainty and the conditional nature of knowledge, challenging notions of binary truth.
    19:11 *🗣️ Language as Mediation in Communication*
    - Language serves as a mediation pattern for communication.
    - Communicating complex models involves a trade-off between representational fidelity and communication ability.
    - Grounding models in predictions facilitates communication between agents with different internal models.
    22:03 *🌐 Mediation through Prediction*
    - Communication between agents relies on prediction as a common language.
    - Interactions and communication are mediated by the environment.
    - The pragmatic utility of philosophy of mind lies in predicting behavior.
    24:24 *🧠 Materialism, Philosophy, and Predictive Behavior*
    - The pragmatic perspective in science prioritizes prediction over philosophical debates.
    - Compartmentalization of beliefs based on context, such as scientific work versus personal philosophy.
    - Philosophy of mind serves the practical purpose of predicting behavior.
    29:46 *🧭 Tractable Bayesian Inference for Large Models*
    - Exploring tractable Bayesian inference for scaling up large models.
    - Gradient-free learning offers an alternative approach to traditional gradient descent.
    - Transformer models, built on the self-attention mechanism, fall within the class amenable to gradient-free learning.
    36:56 *🎓 Encoding representations in vector space*
    - Gradient-free optimization and the trade-off with limited model accessibility.
    - The importance of Autograd in simplifying gradient computations.
    - Accessibility of gradient descent learning for any loss function versus limitations of other learning approaches.
    39:18 *🔄 Time complexity of gradient-free optimization*
    - Comparing the time complexity of gradient-free optimization to algorithms like Kalman filter.
    - Discussion on continual learning mindset and measurement of dynamics over time.
    40:19 *🧠 Markov blanket detection algorithm*
    - Overview of the Markov blanket detection algorithm for identifying agents in dynamic systems.
    - Explanation of how dynamics-based modeling aids in identifying and categorizing objects in simulations.
    - Utilization of dimensionality reduction techniques to cluster particles and identify interacting objects.
    43:10 *🔍 Emergence and self-organization in artificial life systems*
    - Discussion on emergence and self-organization in artificial life systems like Particle Lenia.
    - Exploration of the challenges in modeling complex functional dynamics and the role of emergent phenomena.
    - Comparison of modeling approaches focusing on bottom-up emergence versus top-down abstraction.
    49:02 *🎯 Role of reward functions in active inference*
    - Comparison between active inference and reinforcement learning in defining agents and motivating behavior.
    - Critique of the normative solution to the problem of value function selection and the dangers of specifying reward functions.
    - Emphasis on achieving homeostatic equilibrium as a more stable approach in active inference.
    52:20 *🛠️ Modeling levels of abstraction and overcoming brittleness*
    - Discussion on modeling different levels of abstraction in complex systems and addressing brittleness.
    - Exploration of emergent properties and goals in agent-based modeling.
    - Consideration of the trade-offs in modeling approaches and the role of self-organization in overcoming brittleness.
    55:08 *🏠 Active inference and homeostasis*
    - Active inference involves steering emergent systems towards target macroscopic behaviors, often resembling homeostatic equilibrium.
    - Agents are imbued with a definition of homeostatic equilibrium, leading to stable interactions within a system.
    - Transitioning agents from a state of homeostasis to accomplishing specific tasks poses challenges in maintaining system stability.
    56:34 *🔄 Steerable multi-agent systems*
    - Gradient descent training on CNN weights can produce coherent global outputs, illustrating macroscopic optimization.
    - Outer loops in multi-agent systems steer agents toward fixed objectives without resorting to traditional reward functions.
    - Manipulating agents' internal states or boundaries can guide them to perform specific tasks without disrupting system equilibrium.
    59:00 *🎯 Guiding agents' behaviors*
    - Speculative approaches to guiding agents' behaviors include incorporating desired tasks into their definitions of self.
    - Avoiding brittleness in agent behaviors involves maintaining flexibility and adaptability over time.
    - Alternatives to altering agents' definitions of self include creating specialized agents for specific tasks, akin to natural selection processes.
    Made with HARPA AI
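
    A minimal sketch of the Occam's razor effect flagged at 02:19, assuming a simple coin-flip comparison (the setting and numbers are illustrative, not from the episode): Bayesian model selection compares marginal likelihoods, and the flexible model pays an automatic complexity penalty because its prior spreads over many parameter values.

        from math import log, lgamma

        # Compare two models of n coin flips with h heads:
        #   M1: the coin is exactly fair (no free parameters)
        #   M2: bias p ~ Uniform(0, 1) (one free parameter, integrated out)

        def log_ml_fair(h, n):
            # Marginal likelihood of a particular flip sequence under M1: 0.5^n.
            return n * log(0.5)

        def log_ml_uniform(h, n):
            # Integrating the likelihood over the uniform prior gives the
            # Beta function: integral of p^h (1-p)^(n-h) dp = B(h+1, n-h+1).
            return lgamma(h + 1) + lgamma(n - h + 1) - lgamma(n + 2)

        for h, n in [(5, 10), (9, 10)]:
            log_bf = log_ml_fair(h, n) - log_ml_uniform(h, n)
            print(f"{h}/{n} heads: log Bayes factor (fair vs. flexible) = {log_bf:+.2f}")
        # 5/10 heads -> +1.00: unremarkable data favor the simpler model.
        # 9/10 heads -> -2.23: only strong evidence justifies the extra parameter.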

  • @luke.perkin.online
    @luke.perkin.online 1 year ago +9

    I really enjoyed this. Great questions, if a little leading, but Beck was just fantastic in answering, thinking on his feet too. The way he framed empiricism, prediction, models... everything, it's just great! And then to top it off he's got the humanity, the self-awareness of his Quaker/Buddhist trousers (gotta respect that maintaining hope and love are axiomatic for sanity during the human condition) without any compromise on the scientific method!

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  1 year ago +2

      Epic comment as always, cheers Luke! Dr. Beck was brilliant!

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 1 year ago +3

      The best thing about the Quaker Buddhist pants is that there is none of this belt and zipper BS. Loose fit, draw string, modal cotton, and total freedom...

    • @luke.perkin.online
      @luke.perkin.online 1 year ago

      Indeed! The freedom to pursue those purposes that create meaning and value in our lives, perhaps even more fulfilling than just scratching our paleolithic itches. Strong narratives anchor us, inspire us, yet trap us; it's such a pickle!

  • @Lolleka
    @Lolleka 1 year ago +2

    I switched to the Bayesian framework of thinking a few years ago. There is no coming back; it is just too good.

  • @Daniel-Six
    @Daniel-Six 11 months ago +2

    Love listening to Beck riff on the hidden melodies of the mind. Dude can really shred the scales from minute to macroscopic in the domain of cognition.

    • @gavinaustin4474
      @gavinaustin4474 7 months ago +1

      He's changed a lot since his days in the Yardbirds.

    • @Daniel-Six
      @Daniel-Six 7 months ago

      @@gavinaustin4474 😁

  • @siarez
    @siarez 1 year ago +4

    Great questioning Tim!

  • @eskelCz
    @eskelCz 1 year ago +2

    What was the name of the cellular automata "toy" he mentioned? Particle Len... ? :)

  • @35hernandez93
    @35hernandez93 1 year ago +4

    Great video, although the volume was a bit low

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  1 year ago +2

      Thanks for letting me know, I'll dial it up on the audio podcast version

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  1 year ago

      podcasters.spotify.com/pod/show/machinelearningstreettalk/episodes/DR--JEFF-BECK---THE-BAYESIAN-BRAIN-e2akqa1

  • @ChristopherLeeMesser
    @ChristopherLeeMesser 1 year ago +2

    Interesting discussion. Thank you. Does anyone have a reference on the Bayesian interpretation for self-attention in transformers?

  • @Blacky372
    @Blacky372 1 year ago +3

    Great talk! Thank you very much for doing this interview. One minor thing: I would have preferred to hear Jeff's thoughts flow for longer without interjections in some parts of the video.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  1 year ago +1

      Thanks for the feedback, I think I did get a little bit overexcited due to the stimulating conversation. Cheers 🙏

    • @Blacky372
      @Blacky372 1 year ago

      @@MachineLearningStreetTalk I can't blame you. The ideas discussed left me with a big smile and a list of new topics to read about. You both are absolute legends!

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 1 year ago

      I thought it was perfectly executed. As someone who has worked with Jeff for many years, I can attest that it's quite risky to give him free rein in this setting.

  • @rastgo4432
    @rastgo4432 1 year ago +2

    Very awesome, hope the episodes get even longer ❤

  • @dr.mikeybee
    @dr.mikeybee 1 year ago

    How would a non-gradient-descent method like a decision tree be used to speed up learning? Is there a way to "jump" from what we learn from a decision tree to updating a neural net? Or is the idea that an agent can use evolutionary algorithms, swarm intelligence, Bayesian methods, reinforcement learning methods like Q-learning and policy gradients, heuristic optimization, and decision tree learning as part of its architecture? And if so, where is the learning taking place? If there is no update to a model, are we learning by storing results in databases? Updating policy networks, etc.?

    • @backslash11
      @backslash11 1 year ago +1

      No easy jump. It's hard to actually update all the weights of a large neural net, but as seen with LLMs, you can teach them quickly to an extent by just adding pretext. Pretext can become huge with techniques such as dilated attention, or it can be compressed and distilled to cram more info in there. This priming can persist as long as needed, basically becoming a form of semi-permanent learning. I'd imagine in the future, the billion-dollar training will just form the base predictive abilities of a network, but other non-gradient-descent methods will be fed in later as priming, and managed as if it were a base of newly learned knowledge. Once in a while the entire network could be updated, incorporating that knowledge into the base model.

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 1 year ago +4

      The relevant Google search terms are 'coordinate ascent' and 'Variational Bayes' and 'conjugate priors'. The trick is extending coordinate updates to work (approximately in some cases) with models that are not just simple mixtures of exponential family distributions.
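
      For the flavor of those search terms in code, a minimal sketch (an illustration of coordinate ascent with conjugate priors, not Jeff's implementation): variational Bayes for a two-component Gaussian mixture with unit variance, fixed equal weights, and conjugate Gaussian priors on the component means. Every update is a closed-form coordinate step; no gradient is computed anywhere.

          import numpy as np

          rng = np.random.default_rng(0)
          x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

          m = np.array([-1.0, 1.0])   # variational means of the two components
          s2 = np.array([1.0, 1.0])   # variational variances of those means
          prior_var = 10.0            # conjugate prior N(0, prior_var) on each mean

          for _ in range(50):
              # Coordinate step 1: responsibilities r[n, k], proportional to
              # exp(E[log N(x_n; mu_k, 1)]) under the current q(mu_k).
              logits = x[:, None] * m[None, :] - 0.5 * (m**2 + s2)[None, :]
              r = np.exp(logits - logits.max(axis=1, keepdims=True))
              r /= r.sum(axis=1, keepdims=True)
              # Coordinate step 2: conjugacy makes q(mu_k) Gaussian in closed form.
              Nk = r.sum(axis=0)
              s2 = 1.0 / (1.0 / prior_var + Nk)
              m = s2 * (r * x[:, None]).sum(axis=0)

          print(m)  # approx [-2, 3]: the component means, recovered gradient-free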

  • @ntesla66
    @ntesla66 1 year ago +1

    That was truly eye-opening... the epiphany I had around the 45-minute mark was that there are two schools of approach in training, just like the two schools of physics: the general relativists, whose mathematical foundations are in tensors and linear algebra, and the quantum physicists, founded in the statistical. The one a vector approach needing a coordinate system, and the other using Hamilton's action principle. Tensors or the calculus of variations.

  • @LotaMatanović
    @LotaMatanović 1 year ago

    Dr. Beck said that "you can use gradient descent learning for any loss function," which is not right. We can use gradient descent only for loss functions that are differentiable.
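
    A tiny sketch of why the correction matters (finite differences stand in for autograd here; everything is illustrative): on a differentiable loss the gradient points somewhere, while the 0-1 loss is flat almost everywhere, so gradient descent never moves. This is why practice substitutes differentiable surrogates such as cross-entropy for the 0-1 loss.

        def grad(f, w, eps=1e-6):
            # Central finite difference, standing in for automatic differentiation.
            return (f(w + eps) - f(w - eps)) / (2 * eps)

        squared = lambda w: (w - 3.0) ** 2       # differentiable: informative gradient
        zero_one = lambda w: float(w < 3.0)      # step function: zero gradient a.e.

        for loss, name in [(squared, "squared"), (zero_one, "0-1")]:
            w = 0.0
            for _ in range(100):
                w -= 0.1 * grad(loss, w)
            print(f"{name} loss -> w = {w:.3f}")
        # squared loss -> w = 3.000 (converges); 0-1 loss -> w = 0.000 (stuck)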

  • @marcinhou
    @marcinhou 1 year ago +2

    is it just me or is the volume really low on this?

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  1 year ago +1

      podcasters.spotify.com/pod/show/machinelearningstreettalk/episodes/DR--JEFF-BECK---THE-BAYESIAN-BRAIN-e2akqa1 sorry about that, fixed on pod

  • @ffedericoni
    @ffedericoni 1 year ago

    Epic episode! I am already expanding my horizons by learning Pyro and Lenia.

  • @svetlicam
    @svetlicam 1 year ago

    The brain is not Bayesian inference. Electrical potentials in neurons are not excited by the correlational probability of input impulses; rather, it comes down to very tiny electrical charges and to what gets excited, what does not, or what is just current dispersal. Maybe this dispersal works on principles of Bayesian inference, which may in the long term add up to some pruning of synapses, but mostly it is a kind of background noise.

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 1 year ago +1

      You're thinking hardware. The Bayesian brain hypothesis is a cognitive construct. Humans and animals behave as if they are reasoning probabilistically. My laptop also reasons probabilistically even though it's basically a deterministic calculator; I've just programmed it to do so. Nature has likely done something similar, if for no other reason than the fact that failing to rationally take uncertainty into account can have undesirable consequences. See the Dutch Book theorem.
      That said, one could make an argument that, at biologically relevant spatio-temporal scales, the laws of physics are Bayesian. Niven 2010 and Davis and Gonzalez 2014 have very nice derivations of the equations of statistical physics/thermodynamics from purely information-theoretic, i.e. Bayesian, considerations.
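
      Since the Dutch Book theorem carries the argument here, a toy worked example (the numbers are illustrative): an agent whose betting prices violate the probability axioms, pricing both P(A) and P(not A) at 0.6, will accept a bundle of bets that loses in every possible outcome.

          # Incoherent prices: P(A) + P(not A) = 1.2 > 1.
          p_A, p_not_A = 0.6, 0.6
          stake = 1.0

          # The agent treats p * stake as a fair price for a bet paying `stake`
          # if the event occurs, so it willingly buys both bets.
          cost = (p_A + p_not_A) * stake
          for outcome in ("A", "not A"):
              payout = stake  # exactly one of the two bets pays off, whatever happens
              print(f"outcome = {outcome}: net = {payout - cost:+.2f}")  # -0.20 either way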

      @svetlicam 1 year ago

      @@JeffBeck-po3ss True, it is a cognitive construct built on mathematical principles, but that is not how cognition works. Cognition works, so to speak, from a sort of principle of exclusivity, not probability, if you get the distinction. Only exclusive stimuli enter the cognitive process, allowing faster and much more goal-oriented reactions; a probabilistic process would take too long and be too energy-expensive. Through tools like mathematical principles or computational algorithms it becomes faster and, to some point, more precise, because these processes are logically simplified through mathematical calculation or binary probabilistic rationalization.

  • @kdaustin
    @kdaustin 1 year ago

    Incredible discussion.. thanks for sharing

  • @jason-the-fencer
    @jason-the-fencer 1 year ago +2

    "Does it matter if the brain is acting AS IF it's using Bayesian inference or not..." Yes, it does. In engineering a solution to a problem, acting "as if" doesn't matter, because the outcome is what's important. But if you're conducting scientific research that is intended to lead you down a path of discovery, taking the "as if" result as equal to "is" risks leading you to wrong conclusions.
    This seems to be the whole problem with the mechanist view of the brain: it presupposes that our brains are just computers, and then never wants to find out.

    • @Vectorized_mind
      @Vectorized_mind 1 year ago

      Correct, he's delusional. He also claims science is not about seeking truth but about making predictions, which is idiotic, because science was built on the idea of understanding the operations and functions behind the mysteries of the universe. What's the point of making predictions if you don't understand anything?!

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 1 year ago +2

      See if it makes more sense if you think in terms of an isomorphism, which is a fancy word that tells you when two mathematical systems are the same. The standard proof of the inevitability of a free energy principle just shows that any physical system can be interpreted as performing Bayesian inference. This is because the mathematics of belief formation, planning, and action selection can be mapped onto the equations of both classical and quantum mechanics. So the "as if" is ultimately justified by showing that two mathematical systems are equivalent.

  • @kamalsharma3294
    @kamalsharma3294 1 year ago

    Why is this episode not available on Spotify?

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  1 year ago

      It is podcasters.spotify.com/pod/show/machinelearningstreettalk/episodes/DR--JEFF-BECK---THE-BAYESIAN-BRAIN-e2akqa1

  • @dr.mikeybee
    @dr.mikeybee 1 year ago +4

    You've hit another one out of the park. Great episode!

  • @belgkiwi
    @belgkiwi 6 months ago

    It would be preferable if the interviewer let the guest speak. The role of the interviewer is to solicit information from the subject: try to avoid open-ended questions and sharing your opinions; rather, prompt the guest to elaborate on their answers. The guest in this interview, Dr. Jeff Beck, appears to have some very interesting views which are well worth exploring. You clearly have the intellectual background to make this discussion accessible to your audience.

  • @ArtOfTheProblem
    @ArtOfTheProblem 1 year ago

    would love to collab

  • @kennethgarcia25
    @kennethgarcia25 11 months ago

    Objectives? Trajectories? How we define things in relation to the aims we sense is important.

  • @Daniel-ih4zh
    @Daniel-ih4zh 1 year ago +1

    Volume needs to be turned up

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  1 year ago

      podcasters.spotify.com/pod/show/machinelearningstreettalk/episodes/DR--JEFF-BECK---THE-BAYESIAN-BRAIN-e2akqa1

  • @entropica
    @entropica 11 months ago

    Joscha Bach would call the sum of the models of the different agents (self and others) the "role play" that's running on our brain, which is by construction a simulation.

  • @markcounseling
    @markcounseling 1 year ago

    The thought occurred: "My Markov blanket is a Klein bottle." Which I can't explain, but perhaps Diego Rapaport can?

  • @Isaacmellojr
    @Isaacmellojr 7 months ago

    The lack of an "as if I were performing Bayesian inference, of course" was awkward.

  • @dirknbr
    @dirknbr 1 year ago

    It wasn't Einstein who said "all models are wrong"; it was Box.

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 1 year ago

      Omg. Thanks. I have been using that line for like 20 years. How embarrassing...

  • @ML_Indian001
    @ML_Indian001 1 year ago

    "Gradient Free Learning" 💡

  • @Achrononmaster
    @Achrononmaster 1 year ago

    @1:20 The question is really: is that *all* the brain does? I'd argue definitely not. Plus, you cannot separate brain from mind (a fool's errand), and they're not the same thing. Bayes/brain cannot generate genuine novel insight nor mental qualia.

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 1 year ago

      I am inclined to agree with you when I am not wearing my scientist pants. But when I am wearing them I get frustrated with terms like mental qualia, because they lack precision. Insight and intuition, on the other hand, do have nice Bayesian descriptions via a kind of shared structure learning that enables the identification of analogies.

  • @jonmichaelgalindo
    @jonmichaelgalindo 1 year ago +1

    I have perfect knowledge of the Truth that I exist. Probability = 1. Maybe I'm an immortal soul, or just a line of code in a simulation, but that soul or that code exists. (I don't know that you exist though. Just me.)
    (Can GPT-4 experience this P=1 knowledge? I doubt that "I" exists in there.)

  • @bpath60
    @bpath60 1 year ago

    Thank you! Fodder for the mind... sorry, brain!

  • @GodlessPhilosopher
    @GodlessPhilosopher 1 year ago

    RIP I loved your music

  • @tompeters4506
    @tompeters4506 1 year ago

    Sounds fascinating. Wish I understood it, and I ain't a dummy.

    • @tompeters4506
      @tompeters4506 1 year ago

      Sounds like some mechanism for faster AI learning for a certain class of tasks (models)

    • @tompeters4506
      @tompeters4506 1 year ago

      The mechanism being not dependent on gradient descent to reach the optimal solution... it gets to an approximate solution faster?

    • @dr.mikeybee
      @dr.mikeybee 1 year ago

      Ask an LLM all the questions you have. For example, ask why a transformer is like a mixture of experts: A transformer neural network can be viewed as a type of implicit mixture of experts model in the following way:
      - The self-attention heads act as experts - each head is able to focus on and specialize in different aspects of the input sequence.
      - The multi-headed attention mechanism acts as a gating network - it dynamically combines the outputs of the different attention heads, weighting each head differently based on the current inputs.
      - The feedforward layers after the multi-headed attention also help refine and combine the outputs from the different attention heads.
      - The entire model is trained end-to-end, including the self-attention heads and feedforward layers, allowing the heads to specialize while optimizing overall performance.
      So the self-attention heads act as local experts looking at the sequence through different representations. The gating/weighting from the multi-headed attention dynamically chooses how to combine these experts based on the current context.
      This provides some of the benefits of mixture of experts within the transformer architecture itself. Each head can specialize, different combinations of heads get used for different inputs, and the whole model is trained jointly.
      However, it differs from a traditional mixture of experts in that the experts are predefined as the attention heads, rather than being separate networks. But the transformer architecture does achieve a form of expert specialization and gating through its use of multi-headed self-attention.
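
      One caveat plus a sketch (ours, not the LLM's): in a standard transformer the head outputs are concatenated and mixed by a fixed projection W_O, so the input-dependent "gating" lives inside each head's softmax over positions rather than in how the heads are combined. A minimal numpy version of the standard computation, with the mixture-of-experts reading marked in comments (shapes and names are ours):

          import numpy as np

          def softmax(z, axis=-1):
              z = z - z.max(axis=axis, keepdims=True)
              e = np.exp(z)
              return e / e.sum(axis=axis, keepdims=True)

          def multi_head_self_attention(X, Wq, Wk, Wv, Wo):
              # X: (seq, d_model); Wq/Wk/Wv: lists of per-head (d_model, d_head) matrices.
              heads = []
              for wq, wk, wv in zip(Wq, Wk, Wv):
                  Q, K, V = X @ wq, X @ wk, X @ wv
                  A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # each "expert" head's own view
                  heads.append(A @ V)
              return np.concatenate(heads, axis=-1) @ Wo       # fixed W_O mixes the experts

          rng = np.random.default_rng(0)
          d_model, d_head, n_heads, seq = 8, 4, 2, 5
          X = rng.normal(size=(seq, d_model))
          Wq, Wk, Wv = ([rng.normal(size=(d_model, d_head)) for _ in range(n_heads)]
                        for _ in range(3))
          Wo = rng.normal(size=(n_heads * d_head, d_model))
          print(multi_head_self_attention(X, Wq, Wk, Wv, Wo).shape)  # (5, 8)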

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 1 year ago

      You can actually show that Bayesian inference on a particular class of mixture models leads to an inference algorithm that is mathematically equivalent to the operations performed by a transformer that skips the add and norm step.
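
      A sketch of what such an equivalence can look like in the simplest case (our construction under stated assumptions, not necessarily the exact model class Jeff means): take a Gaussian mixture whose component means are the keys, with identity covariance and equal mixing weights. The posterior over which component generated a query is a softmax over query-key dot products and, for equal-norm keys, matches attention exactly; reading out the posterior mean of the associated values reproduces the attention output.

          import numpy as np

          def softmax(z):
              z = z - z.max()
              e = np.exp(z)
              return e / e.sum()

          def attention(q, K, V):
              # Dot-product attention for one query (1/sqrt(d) scaling omitted).
              return softmax(q @ K.T) @ V

          def mixture_posterior_readout(q, K, V):
              # p(k | q) is proportional to N(q; K[k], I), i.e. to
              # exp(q.K[k] - ||K[k]||^2 / 2). With equal-norm keys the quadratic
              # term is constant across k and cancels in the softmax, leaving
              # exactly the attention weights.
              logits = q @ K.T - 0.5 * (K ** 2).sum(axis=1)
              return softmax(logits) @ V

          rng = np.random.default_rng(0)
          K = rng.normal(size=(5, 4))
          K /= np.linalg.norm(K, axis=1, keepdims=True)   # equal-norm keys
          V = rng.normal(size=(5, 3))
          q = rng.normal(size=4)
          print(np.allclose(attention(q, K, V), mixture_posterior_readout(q, K, V)))  # True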

  • @dionisienatea3137
    @dionisienatea3137 1 year ago

    Sort out your audio... why post such an important discussion with the audio this low? I have everything at 100% and can barely understand it...

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  1 year ago

      podcasters.spotify.com/pod/show/machinelearningstreettalk/episodes/DR--JEFF-BECK---THE-BAYESIAN-BRAIN-e2akqa1 Audio pod has improved audio

  • @Gigasharik5
    @Gigasharik5 10 months ago

    Ain't no way the brain is Bayesian

  • @u2b83
    @u2b83 1 year ago

    Jeff: "So what do you mean by guardrails?" lol... what he really meant was: what do you mean by social complexification?

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 1 year ago +2

      Your guess is as good as mine.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  1 year ago +2

      I would have to listen back, what's the timestamp? But speaking broadly - I am interested in emergent social dynamics i.e. culture / language, which is to say - that which is best made sense of when we zoom out quite far from the highly diffused phenomena in the "microscopic" physical substrate and look at the things which emerge higher up in the social plane. For example the concept of a chair exists as a meme in our social plane, even though it's parasitic on a widespread diffusion of physical interactions lower down (between people, and chairs, and lower down, between cells etc!). So the "guardrails" of the dynamics are communication affordances at the respective scale i.e. the words we use, how we can physically interact with our environments, but the interesting thing is the representational fidelity of these affordances, i.e. they can be used, remixed, and overloaded to create a rich tapestry of meaning, both in the moment, and more culturally embedded in our society memetically. The emergence of the social plane from the physical plane is something FEP can teach us a lot about IMO. What's also interesting is that the higher the plane is from the physical the faster it can evolve, i.e. our language evolves a million times faster than our DNA; this evolution velocity asymmetry is common to many other symbiotic organisms.

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 1 year ago +2

      Our resident expert in this domain is Mahault Albarracin.

    • @JeffBeck-po3ss
      @JeffBeck-po3ss 1 year ago +4

      My intuition is that communication must be grounded in common observations and so ultimately communication is about transmitting predictions.

    • @MachineLearningStreetTalk
      @MachineLearningStreetTalk  1 year ago +1

      @@JeffBeck-po3ss We will release our interview with Mah soon!

  • @Glen-uv2cd
    @Glen-uv2cd 5 months ago

    Verses AI...

  • @Vectorized_mind
    @Vectorized_mind 1 year ago +1

    The most flawed thing I've heard is "science is not about seeking truth but about making predictions" 🤣🤣. This is a very erroneous claim; science is the process of trying to figure out how the world around us works, not just making predictions.
    When Newton developed his laws he didn't merely want to make predictions; he sincerely wanted to understand how the world worked. The accuracy of a prediction is equivalent to how true your understanding of a system is: the truer your understanding, the more accurate your prediction; the less true your understanding, the less accurate your prediction.