Future of Science and Technology Q&A (August 16, 2024)

  • Published on Sep 12, 2024

Comments • 4

  • @nunomaroco583 · 26 days ago

    Amazing talk, great questions, great answers.

  • @mitchellhayman381 · 24 days ago

    Thank you, Steven.

  • @filipewnunes · 26 days ago · +2

    Can't wrap my mind around the fact that this video has just 680 views.

  • @NicholasWilliams-uk9xu · 27 days ago

    Yes, you want to offload computational correctness out of the LLM parameter space and use the LLM as a sophisticated pointer machine that invokes computational processes. It's simple to understand: the neural network does relational context tracking and points to a computation to run. It's like how a human plays a video game better when interfacing through joysticks, buttons, and triggers, which give the human guard rails and a way to point at a computational function that drives the game character (the more levers the person has into the game's precise computational processes, the more the brain is freed up to handle relational context).

    Neural network = relational context weighing machine; computation = specialized function. Therefore the (neural network context weighing machine) operates [an array of specialized, precise computations]. The network handles probability; the specialized computation handles specificity. The brain is good at (probability × probability × probability × probability), while specialized computation is good at specific interpolation over a compositional space. I literally said this 3 years ago, lol. I tried to save people time; it didn't work, and they did it the stupid way anyway.

    In today's society and unscientific community, what matters is (who) says it (weight the person with the highest economic status, which is the kind of error people like Brian Keating make) rather than the substance of what is said. This is why the unscientific peer review process is not science; science is a method. They cover this up by doubling down, further ingraining anti-science dogma into their peer community and eroding its ability to reason over time. They eat their own dog food.
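    The architecture this comment describes is essentially the tool-calling pattern: a probabilistic model selects which exact computation to run, and the computation itself is a deterministic function outside the model. A minimal sketch of that division of labor, where the hypothetical `route` function stands in for the neural network's context weighing (a real system would use model output, not a keyword heuristic) and `TOOLS` holds the specialized computations:

    ```python
    from typing import Callable

    # Specialized computations: exact, deterministic functions.
    # The dict name TOOLS and the functions below are illustrative, not a real API.
    TOOLS: dict[str, Callable[[float, float], float]] = {
        "add": lambda a, b: a + b,
        "multiply": lambda a, b: a * b,
    }

    def route(query: str) -> str:
        """Stand-in for the LLM: maps relational context to a tool name.
        Here a crude keyword heuristic plays the model's role."""
        return "multiply" if ("times" in query or "product" in query) else "add"

    def answer(query: str, a: float, b: float) -> float:
        tool = route(query)       # network side: pick which computation to run
        return TOOLS[tool](a, b)  # tool side: execute it exactly

    print(answer("what is 3 times 4?", 3, 4))  # prints 12
    ```

    The point of the split is that correctness lives entirely in `TOOLS`; the router only has to be right about *which* function fits the context, never about the arithmetic itself.
    
    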