I definitely appreciate all your incredible lessons and insights!
Great stuff as always!
25:00 hey, why not? mental health is all about lying to yourself and rejecting reality just the right amount.
thanks :)
Is it just my impression or does this come down to the Halting Problem?
This sounds suspiciously like how our brains are wired...
You will be such a positive and caring person when you get into your 60s xD
The MIT bot pushing "diversity" lol. It looks like something made by marketing students.
What the MIT Bot says is basically "Don't pay attention to reality. Create a magical picture of yourself and if you believe hard enough, everything will be fine". Funny enough, that's woke philosophy at its core and makes terrible life advice. How far MIT has fallen.
It's an interesting solution, but I also agree that it seems a little over-engineered, because it relies on the "Allocation" layer being a pretrained model. That means if you want to add another Agent, you need to retrain the Allocation model, which also means generating reams and reams of synthetic data just for the Allocation model to work as intended.
A simpler method would be to just pick the 3 most likely Agents to complete the task, evaluate their results, and go with the one that is most plausible. Sure, it means you're doing 3x the inference at question time, but you aren't wasting 1000x inference at training time, plus however many man-hours getting the Allocation model to work properly, and then repeating the whole process whenever a new Agent is created.
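To make the idea concrete, here's a minimal sketch of that best-of-3 approach. Everything in it is a toy stand-in I made up: the keyword-overlap router, the canned agents, and the evaluator are all hypothetical placeholders for whatever cheap ranking and answer-scoring you'd actually use.

```python
# Best-of-N agent selection: rank agents cheaply, run the top N,
# keep the most plausible answer. No pretrained Allocation model needed.

def rank_agents(task, agents):
    """Rank agents by a cheap relevance heuristic (keyword overlap here)."""
    def score(agent):
        return sum(1 for kw in agent["keywords"] if kw in task.lower())
    return sorted(agents, key=score, reverse=True)

def best_of_n(task, agents, evaluate, n=3):
    """Run the n most relevant agents and return (name, best answer)."""
    candidates = rank_agents(task, agents)[:n]
    answers = [(agent["name"], agent["run"](task)) for agent in candidates]
    return max(answers, key=lambda pair: evaluate(task, pair[1]))

# Toy agents: each just returns a canned answer.
agents = [
    {"name": "math", "keywords": ["sum", "add"], "run": lambda t: "4"},
    {"name": "chat", "keywords": ["hello"], "run": lambda t: "hi there"},
    {"name": "code", "keywords": ["python"], "run": lambda t: "print(2+2)"},
]

# Toy evaluator: prefers numeric answers for this arithmetic task.
def evaluate(task, answer):
    return 1.0 if answer.isdigit() else 0.0

name, answer = best_of_n("add 2 and 2, what is the sum?", agents, evaluate)
```

The upside is exactly the trade mentioned above: adding a new agent just means appending it to the list, at the cost of N inference calls per question instead of one.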