I'm an AI researcher, but this is a good explanation for the general public. A lot of people will need to be educated if these models are to be widely used (which takes work, but is better than only a few members of society having access).
I don’t understand why we don’t add fact-checking on top of the models. Obviously the models can’t do that themselves, but can we not do some sort of post-processing, if you will, to tidy up their responses? My interactions with GPT-4 showed that it couldn’t even keep facts straight that it invented itself, such as a fictional character’s name. It seems to me that it should be easy for a separate algorithm to at least review what the model previously established, let alone check facts from the internet. I’m not an AI researcher at all, I just want to develop an AI app.
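To make the idea concrete, here is a minimal, hypothetical sketch of the kind of post-processing the comment describes: a separate pass that remembers facts the model itself established (here, just "a character named X" phrases) and flags later turns that contradict them. The `check_name_consistency` function and its naive regex are illustrative assumptions, not a real fact-checking system; genuine verification would need proper entity extraction and external sources.

```python
import re

def check_name_consistency(turns):
    """Naive self-consistency pass over a model's turns.

    Remembers names the text introduces ("a detective named Mara") and
    flags later turns that attach a different name to the same role.
    Returns a list of (turn_index, role, established_name, conflicting_name).
    """
    established = {}  # role -> first name seen for that role
    issues = []
    pattern = re.compile(r"(\w+) named (\w+)")
    for i, text in enumerate(turns):
        for role, name in pattern.findall(text):
            role = role.lower()
            if role in established and established[role] != name:
                issues.append((i, role, established[role], name))
            else:
                established.setdefault(role, name)
    return issues

turns = [
    "Here is a story about a detective named Mara.",
    "Later, the detective named Elena finds the letter.",  # contradicts turn 0
]
print(check_name_consistency(turns))  # -> [(1, 'detective', 'Mara', 'Elena')]
```

This only catches one narrow pattern, but it shows why the idea is harder than it looks: the checker itself has to understand language well enough to know which statements refer to the same thing, which is much of the original problem.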
I wasn’t asking rhetorically. I’m genuinely curious, and since you’re an AI researcher, I figured you could help me understand it if you can spare the time
I have a question: if LLMs generate hallucinations in their answers, then how can anyone recommend these models to students and researchers as references?
Can't believe this has so few views - very good explanation!
This was an outstanding video, thank you!
Yes, this is what people don't understand: it's an LLM, and it will only be as good as the data it was trained on.
Well-made video, loved it!
This is what my Evrostics Triad addresses.
Of course: these are not knowledge bases, they are language models. The general public is confused, to say the least.