Hallucinations Can Improve Large Language Models in Drug Discovery

  • Published on 8 Feb 2025
  • Concerns about hallucinations in Large Language Models (LLMs) have been raised by researchers, yet their potential in areas where creativity is vital, such as drug discovery, merits exploration. In this paper, we come up with the hypothesis that hallucinations can improve LLMs in drug discovery. To verify this hypothesis, we use LLMs to describe the SMILES string of molecules in natural language and then incorporate these descriptions as part of the prompt to address specific tasks in drug discovery. Evaluated on seven LLMs and five classification tasks, our findings confirm the hypothesis: LLMs can achieve better performance with text containing hallucinations. Notably, Llama-3.1-8B achieves an 18.35% gain in ROC-AUC compared to the baseline without hallucination. Furthermore, hallucinations generated by GPT-4o provide the most consistent improvements across models. Additionally, we conduct empirical analyses and a case study to investigate key factors affecting performance and the underlying reasons. Our research sheds light on the potential use of hallucinations for LLMs and offers new perspectives for future research leveraging LLMs in drug discovery.
    arxiv.org/abs/...
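
    For a concrete picture of the method the abstract describes, below is a minimal sketch of the two-step prompting pipeline: first ask an LLM to describe a SMILES string in natural language, then feed that (possibly hallucinated) description back into the prompt for a downstream classification task. It assumes an OpenAI-style chat API via the openai Python package; the prompt wording, model names, and yes/no answer format are illustrative assumptions, not the paper's exact setup.

    # Minimal sketch of the two-step prompting pipeline from the abstract.
    # Prompts, model names, and the yes/no answer format are assumptions for
    # illustration only, not the paper's exact protocol.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def describe_molecule(smiles: str, model: str = "gpt-4o") -> str:
        """Step 1: ask an LLM to describe the SMILES string in natural language.
        The description may contain hallucinated (chemically unfaithful) details."""
        resp = client.chat.completions.create(
            model=model,
            messages=[{
                "role": "user",
                "content": f"Describe the molecule with SMILES {smiles} in natural language.",
            }],
        )
        return resp.choices[0].message.content

    def classify_with_description(smiles: str, description: str, task: str,
                                  model: str = "gpt-4o") -> str:
        """Step 2: include the (possibly hallucinated) description in the prompt
        for a downstream classification task."""
        prompt = (
            f"SMILES: {smiles}\n"
            f"Description: {description}\n"
            f"Task: {task}\n"
            "Answer with Yes or No."
        )
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content.strip()

    if __name__ == "__main__":
        smiles = "CC(=O)OC1=CC=CC=C1C(=O)O"  # aspirin, as an example input
        desc = describe_molecule(smiles)
        print(classify_with_description(
            smiles, desc, "Can this molecule penetrate the blood-brain barrier?"))

    In the paper's framing, the generated description need not be chemically faithful; the claim is that even descriptions containing hallucinations can improve downstream classification, e.g. the reported 18.35% ROC-AUC gain for Llama-3.1-8B over the no-hallucination baseline.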

Comments •

  • @S1LLY_C0ST4_L0V3R · 8 days ago · +1

    Are the voices real humans or AI chatbots?

    • @TheXComputerXDr · 8 days ago

      Chat bots... I would be surprised if they are not AI.

    • @HurricaneEmily · 8 days ago

      It’s pretty obvious they are AI. There aren’t any pauses in speech between speakers, no fillers like um, no flaws in language.

    • @TheXComputerXDr · 8 days ago

      @HurricaneEmily How funny would it be if it was just highly edited, all sliced up...

    • @HurricaneEmily · 7 days ago

      It could be highly edited, but I doubt it. It sounds like the two voices are reading, not having a natural conversation. This is what bad actors sound like. There is a sense you get from humans that they are genuinely listening to what the other person is saying when they’re talking. That quality is completely lacking from this conversation, which makes it hard to listen to. I almost clicked off the video because it was so grating, but I was interested in the information. However, because it was delivered by AI, by a human who was willing to deceive the viewers by not informing them that it was a conversation between two AIs, I’m highly skeptical of the truth behind it.

    • @TheXComputerXDr · 7 days ago

      @HurricaneEmily I don't think you can judge information based on its presenter; it's too difficult to discern the truth that way. That said, there is a lot of information behind this "controlled chaos", and the hallucinations they talk about are icebergs of information in themselves, but they are only discussed at a very surface, vague level, to the point where there's not much substance to the conversation and therefore not much to be disputed, because "they" didn't really talk about much; they touched the tips of a few icebergs and said, "oooo, interesting". That said, hallucinations in LLMs are real, controlled chaos has been a subject of philosophy, religion, and mysticism since ancient times, and the two could be interrelated. But that topic is so much bigger than this tiny video could ever encapsulate. Still, you made good points: the way the AI talks to itself is definitely not congruent with any conversation with real people I have been part of. It's as you say, like they are reading rather than really talking to each other; they are more or less reading lines from a "script" or book that one person (or AI) wrote.