VQMS: Tong Wang (Yale)
- Published on Feb 8, 2025
- Leveraging LLM Knowledge to Improve Task Robustness: Applications in Customer Service and Content Marketing
Large Language Models (LLMs), trained on vast datasets, possess rich knowledge about language, reasoning, and real-world phenomena. This talk explores how that knowledge can be elicited and leveraged for downstream marketing tasks. We demonstrate the approach in two applications.
The first application trains a customer service chatbot through knowledge distillation. A teacher LLM reasons about the differences between its own high-quality responses and a smaller student LLM's lower-quality responses, distilling strategies that improve the student's performance in goal-oriented dialogues. We then show how to apply these strategies to improve the student LLM's performance in real-time customer service conversations.
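As a rough, hypothetical illustration of this distillation loop (a sketch, not the talk's actual pipeline), the Python snippet below asks a teacher model to contrast a teacher/student response pair and phrase the difference as a reusable strategy, then conditions the student on the accumulated strategies at inference time. The model names, prompt wording, and OpenAI-compatible client are all assumptions.

```python
# Hypothetical sketch of comparative strategy distillation; model names and
# prompts are illustrative assumptions, not the authors' actual setup.
from openai import OpenAI

client = OpenAI()  # assumes an OpenAI-compatible API key in the environment

def distill_strategy(context: str, teacher_reply: str, student_reply: str) -> str:
    """Elicit one general strategy from a teacher/student response pair."""
    prompt = (
        "Customer-service dialogue so far:\n" + context +
        "\n\nResponse A (teacher):\n" + teacher_reply +
        "\n\nResponse B (student):\n" + student_reply +
        "\n\nExplain why Response A better advances the customer's goal, "
        "then state that lesson as one general, reusable strategy."
    )
    out = client.chat.completions.create(
        model="gpt-4o",  # hypothetical teacher model
        messages=[{"role": "user", "content": prompt}],
    )
    return out.choices[0].message.content

def student_turn(context: str, strategies: list[str]) -> str:
    """At inference time, guide the smaller student with distilled strategies."""
    system = "Follow these customer-service strategies:\n- " + "\n- ".join(strategies)
    out = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical student model
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": context},
        ],
    )
    return out.choices[0].message.content
```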
Second, in content generation, LLMs generate hypotheses to explain why some headlines achieve higher click-through rates than others in A/B tests. These hypotheses are refined into validated theories that guide fine-tuning, helping models focus on generalizable mechanisms and avoid spurious correlations. The validated hypotheses then help LLMs generate the "best" headlines for an article in terms of click-through rate while ensuring relevance.
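To make the generate-then-validate idea concrete, here is a minimal sketch, assuming an OpenAI-compatible client: the model proposes a general hypothesis from one A/B outcome, and the hypothesis is kept only if it predicts winners on held-out pairs. The prompts, model name, and 0.7 accuracy threshold are illustrative assumptions, not the talk's exact method.

```python
# Hypothetical sketch of abductive hypothesis generation and validation on
# headline A/B results; prompts, model, and threshold are assumptions.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    out = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return out.choices[0].message.content

def propose_hypothesis(winner: str, loser: str) -> str:
    """Explain the winning headline's higher CTR as a general, testable rule."""
    return ask(
        f"Headline A: {winner}\nHeadline B: {loser}\n"
        "Headline A had the higher click-through rate in an A/B test. "
        "State one general, testable hypothesis for why, not specific to this pair."
    )

def validate(hypothesis: str, held_out: list[tuple[str, str]], min_acc: float = 0.7) -> bool:
    """Keep a hypothesis only if it predicts winners on held-out A/B pairs,
    screening out spurious, pair-specific explanations."""
    hits = sum(
        ask(
            f"Hypothesis: {hypothesis}\nHeadline A: {w}\nHeadline B: {l}\n"
            "Under this hypothesis, which headline gets more clicks? Answer A or B."
        ).strip().upper().startswith("A")
        for w, l in held_out
    )
    return hits / len(held_out) >= min_acc
```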
While the specifics of knowledge elicitation and distillation differ across the two applications in many important ways, a common theme is that we leverage LLMs as abductive reasoners, asking them to explain why one example outperforms another on a desirable metric, and then use the resulting guidelines and hypotheses to perform the tasks with greater robustness and efficiency.