Until I saw this, I was starting to think something was wrong with me for not being able to achieve magical improvements by using DSPy over meticulously hand-crafted prompts targeted at the observed quirks of specific LLMs. Thank you for restoring my self-confidence. And now I'm also going to incorporate graph databases into my RAG pipelines after watching a couple of your videos.
One basic doubt:
def forward(self, question):
    # Step 1: Retrieve context based on the question
    context = self.retrieve(question).passages

    # Step 2: Generate an answer based on the context and question
    prediction = self.generate_answer(context=context, question=question)
    answer = prediction.answer

    # Step 3: Validate the answer type using the entity_linker function
    correct_question_type, original_answer_type, type_status = entity_linker(answer, question)

    # Optional: You can use the AnswerTypeValidityCheck signature for validation if needed
    validation = self.check_answer_type(entity_type=correct_question_type, question=question, answer=answer).type_status

    return dspy.Prediction(context=context, answer=answer, type_status=type_status)
In this method, the answer that gets returned is the one you got from self.generate_answer, and that method doesn't use entity_linker at all. So how is entity_linker influencing the answer?
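The observation in the question is correct: as written, entity_linker only computes type_status, which is attached to the returned Prediction; it never changes the answer itself. For the check to influence the answer, its verdict would have to feed back into generation, for example via a retry loop. A minimal self-contained sketch of that idea (plain Python, with hypothetical stubs standing in for the real generate_answer module and entity_linker function):

```python
# Hypothetical sketch: feeding the entity_linker verdict back into generation.
# generate_answer and entity_linker below are illustrative stubs, not the
# real DSPy module or linker from the video.

def generate_answer(context, question, hint=None):
    # Stub: a real module would call the LLM; the hint nudges a retry.
    return "Paris" if hint else "a city in France"

def entity_linker(answer, question):
    # Stub: returns (expected_type, answer_type, type_status).
    ok = answer == "Paris"
    return "LOCATION", "LOCATION" if ok else "DESCRIPTION", ok

def forward(question, context, max_retries=2):
    answer = generate_answer(context, question)
    type_status = False
    for _ in range(max_retries):
        expected, got, type_status = entity_linker(answer, question)
        if type_status:
            break
        # The validation result now influences the answer: regenerate
        # with a hint about the expected entity type.
        answer = generate_answer(context, question,
                                 hint=f"Answer must be a {expected}")
    return {"answer": answer, "type_status": type_status}
```

Without a loop like this (or an equivalent mechanism such as DSPy assertions), the type check is purely diagnostic metadata on the output.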
Do you know of any efforts to convert these entities and relationships further into formal logic representations?
Being able to pair these graph databases with formal logic representations would definitely help improve the quality of written text, organic exploration/discovery, and understanding over time.
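For illustration, the first step in such a pairing is often mechanical: mapping (subject, relation, object) graph triples to first-order-logic-style facts that a reasoner (e.g. Prolog or a description-logic toolchain) can consume. A hypothetical minimal sketch, with made-up example triples:

```python
# Hypothetical sketch: turning (subject, relation, object) graph triples
# into Prolog-style first-order facts. Names here are illustrative only.

def to_fact(subject, relation, obj):
    """Render one triple as a Prolog fact, lowercasing atoms and
    replacing spaces with underscores."""
    atom = lambda s: s.lower().replace(" ", "_")
    return f"{atom(relation)}({atom(subject)}, {atom(obj)})."

triples = [
    ("Marie Curie", "won", "Nobel Prize"),
    ("Marie Curie", "born_in", "Warsaw"),
]

facts = [to_fact(*t) for t in triples]
# e.g. "won(marie_curie, nobel_prize)."
```

The hard part of the research question is not this rendering step but deciding the logical semantics of extracted relations (negation, time, quantification), which plain triples do not capture.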
I can't sign in to Diffbot, even with a non-Gmail ID. Is it supposed to be that way?
I really like this tool. Can it be used for SEO?
Do you feel that Nomic embeddings are an adequate open-source embedding model for RAG projects, or do you recommend another?
girl, u rock thank you so much for this!! Where can I follow you?
thanks great video!