🐋 RAG in C# using DeepSeek-R1

  • Published on Feb 7, 2025
  • In this video, we explore the power of reasoning models like DeepSeek-R1 and how they enhance Retrieval-Augmented Generation (RAG) in .NET applications. Traditional language models excel at retrieving and summarizing information, but reasoning models take it a step further by improving logical inferences, contextual understanding, and structured outputs.
    We start with a quick introduction to DeepSeek-R1, a state-of-the-art model now available in Azure AI Foundry and GitHub Models. Then, we dive into a hands-on .NET console application, highlighting the difference between performing RAG searches using a standard language model and an advanced reasoning model like DeepSeek-R1.
    Key benefits of integrating reasoning models in AI applications:
    ✅ Better Contextual Understanding - Generates more relevant and logical responses
    ✅ Improved Decision-Making - Helps AI go beyond just retrieving facts to applying reasoning
    ✅ Enhanced RAG Pipelines - More accurate answers when working with complex knowledge bases
    Resources
    DeepSeek-R1 in Azure AI Foundry and GitHub Models: azure.microsof...
    GitHub Demo Repository: github.com/elb...
    Watch the video and see how to bring more intelligence to your .NET AI apps! (A rough code sketch of this setup follows below.)
  • Science & Technology
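
A minimal sketch of the kind of console app described above, assuming the Microsoft.KernelMemory package with its Ollama connector (Microsoft.KernelMemory.AI.Ollama). The model names, file name, and exact builder/config members are illustrative assumptions, not the demo's actual code, which lives in the repository linked above.

```csharp
// Minimal RAG console sketch (not the video's exact code), assuming the
// Microsoft.KernelMemory and Microsoft.KernelMemory.AI.Ollama NuGet packages;
// builder method names, config shape, and model ids may vary by version.
using Microsoft.KernelMemory;
using Microsoft.KernelMemory.AI.Ollama;

var config = new OllamaConfig
{
    Endpoint = "http://localhost:11434",
    TextModel = new OllamaModelConfig("deepseek-r1", 131072),    // reasoning model used to answer
    EmbeddingModel = new OllamaModelConfig("nomic-embed-text", 2048) // separate model used to embed text
};

var memory = new KernelMemoryBuilder()
    .WithOllamaTextGeneration(config)
    .WithOllamaTextEmbeddingGeneration(config)
    .Build<MemoryServerless>(); // in-process pipeline with a simple in-memory vector store by default

// Ingestion: chunking, embedding, and vector storage all happen inside this call.
await memory.ImportDocumentAsync("products.md", documentId: "products");

// Query: retrieval, prompt grounding, and generation all happen inside AskAsync.
var answer = await memory.AskAsync("Which product is best for outdoor use, and why?");
Console.WriteLine(answer.Result);
```

With this shape, the comparison the video makes comes down to swapping the text-generation model: the retrieval stays the same, and only the quality of the reasoning over the retrieved chunks changes.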

Comments • 9

  • @eliwhalen604 · 6 days ago · +1

    Thanks for the video!

    • @elbruno · 6 days ago · +1

      You are welcome! This is an interesting one!

  • @fallonmassey4714 · 3 days ago

    *Ok, I ran the sample code, and if you don't know Kernel Memory it won't make any sense. For example, a LOT of steps are magically done for you:*
    1. The vector database appears to be in-memory, which will be limiting for larger datasets, but could be OK
    2. The conversion of the text to vectors looks automatic, and the dot product used for cosine similarity isn't shown (see the sketch after this comment)
    3. How it's connecting, i.e. prompt, function/tool calls, etc., isn't clear; you just know it happens
    4. It's also not clear why you need another model for the embedding
    In short, it's not obvious that using Kernel Memory allows customization, but it likely does. I'm trying to evaluate this against Microsoft.Extensions.AI & Semantic Kernel; there are a lot of ways to solve the same problem. *Having said that, this is probably the EASIEST way to do RAG that I've seen!*
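
Regarding point 2 above: the step Kernel Memory hides is that the query and every stored chunk are turned into float vectors, and chunks are ranked by cosine similarity. A plain C# illustration of that math (not Kernel Memory's internal code):

```csharp
// Cosine similarity between two embedding vectors, shown only to illustrate the
// ranking step a vector store performs; this is not Kernel Memory's internal code.
float[] queryEmbedding = { 0.12f, -0.05f, 0.33f }; // toy values; real embeddings have hundreds of dimensions
float[] chunkEmbedding = { 0.10f, -0.02f, 0.30f };

Console.WriteLine($"similarity: {CosineSimilarity(queryEmbedding, chunkEmbedding):F3}");

static double CosineSimilarity(ReadOnlySpan<float> a, ReadOnlySpan<float> b)
{
    if (a.Length != b.Length) throw new ArgumentException("Vectors must have the same dimension.");

    double dot = 0, normA = 0, normB = 0;
    for (int i = 0; i < a.Length; i++)
    {
        dot   += a[i] * b[i]; // the dot product mentioned in the comment
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
}
```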

    • @elbruno · 2 days ago · +1

      Hey, the next step is to go and take a look at github.com/microsoft/kernel-memory.
      You can find all the implementation details there, even extensions with SK to use Azure AI Search, Qdrant, or other vector databases (a quick sketch follows below).
      We wrote about it here: devblogs.microsoft.com/dotnet/vector-data-qdrant-ai-search-dotnet/
      And yes, it's the first step; I'm guessing that if someone is interested they can start learning more about the other pieces.
      For more in-depth content we usually have the .NET + AI live streams on the .NET channel every other week, and that's the place for that!
      Thanks for the feedback 💯💯
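
To make the vector-database point concrete, here is a hedged sketch of pointing Kernel Memory at Qdrant instead of the default in-process store. It assumes the Microsoft.KernelMemory.MemoryDb.Qdrant connector package; the extension method names may differ between versions, so treat it as a sketch rather than the blog post's exact code.

```csharp
// Sketch: swapping the default in-memory store for Qdrant, assuming the
// Microsoft.KernelMemory.MemoryDb.Qdrant connector (method names may vary by version).
// The ingestion and Ask pipeline stays exactly the same as before.
using Microsoft.KernelMemory;

var memory = new KernelMemoryBuilder()
    .WithOpenAIDefaults(Environment.GetEnvironmentVariable("OPENAI_API_KEY")!) // or the Ollama setup shown earlier
    .WithQdrantMemoryDb("http://localhost:6333") // embeddings now persist in Qdrant
    .Build<MemoryServerless>();

await memory.ImportTextAsync("DeepSeek-R1 is a reasoning model available in Azure AI Foundry and GitHub Models.");

var answer = await memory.AskAsync("Where is DeepSeek-R1 available?");
Console.WriteLine(answer.Result);
```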

    • @fallonmassey4714 · 2 days ago

      @@elbruno Thank you, I actually learned a lot more! 👍👍

  • @fallonmassey4714 · 3 days ago

    *Since DeepSeek doesn't support functions/tools, how are you connecting the LLM to your data? It's unclear in the video.*

    • @elbruno · 3 days ago

      A better answer is in the code: github.com/elbruno/deepseek-rag-ollama !

    • @fallonmassey4714 · 3 days ago

      @@elbruno Of course it is. However, I thought the video was a description of the code, or why is it here? 🤔

    • @gustavnilsson6597 · 3 days ago · +1

      Kernel Memory goes to the vector store and attaches the retrieved data to the prompt for R1
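
To spell out what that reply describes: no function/tool calling is involved. Kernel Memory retrieves the most similar chunks first and then pastes them into the prompt that DeepSeek-R1 receives as plain text. A rough sketch of that flow, assuming Kernel Memory's SearchAsync/SearchResult shapes (an assumption; the demo repository has the real wiring):

```csharp
// Rough sketch of the "retrieve, then ground the prompt" flow described above,
// assuming Kernel Memory's SearchAsync/SearchResult shapes (an assumption; the
// demo repo has the real wiring). No function/tool calling is involved.
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.KernelMemory;

public static class GroundedPrompt
{
    // Builds the grounded prompt: retrieval first, then paste the hits into the prompt text.
    public static async Task<string> BuildAsync(IKernelMemory memory, string question)
    {
        // 1) Vector search: find the chunks most similar to the question.
        SearchResult hits = await memory.SearchAsync(question, limit: 3);

        // 2) Attach the retrieved text as context ahead of the question.
        var prompt = new StringBuilder("Answer using only the facts below.\n\nFacts:\n");
        foreach (var citation in hits.Results)
            foreach (var partition in citation.Partitions)
                prompt.AppendLine("- " + partition.Text);

        prompt.AppendLine().AppendLine("Question: " + question);

        // 3) The returned string is what gets sent to DeepSeek-R1 as a plain prompt.
        return prompt.ToString();
    }
}
```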