Coding Crashcourses
Germany
Joined Jul 26, 2022
Hello and welcome to my channel. I am a German software engineer working in the AI space. I love to learn new things and teach them to people :-). I already have a German channel with a lot of videos and full courses on Python, JavaScript, React and much more.
German channel: studio.th-cam.com/channels/ikLKUS0DZWMkukbkYDG49Q.html
Self-Corrective RAG with LangGraph - Agentic RAG Tutorial
In this video you will learn how to perform CRAG, self-corrective Retrieval-Augmented Generation, with LangGraph.
You will learn how to force an agent to rewrite queries and check whether a document is suited to answering a question or not.
Code: github.com/Coding-Crashkurse/LangGraph-Tutorial
Timestamps:
0:00 Introduction to CRAG
1:00 Code Walkthrough
12:06 Results
#langchain #langgraph
Views: 832
Videos
LangGraph - Tool based Customer Support bot with DB Interaction
1K views · days ago
In this video I will show you how to build agents with tools that interact with a database. We will use an OpenAI model with function/tool-calling capabilities and LangGraph as the agent orchestrator. Timestamps: 0:00 Introduction 1:15 Database setup 3:38 Agents & Tools 13:56 Functions for LangGraph 20:45 Nodes & Edges 22:25 Result #langchain #langgraph
LangGraph: Hierarchical Agents - How to build Boss & Subordinate Agents
1.3K views · 14 days ago
In this video I will show you how to build hierarchical agents with LangGraph. Subordinate agents ALWAYS report to a boss agent, who then makes a final decision on a specific topic. We will create a fictional news agency. Code: github.com/Coding-Crashkurse/LangGraph-Tutorial Timestamps: 0:00 Introduction 0:45 LLM Setup 5:51 LangGraph - Nodes & Edges 12:31 Result #langchain #langgraph #agents
EURO 2024 - Predict the WINNER with AI
411 views · 21 days ago
In this video, we'll build a fun little project: we train a model that predicts the winner of EURO 2024. It's a classic machine-learning project with a very basic model, but we do it all, from preprocessing to training a real model ourselves. Tune in ;-) Code: github.com/Coding-Crashkurse/EURO-2024-Prediction Timestamps: 0:00 Introduction 0:49 Model training 7:19 Predicting the tou...
LangGraph vs. LangChain LCEL - Can we get rid of LCEL (LangChain Expression Language)
1.6K views · a month ago
Many people seem to dislike LCEL. Chains are hard to debug, hard to read, and can become really complex. Can we use LangGraph to create complex chains instead of using LCEL? Let's find out! Timestamps: 0:00 Introduction 0:58 Simple RAG Chain with LCEL 4:00 RAG chain with LangGraph 8:07 Comparison & Discussion
Introduction to LangGraph: A Quick Dive into Core Concepts
6K views · a month ago
In this video, we'll explore LangGraph, a powerful tool that enables agentic workflows with large language models (LLMs) through cycles, enhancing efficiency and performance. Built on top of LangChain, LangGraph leverages its robust foundation to streamline processes and facilitate seamless integration. Whether you're a beginner or an experienced user, this crash course will guide you throug...
LangChain - Parent-Document Retriever Deepdive with Custom PgVector Store
1.4K views · a month ago
In this video we do a deep dive into the Parent-Document Retriever. We not only use the LangChain docstore, but also create our own custom docstore. This is quite an advanced video and probably the most advanced one you will find on this topic on TH-cam. Code: github.com/Coding-Crashkurse/ParentChild-Retriever Timestamps: 0:00 Introduction into Parent-Document Retriever 1:55 PD-Retriever wit...
Tool Calling with LangChain is awesome!
2.5K views · a month ago
In this video I will explain what tool calling is, how it differs from function calling, and how it is implemented in LangChain. Code: github.com/Coding-Crashkurse/OpenAI-Tool-Calling Timestamps: 0:00 Introduction 1:39 Basics & Tool Decorator 4:00 Tool with Pydantic Classes 6:03 Perform Tool Calling 11:06 Tool Calling with API #langchain
TaskingAI - Next-GEN AI Development Platform - Create Assistants with Models, RAG and Tools easily
940 views · a month ago
Learn more about TaskingAI here: www.tasking.ai/ - New Standards in AI-Native App Development. TaskingAI brings Firebase's simplicity to AI-native app development. Start your project by selecting an LLM model, build a responsive assistant supported by stateful APIs, and enhance its capabilities with managed memory, tool integrations, and an augmented generation system. About this video: In this vid...
LangChain Expression Language - The ONLY video you need to TRULY understand LCEL
3.4K views · 2 months ago
In this video, we'll do a deep dive into the LangChain Expression Language, the backbone of LangChain. This is the only video you'll need to really understand the ins and outs of LCEL and its Runnable interface. Code: github.com/Coding-Crashkurse/LCEL-Deepdive Timestamps: 0:00 - Intro 0:37 - Basic Chain and invoke method 3:06 - The magic of the pipe operator 8:11 - RunnablePassThrough 9:02 - Run...
LangChain Streaming - stream, astream, astream_events API & FastAPI Integration
2.2K views · 2 months ago
In this video I will show you how to perform streaming with LangChain. I will also show you the astream_events API. At the end I will show you how you can integrate both approaches with FastAPI. Code: github.com/Coding-Crashkurse/FastAPI-LangChain-Streaming Timestamps: 0:00 Introduction 0:27 stream & astream 2:00 astream_events API 4:52 FastAPI Integration (StreamingResponse) 7:42 Frontend Eve...
LangChain - Dynamic Routing - Retrieve data from different databases
2.2K views · 2 months ago
In this video I will show you how to perform dynamic routing with LangChain. Not all data should be stored in a vector store; tabular data is better kept in a simple SQL table. How can we work with data from multiple sources? That is what you will learn in this video. Code: github.com/Coding-Crashkurse/Langchain-Dynamic-Routing Timestamps: 0:00 Introduction 1:02 RAG-Chain 4:25 SQL-Chain 7...
Routing with LangChain - Basics - Semantic Routing vs. LLM Classifier
1.8K views · 2 months ago
In this video I will show you why you would want to perform routing in LangChain and how you can do it. We will compare semantic routing (cosine similarity) with an LLM-based classifier as the router. Timestamps: 0:00 Intro to routing 0:18 Semantic Routing 4:13 LLM based Classifier
DALL-E 3 - UPDATE - Change ONLY DETAILS of your Images
195 views · 3 months ago
OpenAI introduced new functionality: editing and changing only parts of existing images. In this (short) video I show you how to do this.
[APRIL FOOLS] - GPT-5 & SORA LEAKED - Get ALPHA Access to OPENAI's new models
675 views · 3 months ago
April Fools' joke video :-)
OpenAI Voice Assistant (Speech-To-Speech) with Function Calling
732 views · 3 months ago
RAPTOR: Dynamic Tree-Structured Summaries with LangChain - Advanced RAG
1.5K views · 3 months ago
How to create your AI Girlfriend with OpenAI
809 views · 3 months ago
RAGAS - Evaluate your LangChain RAG Pipelines
6K views · 3 months ago
LangChain vs. LlamaIndex - What Framework to use for RAG?
13K views · 3 months ago
LangChain in Production - Monitoring with FastAPI and LangFuse
2.5K views · 4 months ago
Semantic-Text-Splitter - AI Based Text-Splitting with LangChain
3.4K views · 4 months ago
I created a FULLY automated AI YOUTUBE CHANNEL for passive income
1.1K views · 4 months ago
Semantic-Text-Splitter - Create meaningful chunks from documents
9K views · 4 months ago
RAG in Production - LangChain & FastAPI
8K views · 4 months ago
Object Oriented Programming with Python - From Beginner to Advanced
9K views · 4 months ago
Gemini Ultra 1.0 vs GPT-4 - Can Google finally beat OpenAI?
827 views · 4 months ago
GPT Mention - Why it´s BIGGER than you think!
461 views · 5 months ago
Thanks! Great video!
Please make a video about a chatbot using FastAPI with memory.
Already have that: th-cam.com/video/Arf7UwWjGyc/w-d-xo.html
Cool, because of your videos I also switched to LangGraph, and it is so much nicer in my opinion. However, when I tested CRAG I realized that the document grading in particular takes very long, which is not suitable for a production environment. What would you do to make this faster? It could also be slow for me because I am using Mixtral 8x7B from Groq and not an OpenAI model.
You can also try to classify multiple documents at once and ask the LLM to write the index ([0:2], for example) to then filter the correct docs. Probably less accurate, but faster.
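The batch-grading idea from this reply can be sketched like this. This is a hedged illustration: the `llm_answer` string is an assumed model output, not a real API call, and a real grader prompt would have to ask the model for exactly this index format.

```python
import json

# Documents retrieved for a query (illustrative stand-ins).
docs = ["doc about pricing", "doc about refunds", "doc about the weather"]

# Assumed: the LLM graded all docs in ONE call and returned relevant indices.
llm_answer = "[0, 1]"

indices = json.loads(llm_answer)          # parse the index list
relevant = [docs[i] for i in indices]     # keep only the graded-relevant docs
print(relevant)  # ['doc about pricing', 'doc about refunds']
```

One LLM call instead of one per document trades a little grading accuracy for much lower latency, which matches the trade-off described above.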
As far as I understand, this does not work with Ollama at the moment, does it?
I use LangGraph almost exclusively and on a daily basis, it makes the code more readable. Debugging also works much better. But sometimes I use LCEL as a one-liner for simple things (prompt template | llm | output parser). This can also make the code more readable, because otherwise you have some kind of subgraph again instead of a one-liner. Conclusion: Your opinion is tried and tested 👍
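The one-liner mentioned in this comment (prompt template | llm | output parser) works because of how LCEL composes runnables. A minimal sketch of the idea, using stand-in classes rather than LangChain's real ones; LCEL implements the same pattern via `__or__` on its Runnable interface:

```python
class Runnable:
    """Anything with invoke(); composable with the | operator."""
    def invoke(self, x):
        raise NotImplementedError

    def __or__(self, other):
        return _Pipe(self, other)

class _Pipe(Runnable):
    """Feeds the output of one runnable into the next."""
    def __init__(self, first, second):
        self.first, self.second = first, second

    def invoke(self, x):
        return self.second.invoke(self.first.invoke(x))

# Stand-ins for prompt template, LLM, and output parser.
class Prompt(Runnable):
    def invoke(self, x):
        return f"Question: {x}"

class FakeLLM(Runnable):
    def invoke(self, x):
        return f"Answer to '{x}'"

class Parser(Runnable):
    def invoke(self, x):
        return x.strip()

chain = Prompt() | FakeLLM() | Parser()
print(chain.invoke("hi"))  # Answer to 'Question: hi'
```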
Did they stop with the developer mode toggle on the signup page? I can't seem to get a developer account.
@@Nonya-xv7hq Yes, I pointed it out in a comment. You now have to apply for that :/
Awesome video! Would you consider adding a module to discuss how to do tool calling with other LLMs (such as Llama3 70B via Groq or Mistral)?
Question upfront: does it not work with other models? LangChain normally provides a standardized interface for all models.
@@codingcrashcourses8533 - Thanks for the reply. Perhaps I was doing something incorrectly because it is working with Groq now. FYI your videos are probably the best I've found. Seriously great work. Thanks so much for creating this channel!
@@b18181 No worries, those questions are totally fine. That's just the biggest benefit of using LangChain: you don't have to worry about APIs, you can just switch classes and it will (should) work ;-). Thank you for your kind comment.
Thanks a lot for the great video! I especially like the Design Patterns part. Just some feedback: I wish you had first explained what you want to achieve with each example, then implemented the code and briefly gone over why each element is necessary, for example why the superclass Observer is needed. Coming from a C programming background, every problem I encounter during coding seems solvable by functions, and I never understood the real need for classes. It would be nice to make another video going a bit deeper into design patterns, explaining the chain of thought of "why" you choose classes for specific problems. Thanks a lot!
Thank you for your feedback. Maybe I can create a Design Patterns video in the future, where I cover this and also show "functional" alternatives with Python :)
@@codingcrashcourses8533 A more in-depth video on design patterns and best practices will definitely be appreciated! :) With some more concrete examples of when you choose each design pattern or if it's a personal choice etc.
Excellent demo. The only thing I would say is: show at the beginning what the outcome is / what you are trying to build, so you can see how it all ties together.
Thanks for the Feedback
That is a really interesting implementation! I wonder if this could help reduce the time of the retriever.add_documents operation, as I'm trying to do RAG with around 100 PDFs, and when testing the ParentDocument retriever this takes far too long. Do you know any solution for this?
Hm, how do you preprocess your pdfs? How many chunks do you have at the end?
@@codingcrashcourses8533 In my vector store they are split with a chunk size of 800. In my store I'm loading them using PyPDF loaders and a kv docstore.
@@codingcrashcourses8533 I'm using the PyPDF loader and then storing them in a LocalFileStore using create_kv_docstore. At the end my docstore has around 350 chunks.
Using astream, the response from the LLM has words that are split; for example, the word "hippopotamus" comes as two chunks, "hippo" and "potamus". When creating an app, how do you recognize and combine the two split parts into a single word for the front-end?
It does not matter as long as there is no "space" token in between :). The word will just be concatenated.
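The answer above can be shown with a tiny sketch: the front-end just appends each streamed chunk verbatim, so split tokens rejoin into whole words while explicit space tokens keep the word boundaries. The chunk list is an assumed example stream, not real astream output.

```python
# Assumed token stream: "hippopotamus" arrives split across two chunks.
chunks = ["The ", "hippo", "potamus", " swims", "."]

text = ""
for chunk in chunks:
    text += chunk  # append with no separator; boundaries come from space tokens

print(text)  # The hippopotamus swims.
```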
I think you make excellent videos. My only request for future content would be for you to make a quick recap on your agent related projects to point out the changes that have to be made to make them work locally without OpenAI. Thank you for all of the content you produce. I have learned a lot from you.
Well, that's actually the beauty of LangChain. Just replace the OpenAI classes with the classes you like; they all share the same Runnable interface. So instead of using ChatOpenAI, you can just use Ollama: from langchain_community.llms import Ollama. Pretty easy to switch models with LangChain :)
@@codingcrashcourses8533 I do that now with my applications and you are right it works extremely well. I am still navigating through ollamafunctions and what I think might be some payload adjustments I need to make on my end in some of the applications I have in development. Thank you again for consistently putting out great content.
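The class swap discussed in this thread works because both model wrappers expose the same `invoke()` method (LangChain's Runnable interface), so the calling code never changes. A minimal sketch of that idea with stand-in classes, not the real LangChain ones:

```python
class FakeOpenAI:
    """Stand-in for a hosted model wrapper (e.g. ChatOpenAI)."""
    def invoke(self, prompt: str) -> str:
        return f"[openai] {prompt}"

class FakeOllama:
    """Stand-in for a local model wrapper (e.g. Ollama)."""
    def invoke(self, prompt: str) -> str:
        return f"[ollama] {prompt}"

def answer(llm, question: str) -> str:
    # The application code only depends on invoke(), not on the class.
    return llm.invoke(question)

print(answer(FakeOpenAI(), "Hello"))  # [openai] Hello
print(answer(FakeOllama(), "Hello"))  # [ollama] Hello
```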
Is this subject to the lost-in-the-middle problem?
Yes, like any other ingestion step. You have methods like reranking to fight problems like this :)
Hi, is line 157 in the code meant to come before the iteration summaries loop or after in line 165? i.e. are we updating the all_summaries field with the previous cluster texts or does it not matter? Otherwise we would be updating "iteration summaries["texts"]" with the same value as "iteration summaries["summaries"]"
Great Video, Thanks for sharing!
Great video !
Great video! I would like to see a video about a customer support chatbot(with Q&A and RAG)
I got a very similar Video about that
@@codingcrashcourses8533 thank you very much. I think i saw it, it was developed in langchain. However, I would like to see it using langgraph. Thank you very much for making this kind of content
hi, could you help me out with the errors in this code: drive.google.com/file/d/1qlzGwMjaewy0Ni32Cpo7pZ_nz5x-AgTq/view?usp=drivesdk I have been struggling with integrating guardrails in my code since a long time. your help would be much appreciated
Nice video! Is there some way to retrieve the metadata as well with the multivector retriever? Such as page number or file name?
Yes, sure. You have access to the metadata attribute of the documents and can use it for whatever you want. If you struggle with that, maybe watch my LCEL crash course on this channel :)
@@codingcrashcourses8533 Sorry, I was being imprecise. I mean retrieving metadata from the docstore! Is that also possible?
I really want to see a LangGraph FastAPI version from you, sir.
Will be done. I will do a human-in-the-loop version.
@@codingcrashcourses8533 Yes please :)
Is there a GitHub repo for us to see this code and walk through it?
All of the projects I made videos about are available on GitHub. Everything ;)
This is a good video. Thank you.
Thanks for the video! Is there a way to evaluate the performance of the guardrails? Basically, I have a dataset of jailbreak prompts and want to see how many of them jailbreak the model before and after the implementation of guardrails. Thank you for your help!
You could evaluate that with RAGAS. I have a video on that topic on my channel.
@@codingcrashcourses8533 Just watched it! In the video, you explained how to evaluate the actual "information retrieval and delivery" capabilities of a RAG. I need to evaluate how well the guardrails work. I have a labeled dataset with prompts that the model shouldn't process, and I want to evaluate how many of those prompts the model processes before and after implementing NeMo. Any ideas how that can be done? My first guess was some type of ROC graph. Any idea would be helpful :)
@@niklasfischer3146 You can use custom datasets in RAGAS and set ground_truth and expected values there. At the end you will compare these and count how often the guardrails prevent a prompt that should not be processed (you can identify that by a response like "sorry, I will not answer that due to xyz").
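The counting step suggested in this reply can be sketched as follows. This is a hedged illustration: the refusal check on an assumed "sorry ..." phrasing and the sample responses are stand-ins; a real setup would compare each response against the labeled ground truth from the dataset.

```python
# Assumed model responses to a set of jailbreak prompts.
responses = [
    "Sorry, I will not answer that due to policy.",
    "Here is how you could do that ...",
    "Sorry, I will not answer that due to policy.",
]

def block_rate(responses: list[str]) -> float:
    """Fraction of responses identified as refusals (assumed marker phrase)."""
    refused = sum(1 for r in responses if r.lower().startswith("sorry"))
    return refused / len(responses)

# Run once before and once after enabling guardrails, then compare the rates.
print(f"{block_rate(responses):.2f}")  # 0.67
```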
This is great, but will the source code be made public?
It is public. I will update the repo, sorry
@@codingcrashcourses8533 Can you tell me where the repo is?
At 24:11, why self.arrows == self.arrows? For the rest you did self.(variablename) == other.(variablename).
Oops, I made an error there. Thanks for pointing it out. other.arrows is correct, of course.
@@codingcrashcourses8533 Also, just wanted to say thank you so much for this course! Really helpful, and I understood a lot of the concepts you covered! I subscribed and liked!
Thank you for such great video, as always! How can I find your Udemy courses?
www.udemy.com/course/langchain-on-azure-building-scalable-llm-applications/?couponCode=3CC86F0FBFF12BE2E8E3 www.udemy.com/course/langchain-in-action-develop-llm-powered-applications/?couponCode=73267520F1E98047E188 Here are promo codes where you can get them for the cheapest price available (and I get 95% of the money instead of 30% ;-) ). Thanks for your support.
I'm sure this is production-ready with some changes to identifying the currently logged-in user. But I'm curious how we would make it work if we want to integrate it into WhatsApp or other social channels. You always unlock endless possibilities with your videos 🦾🫶🏻
Great tutorial! I've got one question, though: at 6:02 and 6:54, does the model decide which tool to use on the basis of the docstring?
Yes, that docstring is the explanation for the LLM :)
Wait, what? I thought FAISS didn't support metadata filters? Weird that TimeWeighted works with it, no?
I am not too familiar with each change; FAISS is also a work in progress, maybe they added it in some version :)
@@codingcrashcourses8533 In any case, your video is amazing and you are greatly helping me for my internship project. Many thanks, keep up the great work 💪👍
I didn't know the umap library, it's very interesting. Good explanation of advanced RAG techniques, success to you!
thank you :)
A god among men
First comment! I have also purchased your Udemy course and found it really useful.
Which one? Thank you very much for your Support:)
@@codingcrashcourses8533 RAG Deployment with microservices and PgVector
I tried this code, and every time I got the exception "OpenAI function call failed: You exceeded your current quota, please check your plan and billing details". My remaining credit is $0.00, but in Usage the project's monthly bill is $0.00/$5.00 limit. Does anyone know how to solve this?
It seems like you used all your credits, or you did not buy any yet.
@@codingcrashcourses8533 I did not buy any because I think there is a free limit, right?
Which would you recommend for creating complex, production-ready agents with more than 30 nodes, with cycling and branching capabilities: Haystack, LangGraph, or a custom-built framework? I haven't seen a comparison between Haystack and LangGraph for real product development. What is your opinion?
I have to admit I am not that familiar with Haystack, and since I am pretty familiar with LangChain, my opinion would be pretty biased.
Why don't you make project videos, like a chatbot etc.?
I have multiple projects on my channel :)
@@codingcrashcourses8533 coupon?
route conversation is giving the error "401 Unauthorized". How do I debug this?
You probably just did not use the token for the endpoint.
If possible I would give 1k likes! That solved hours of studying!
Thank you for that comment. Really appreciate that :)
Nice video. I can echo the sentiment of others: most videos miss important things or basics for an inexperienced beginner. Do you think you could add a video on actual production (live production)? Second, just as feedback: showing the data flow and handshake of services with numbers and different colors (the flow, basically) would be great.
I have a full Udemy course on that :)
@@codingcrashcourses8533 I have to look for that. Have you also experimented with Claude (from Anthropic)?
On your GitHub site, there is a repository for an "Advanced RAG with Langchain" course. I cannot find it on Udemy. Is it already live? When can I expect it online?
3 weeks and it will be finished
@@codingcrashcourses8533 ❤️
Excellent tutorial. But what do you need the `from langchain_openai import ChatOpenAI` for in the first cycles example?
the package langchain-openai :)
But you don't use the ChatOpenAI class in that example, do you? I mean, you define a model, but where is it used? The decision to stop the cycle comes from the algorithm, doesn't it? I don't see any AI model involved in that cycle example. Edit: Ah, OK, the decision on the cycle is made by the model!
This is just great!
Thank you :)
LLMChain() is deprecated, and the output parser in the examples also causes a JSON output error. Would be nice if you could update the GitHub code. Thank you. If anyone is having issues with the JSON output, here is a fix:

from langchain_core.output_parsers import BaseOutputParser
from pydantic import BaseModel, Field

class LineList(BaseModel):
    lines: list[str] = Field(description="Lines of text")

class LineListOutputParser(BaseOutputParser[LineList]):
    def parse(self, text: str) -> LineList:
        return LineList(lines=text.strip().split("\n"))
Very good tutorial. Thank you.
Thank you for your comment:)
How would you store a chat history here? E.g. saving the last 2 QA pairs.
Just create a new attribute for that: history: list[BaseMessage]. You can append to it and do whatever you want with it.
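The suggestion above can be sketched with a minimal state that keeps only the last two Q&A pairs. This is a hedged illustration: a real LangGraph state would use list[BaseMessage] as the reply says; plain tuples are used here to keep the example self-contained.

```python
from typing import TypedDict

class AgentState(TypedDict):
    question: str
    history: list[tuple[str, str]]  # (question, answer) pairs

def record_turn(state: AgentState, q: str, a: str) -> AgentState:
    # Append the new pair, then keep only the last two turns.
    history = (state["history"] + [(q, a)])[-2:]
    return {"question": q, "history": history}

state: AgentState = {"question": "", "history": []}
state = record_turn(state, "q1", "a1")
state = record_turn(state, "q2", "a2")
state = record_turn(state, "q3", "a3")
print(state["history"])  # [('q2', 'a2'), ('q3', 'a3')]
```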
Nice work! A few new LangChain methods I was not aware of :)
The video doesn't have an implementation example of map re-rank, right?
No, sorry. It's quite old anyway.