Finally there is someone who introduces pydantic v2
Great video! Nothing superfluous, everything is clear. Good examples
Hands down the best 30 minutes invested on Learning Python : Huge fan of your teaching method!
Thank you very much :)
Awesome tutorial! Got me up to speed with a library that's using a lot of Pydantic that I'm trying to understand. Thank you :)
@@raphipik thank you for your comment :)
Thanks man, you covered basically everything you need to know to get started. Undoubtedly the best Pydantic course I've seen on YouTube
thank you for that comment :)
Coming from typescript ecosystem where I'm using packages like Zod to do basically the same things, it is nice to see python has this gem, too
and don't forget pandera for runtime validation on dataframes. nice to see python gaining type support in multiple areas.
Well done, this is an excellent demonstration of the uses of Pydantic.
Extra thank-you for pydantic-settings!
really enjoyed it, well explained
Nice! Much needed walk through
Great that it's useful for you :-)
Very helpful. Thanks!
At 20:39, has the @computed_field decorator been removed from above the def age() function?
Very nice video on Pydantic. Thank you so much.
Thank you for your nice comment:)
Hello. Thanks for the video. In the last operation using env_file, do the env variables load from the environment rather than from the file?
Is there anything specific that makes this the best library versus the standard library's dataclasses or the attrs library?
Come on, I explain that in the first 20 seconds of this video 😅
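The short version of that difference: standard dataclasses only annotate types, while Pydantic enforces them at runtime. A minimal comparison sketch:

```python
from dataclasses import dataclass
from pydantic import BaseModel, ValidationError

@dataclass
class PointDC:
    x: int

class PointPD(BaseModel):
    x: int

# dataclasses do not validate: the wrong type is silently accepted
dc = PointDC(x="oops")
print(dc.x)  # oops

# Pydantic rejects the same input at construction time
try:
    PointPD(x="oops")
except ValidationError:
    print("rejected")
```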
As far as I can see, Pydantic kind of kills the __init__ method. When I define classes, I no longer need to use the __init__ method, just the Pydantic attributes. Is that correct?
Yes, you are correct to some extent, but let's clarify the details.
When using Pydantic to define data models in Python, it automatically generates the __init__ method for you, based on the fields you define in the class. This means that you generally don't need to write a custom __init__ method yourself because Pydantic handles the initialization of the attributes, as well as validation and type checking.
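A minimal sketch of what that looks like in practice (assuming Pydantic v2):

```python
from pydantic import BaseModel, ValidationError

class User(BaseModel):
    name: str
    age: int

# No custom __init__ needed: Pydantic generates one from the declared fields
user = User(name="Alice", age=30)
print(user.age)  # 30

# Validation and type checking happen automatically at construction time
try:
    User(name="Bob", age="not a number")
except ValidationError:
    print("validation failed")
```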
@@codingcrashcourses8533 Thank you so much!
Thank you, it came at a great time! I'm having a problem: I'm creating an application that receives a dictionary from a request and then tries to convert it into a structure that the 'convert_pydantic_to_openai_function' method accepts, but it's difficult! Declaring it statically like:
"""class Tagging(BaseModel):
title: str = Field(description="Write a title")
..."""
and then using "convert_pydantic_to_openai_function(Tagging)", everything goes well. But when I try to build fields such as 'title' dynamically from the dictionary, the problem arises. Would you know how to do this conversion properly? Then I could pass it on to the AI dynamically to process the fields the user provided in the request!
Ok, that is an interesting idea. I would probably try to create a class with no attributes attached, allow extra attributes, and set these attributes after instantiation. Might that be a solution?
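Another way to approach the dynamic case is Pydantic's create_model, which builds a BaseModel subclass at runtime from field definitions given as (type, Field(...)) tuples. A hedged sketch, assuming the request delivers field names and descriptions as a dict:

```python
from pydantic import Field, create_model

# Fields arriving from a request, e.g. as a dict of name -> description
request_fields = {
    "title": "Write a title",
    "summary": "Write a short summary",
}

# Build the model dynamically instead of declaring it statically
Tagging = create_model(
    "Tagging",
    **{name: (str, Field(description=desc)) for name, desc in request_fields.items()},
)

# The generated class behaves like a hand-written BaseModel subclass
schema = Tagging.model_json_schema()
print(schema["properties"]["title"]["description"])  # Write a title
```

The resulting class can then be passed on to whatever expects a BaseModel subclass, such as the function-conversion helper mentioned above.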
Nice🎉
But wait. The Food object in your instance of the Restaurant class is not a Food object but a dictionary.
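On that point: Pydantic validates nested structures, so a plain dict passed where a Food model is expected gets converted into a Food instance during validation. A small sketch (model names assumed from the video, Pydantic v2):

```python
from pydantic import BaseModel

class Food(BaseModel):
    name: str
    price: float

class Restaurant(BaseModel):
    name: str
    menu: list[Food]

# The inner dict is validated and converted into a Food instance
r = Restaurant(name="Trattoria", menu=[{"name": "Pizza", "price": 9.5}])
print(type(r.menu[0]).__name__)  # Food
```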
Ohh man, I wish I had known about computed fields already! That's a life saver
Yep, one of my favorite features too :)
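For reference, a minimal @computed_field sketch (Pydantic v2): the decorated property is derived from other fields and included in serialization.

```python
from datetime import date
from pydantic import BaseModel, computed_field

class Person(BaseModel):
    birth_year: int

    @computed_field  # included in model_dump() and the JSON schema
    @property
    def age(self) -> int:
        return date.today().year - self.birth_year

p = Person(birth_year=2000)
print("age" in p.model_dump())  # True
```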
Thanks for the video. I’m currently very stuck on the concept of using multiple embeddings or vector indexes vs metadata / knowledge graph. My use case is simple: I have the text of a journal written roughly in chronological sequence. Embedding functions take into account the lexical meaning of words and their approximate distance to each other, but they are not smart enough to allow something like “chapter 3 starts in 1872, so following day/month references like March 4 should have a closer cosine distance to 1872 than ‘March 4, 1824’”.
Is it possible to create a low dimensional embedding vector that represents sequence in the document or rough chronological distance to each other?
I will pay $10 to anyone who is able to give me insight on this!
I would try to have a look at SelfQueryRetrievers, if you use LangChain
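One simple way to approach the sequence question: append a low-dimensional, scaled position (or timestamp) component to each chunk's embedding before indexing, so cosine similarity is pulled toward chronologically nearby chunks. A rough sketch with toy vectors; the weight 0.5 is an assumption to tune against your data:

```python
import numpy as np

def with_position(embedding: np.ndarray, position: float, weight: float = 0.5) -> np.ndarray:
    """Append a scaled document-position scalar (0..1) and re-normalize."""
    v = np.append(embedding, weight * position)
    return v / np.linalg.norm(v)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy example: identical text embeddings at different document positions
text = np.array([1.0, 0.0])
early, mid, late = (with_position(text, p) for p in (0.1, 0.5, 0.9))

# A query anchored near position 0.5 now scores 'mid' highest
query = with_position(text, 0.5)
print(cosine(query, mid) > cosine(query, early))  # True
print(cosine(query, mid) > cosine(query, late))   # True
```

The larger the weight, the more the retrieval is biased toward chronological proximity relative to lexical similarity.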
"Validierung" (German for "validation") without Pydantic :D Language validation failed :D
:p