Learn to Embed Data with ChatGPT to Make a Recommendation Bot

  • Published Dec 27, 2024

Comments • 30

  • @davidrose9577 • 1 year ago +4

    Great info, great walkthrough and execution. I appreciate that you write the code in a large font, as it's frequently difficult to see the code on screen. Thank you for the extra effort.

    • @tech-at-work • 1 year ago

      Thanks! Glad to hear it helps!

  • @caiyu538 • 1 year ago +1

    Great demo

  • @alireaziat3842 • 1 year ago

    This is great, thanks. I need to understand this: is the movie description the information that has been vectorized here? I mean, can the system measure similarity on full-length descriptions?

    • @tech-at-work • 1 year ago

      Correct, it’s assigning the vectors to the entire description!
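
      For reference, a minimal sketch of that step, assuming a hypothetical movies.xlsx with 'Movie' and 'Description' columns and the pre-1.0 openai SDK used in the video:

      from openai.embeddings_utils import get_embedding
      import pandas as pd

      df = pd.read_excel('movies.xlsx')  # hypothetical source file
      # Each full description is passed to the model as a single string,
      # so the returned vector represents the entire description.
      df['Embedding'] = df['Description'].apply(
          lambda text: get_embedding(text, engine='text-embedding-ada-002'))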

  • @emiliostavrou • 1 year ago +1

    Do you think it would be possible to connect this to a live spreadsheet?

    • @tech-at-work • 1 year ago +1

      You could have a live spreadsheet in Google Colab, but you'd need to re-run the embedding code each time to actually use it.
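
      For reference, a sketch of one way to do that in Colab, pulling a Google Sheet with gspread and re-running the embedding step on each refresh (the sheet name 'Movies' and column names are hypothetical):

      from google.colab import auth
      from google.auth import default
      import gspread
      import pandas as pd
      from openai.embeddings_utils import get_embedding

      auth.authenticate_user()                 # authorize with your Google account
      creds, _ = default()
      gc = gspread.authorize(creds)

      ws = gc.open('Movies').sheet1            # hypothetical sheet name
      df = pd.DataFrame(ws.get_all_records())  # fresh copy of the live data

      # Re-run the embedding step each time the sheet changes
      df['Embedding'] = df['Description'].apply(
          lambda t: get_embedding(t, engine='text-embedding-ada-002'))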

  • @redbaron3555 • 1 year ago +1

    Awesome videos! Thank you!

  • @HamzaRashid • 1 year ago +1

    Thanks for this video and the fine-tuning video. Can you help me understand: are these embeddings/fine-tunings done at the API-key level? Or are they done at the model level, so they can be used with new API keys under the same account?

    • @tech-at-work • 1 year ago

      You can use the embedded/fine-tuned models with multiple API keys from the same account!

  • @tyrealq • 1 year ago +1

    Is it possible to use embeddings to build a lexicon that can be used to classify words/phrases/sentences into particular categories, even if a word/phrase/sentence is not in the dictionary? Thank you so much for your tutorial. It is the most straightforward and organized video I have ever encountered.

    • @tech-at-work • 1 year ago +1

      It can, as long as you have sufficient training data using the new words, along with the associated classifications.

    • @tyrealq • 1 year ago

      @@tech-at-work Thanks for the feedback! Say I want to classify college football experiences: how much annotated training data would you consider sufficient? I am working on a project but having a hard time gauging the effort. Thanks again!

    • @tech-at-work • 1 year ago +1

      It will depend on the size of the lexicon you're building and the different word combinations. You want examples and context for each word (and word combination), including common nearby words, so a few examples per unique word should be sufficient. One way to set that up is shown in the sketch after this thread.

    • @tyrealq • 1 year ago

      @@tech-at-work Thank you, sir. I am still a little bit confused about fine-tuning vs. embeddings. For college football experience classification, there are certain experiences that should be classified in one category and not the other (e.g., "food is not good" should go to the concession-quality category, whereas "beverages are expensive" belongs to the concession-pricing category). What I encountered was that, using GPT-3.5 out of the box, classifications were not done as anticipated (e.g., GPT-3.5 could not differentiate concession quality from concession pricing). In this case, should I use embeddings or fine-tuning to improve the quality of the output? Thank you again!
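
    A minimal sketch of the embeddings route discussed above: embed a few labeled examples per category, average them into a category centroid, and assign new text to whichever centroid is closest. The example phrases and the two 'concession' categories come from this thread; everything else (helper names, the test sentence) assumes the pre-1.0 openai SDK used elsewhere in the video.

    from openai.embeddings_utils import get_embedding, cosine_similarity
    import numpy as np

    # Hypothetical labeled examples: a few per category, as suggested above
    examples = {
        'concession quality': ['food is not good', 'the hot dogs were cold'],
        'concession pricing': ['beverages are expensive', 'twelve dollars for a soda is too much'],
    }

    # One centroid vector per category (mean of its example embeddings)
    centroids = {
        label: np.mean([get_embedding(t, engine='text-embedding-ada-002') for t in texts], axis=0)
        for label, texts in examples.items()
    }

    def classify(text):
        emb = get_embedding(text, engine='text-embedding-ada-002')
        return max(centroids, key=lambda label: cosine_similarity(emb, centroids[label]))

    print(classify('the nachos were stale'))  # expected: 'concession quality'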

  • @nesun3 • 1 year ago +1

    How do I add the closest top-5 recommendations?

    • @tech-at-work • 1 year ago +1

      You need to change n=6 and add a colon after the [1] so it returns all remaining rows; then, for Gradio, you need to output each list as a string (a sequence of characters). To get the top 5, adjust your last two sections of code to this, and it should work (it reuses df, get_embedding, cosine_similarity, and gr from earlier in the notebook):
      def search_movies(df, movie_title, n=6):
          embedding = get_embedding(movie_title, engine='text-embedding-ada-002')
          df['similarities'] = df.Embedding.apply(lambda x: cosine_similarity([x], [embedding]))
          res = df.sort_values('similarities', ascending=False).head(n)
          # Row 0 is the searched movie itself, so skip it and return the next 5
          return res.iloc[1:]['Movie'].tolist(), res.iloc[1:]['similarities'].tolist()

      def gradio_wrapper(movie_title):
          top_movies, similarity_scores = search_movies(df, movie_title)
          top_movies_str = '\n'.join(map(str, top_movies))
          similarity_scores_str = '\n'.join(map(str, similarity_scores))
          return top_movies_str, similarity_scores_str

      iface = gr.Interface(
          gradio_wrapper,
          inputs="text",
          outputs=[gr.outputs.Textbox(label="Top Movies"),
                   gr.outputs.Textbox(label="Similarity Scores")],
          interpretation="default",
      )
      iface.launch(share=True)

  • @caiyu538 • 1 year ago +1

    Does OpenAI directly tokenize for us in your code?

    • @tech-at-work • 1 year ago

      You can add code to tokenize, but I didn’t include it in this example
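
      For reference, a minimal sketch of that with tiktoken (an assumption; it isn't shown in the video), useful for checking that a description fits within the embedding model's token limit:

      import tiktoken

      enc = tiktoken.encoding_for_model('text-embedding-ada-002')  # cl100k_base
      tokens = enc.encode('A hacker discovers reality is a simulation.')
      print(len(tokens))  # number of tokens this text will consume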

  • @dearlove88 • 1 year ago

    So, my understanding, which I'm happy to have corrected, is that fine-tuning doesn't actually 'add data' to an LLM, unless you want to retrain the whole model. Vectorizing/embedding is pretty much the only financially viable option to insert data and get OpenAI to answer questions about said data.

    • @tech-at-work • 1 year ago +1

      You're mostly correct; fine-tuning directly adjusts the existing model's parameters but does not "add new data". Embeddings allow the model to understand (context & sentiment) new data as an input, and they also work with an existing model. Embeddings are much cheaper but less nuanced, whereas fine-tuning is more accurate but requires better-formatted data and is more expensive to perform.

  • @bryancaro8625 • 1 year ago

    Great video.
    I'm getting this error: RetryError[] from the OpenAI embedding call. I'm trying to embed 5,000 rows, but it works if it's fewer. Do you know why?
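
    One likely cause: get_embedding retries failed requests behind the scenes (via tenacity) and raises RetryError[] once it gives up, which large runs tend to hit through API rate limits. A possible workaround, sketched here with a hypothetical batch size and pause:

    import time

    embeddings = []
    for start in range(0, len(df), 100):  # hypothetical batch size
        batch = df['Description'].iloc[start:start + 100]
        embeddings += [get_embedding(t, engine='text-embedding-ada-002') for t in batch]
        time.sleep(5)  # back off between batches so the rate limit can recover
    df['Embedding'] = embeddings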

  • @IsharaWijayaratne • 1 year ago

    I wasn't able to download the Excel file 🥺

    • @tech-at-work • 1 year ago +1

      I updated the link, let me know if you have any issues now!

    • @IsharaWijayaratne • 1 year ago +1

      @@tech-at-work Thanks so much, it works.

  • @sergun4703 • 1 year ago

    Hi, I have tried to run your code but hit an error at the pip install step:
    pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    lida 0.0.10 requires kaleido, which is not installed.
    tensorflow 2.13.0 requires typing-extensions<4.6.0,>=3.6.6, but you have typing-extensions 4.8.0 which is incompatible.
    Do you have any idea how to fix that?

  • @hanabimock5193 • 1 year ago

    Private data on ChatGPT 🚩🚩🚩🚩

    • @tech-at-work • 1 year ago

      Fair point, make sure you're comfortable/allowed to share the data with OpenAI