ChatGPT Prompt Engineering Principles: Chain of Thought Prompting

  • Published Oct 26, 2024

Comments • 48

  • @georgegray2712
    @georgegray2712 1 year ago +19

    As someone who has been involved with computers since the Sinclair ZX-81 in the 80s, it is absolutely incredible that a machine can perform this level of reasoning.

    • @SimonHuggins
      @SimonHuggins 3 months ago

      Well, it had a bit more than 1 KB of memory to work with!

  • @ytpah9823
    @ytpah9823 1 year ago +5

    🎯 Key Takeaways for quick navigation:
    00:00 🧠 "Perfect Prompt Principles" is a series exploring techniques to improve prompt engineering for better results in language models.
    01:06 🔄 Chain of Thought Prompting involves breaking down complex problems into step-by-step subproblems to achieve more accurate and useful outputs from language models.
    02:03 🚫 Some problems, like riddles, can't be solved directly with language models; they require a Chain of Thought approach to systematically tackle each component.
    03:13 📝 When using Chain of Thought prompting, start by listing all the subproblems that need to be addressed before solving the main problem.
    06:05 🤖 Language models may not have all the answers, but making educated guesses based on probabilities can help progress in solving complex problems.
    08:51 🧩 Chain of Thought can be applied to various problems, even when a direct solution seems possible, to ensure thorough and detailed analysis.
    10:12 🗂️ Chain of Thought can lead to nuanced and considered answers by addressing multiple aspects and potential ambiguities within a problem.
    11:48 🧐 Using Chain of Thought, language models can provide more insightful answers by considering the sequence of events and potential variables within a problem-solving scenario.
    Made with HARPA AI
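The workflow in these takeaways (list the subproblems first, then solve them in order) can be sketched as a reusable prompt template. A minimal illustration; the function name and exact wording are assumptions, not the prompts used in the video:

```python
# Sketch of a chain-of-thought prompt template: the model is asked to
# enumerate subproblems before answering, guessing on probabilities where
# it lacks facts, as described in the takeaways above.

def build_cot_prompt(problem: str) -> str:
    """Wrap a problem statement in chain-of-thought instructions."""
    return (
        "Solve the following problem using a chain of thought.\n"
        "Step 1: List every subproblem that must be answered first.\n"
        "Step 2: Answer each subproblem in order, making an educated guess "
        "when you are not certain.\n"
        "Step 3: Combine the answers into a final solution.\n\n"
        f"Problem: {problem}"
    )

prompt = build_cot_prompt("Where is the ball after the box is shipped?")
print(prompt)
```

The resulting string is what you would paste into the chat (or send via an API) in place of the bare problem.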

  • @ManiSaintVictor
    @ManiSaintVictor 9 months ago +1

    Best explanation of chain of thought I've come across. Thanks.

  • @aiadvantage
    @aiadvantage 1 year ago +7

    Great video Kris! Very valuable topic

    • @amandamate9117
      @amandamate9117 1 year ago

      lol

    • @AllAboutAI
      @AllAboutAI 1 year ago +2

      Thanks mate :) let's have a catch-up soon

    • @arnettbuckers
      @arnettbuckers 1 year ago +1

      Love the fact that you guys are friendly. Love both your channels. Could you do a video on Tree of Thoughts prompting similar to this one? I've tried the ToT approach, but I'm not as skilled in AI as you two, so it'd be great to see a detailed example from either of you. Either way, thanks for the content, guys! Really appreciate it! 🤗

  • @NABZ028
    @NABZ028 1 year ago +5

    As always, the key is to guide the LLM and provide him with the best advice. The LLM's ability to "reason" is amazing, and I can only imagine what they will be able to accomplish in the future. The examples are great because they provide a clear explanation of the technique and show how it can be applied to a variety of situations.

    • @AllAboutAI
      @AllAboutAI 1 year ago +2

      Thanks mate, 100% agree

    • @georgegray2712
      @georgegray2712 1 year ago +1

      * “provide her with the best advice.”

  • @RaitisPetrovs-nb9kz
    @RaitisPetrovs-nb9kz 1 year ago +2

    Great, I just tested my custom instructions, which I crafted together with ChatGPT-4 some weeks ago. They solve both problems in one go! I knew they were good! Great job, Kris, love the series; I will send you the custom instruction prompt over email!

    • @AllAboutAI
      @AllAboutAI 1 year ago

      Awesome mate :) will do a Custom Instructions vid

    • @georgegray2712
      @georgegray2712 1 year ago

      Please share your custom instructions!!

  • @iampuco445
    @iampuco445 1 year ago +4

    Amazing didactic ability!!! Chapeau … I’ll borrow that for my next talk

  • @wenyunie3575
    @wenyunie3575 10 months ago

    I really like your two examples. They show perfectly the usefulness of COT

  • @micbab-vg2mu
    @micbab-vg2mu 1 year ago +2

    Great video - I am using your prompts with some small modifications - the results are great - thank you

  • @brianstieve648
    @brianstieve648 1 year ago +1

    This is great! I don't know if it has learned more since this was uploaded, but it solved problem 1 (the museum one) for me with no issues, without chain-of-thought prompting.

  • @travisross8704
    @travisross8704 1 year ago +1

    Awesome! Let's see more complex prompt engineering videos!

  • @brunosompreee
    @brunosompreee 10 months ago

    Thanks for the explanation. That helped me a lot with some prompt tuning I'm doing.

  • @Hall
    @Hall 4 months ago

    Great example! ❤
    Interestingly, my tests also reveal that simply adding "think step by step" with GPT-4o can accomplish the same results without explicit CoT prompting.
    Takeaway: as AI gets smarter, it can handle more complex problems without as much human assistance.

  • @AndiAsante
    @AndiAsante 6 months ago

    Great explanation, thanks 🙏

  • @BirgittaGranstrom
    @BirgittaGranstrom 1 year ago +4

    Love your Prompt Principles! I will listen a second time and see how I can apply it to one of my coaching methods to ISOLATE a problem. (I'm working on the English acronym version :-) Identifiera-Specificera-OrsaksInventera-LösningsFokusera-EffektAnalysera-Radera-Applådera

    • @amandamate9117
      @amandamate9117 1 year ago

      show me your prompt.

    • @AllAboutAI
      @AllAboutAI 1 year ago +1

      Awesome :D just send me an email if you need some additional input

    • @BirgittaGranstrom
      @BirgittaGranstrom 10 months ago

      Thanks for your comment! It worked as a reminder of the ISOLATE method. I will get back to you when it's done. So much interesting stuff to do and too little time.

    • @BirgittaGranstrom
      @BirgittaGranstrom 10 months ago

      @@AllAboutAI I will definitely get back to you on this. Sorry for the delay!

  • @ehza
    @ehza 1 year ago

    This is useful, thanks man!

  • @ylazerson
    @ylazerson 1 year ago

    great video - thanks!

  • @somag6810
    @somag6810 10 months ago

    My intuition is that knowledge graphs can really help get better results, and ChatGPT might be using the same, especially for chain-of-thought problems.

  • @crobinso2010
    @crobinso2010 1 year ago +6

    This should be called the "Socratic Method"

    • @gaussdog
      @gaussdog 1 year ago +1

      Chain of Socrates... 😁 Interestingly enough, I have an extraordinarily complex idea that I'm working on, and it requires exactly this as part of its setup. I jokingly call it ChatSPT:
      Socratically Pretrained Transformer

    • @iampuco445
      @iampuco445 1 year ago +1

      I don't want to be a Klugscheisser (know-it-all), but the Socratic method is a bit different. In this (amazing) video you have the whole riddle in place and cut it into slices. In a Socratic dialogue you develop the riddle by inspecting a single thought in more and more detail...

  • @NicheProfitEngine
    @NicheProfitEngine 5 months ago

    Is it possible to structure this into a single prompt?

  • @mageprometheus
    @mageprometheus 1 year ago

    I tried the ball and box problem with GPT-3.5-turbo. Oh, dear. It needs a lot of hand-holding and keeps forgetting what it learned. Thanks for this.

  • @MarkVogt-g3d
    @MarkVogt-g3d 11 months ago +1

    INTERESTING video !
    I noticed you fell into the same trap we all occasionally (regularly!) fall into - you start saying "I THINK the response is CORRECT"...
    You THINK the response is CORRECT?
    We have to challenge ourselves to NOT consider our OPINIONS are equivalent to FACT.
    "I think the response is correct" implies you KNOW the CORRECT answer when in fact you only have a hunch, and even YOU are making ASSUMPTIONS about HOW you held the box, and even how long it took to ship the box.
    "I think I AGREE WITH the answer" is a bit more honest; you DON'T in fact "know" the CORRECT answer, nor are you pretending to be any sort of omniscient guru.
    ALL you're saying is your OPINION - you happen to AGREE with the response as "reasonable"...
    That's all we can say in light of the ambiguities - that the conclusion is REASONABLE.
    Train of Thought/Chain of Thought prompting is fast becoming one of the most fascinating aspects of GPT for many, many data scientists like myself!
    Keep up the great videos!
    Mark Vogt, Principal Data Scientist, Avanade

  • @trud811
    @trud811 1 year ago +1

    For the first Japan problem: GPT-4 can solve it even with the original prompt, without any clarification. Is this method still applicable for GPT-4, or are all such steps performed internally in the GPT-4 engine?
    Here is my problem: "Michael is a 31 year old man from America. He is at that really famous museum in France looking at its most famous painting. However, the artist who made this painting just makes Michael think of his favourite cartoon character from his childhood. What was the country of origin of the thing that the cartoon character usually holds in his hand?" Solve this:

    • @AllAboutAI
      @AllAboutAI 1 year ago

      Yeah, it is very applicable to GPT-4 too. But as you state, GPT-4 is already much better than 3.5 at solving these types of problems zero-shot. But for longer, more complex problems it really makes a big difference.

    • @trud811
      @trud811 1 year ago +1

      @@AllAboutAI Thanks for the answer. I just read in another source that you can just add "Take a deep breath and work on this problem step-by-step" and the result will be better. I applied it to your second problem, and it worked (so no need for the complex stuff from the video). Maybe you can find cases where even this doesn't work?
      I am in my garage, I pick up a small ball and I grab a small box that is missing the bottom. I walk into my office and I put the small ball into the small box. Then I take the small box with me to the postal office. Here I put the small box into a bigger box and sent it to my friend in New York. Where is the ball now? Take a deep breath and work on this problem step-by-step
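The suffix trick in the reply above (often called zero-shot chain of thought) can be captured in a tiny helper. This is just a sketch; the helper name `zero_shot_cot` and the exact wiring are illustrative assumptions, not anything shown in the video:

```python
# Zero-shot chain-of-thought: instead of hand-crafting subproblems, append a
# step-by-step trigger phrase to the raw problem statement. The helper name
# is a hypothetical choice for illustration.

COT_SUFFIX = "Take a deep breath and work on this problem step-by-step"

def zero_shot_cot(problem: str) -> str:
    """Return the problem with the zero-shot CoT trigger phrase appended."""
    return f"{problem.rstrip()} {COT_SUFFIX}"

print(zero_shot_cot("Where is the ball now?"))
```

The returned string would then be sent to the model as the whole prompt, exactly as the commenter did with the ball-and-box problem.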

  • @tohkengleng9034
    @tohkengleng9034 several months ago

    Why do we need to provide one-shot or few-shot examples? ChatGPT knows straight away.
    Alice has 6 🍎. She throws away 2 🍎. She gives 2 🍎 to Peter and Peter returns 1 🍎 to Alice. How many 🍎 do Alice have?
    Alice originally has 6 apples. Here's what happens step by step:
    1. She throws away 2 apples: \(6 - 2 = 4\) apples left.
    2. She gives 2 apples to Peter: \(4 - 2 = 2\) apples left.
    3. Peter returns 1 apple to Alice: \(2 + 1 = 3\) apples left.
    So, Alice has 3 apples at the end.

    • @tohkengleng9034
      @tohkengleng9034 several months ago

      With a zero-shot prompt, ChatGPT already knows.
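The apple arithmetic in the comment above can be replayed mechanically; this tiny sketch mirrors the model's three steps, one event at a time:

```python
# Replay the apple transactions from the comment above, one event at a time,
# mirroring the model's step-by-step answer.

alice = 6     # Alice starts with 6 apples
alice -= 2    # she throws away 2
alice -= 2    # she gives 2 to Peter
alice += 1    # Peter returns 1 to Alice
print(alice)  # → 3
```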

  • @TechMarine
    @TechMarine 1 year ago +2

    Would it solve a crime if it were fed witness reports the same way it solved the riddle?!

    • @leod7
      @leod7 1 year ago

      Pretty much like in the movie Minority Report

  • @typicallyvirgo
    @typicallyvirgo 7 months ago

    For folks who understand GPT, the type of examples quoted in your CoT prompting are absolutely ridiculous. CoT should be used for real problems, not wordy riddles. What you demonstrated is just a working demonstration of GPT; it does little to explain CoT in any way...

  • @jdanorthwest
    @jdanorthwest 1 year ago

    Using AI to identify the country of origin of a cartoon character ... good God, we're all doomed