As someone who has been involved with computers since the Sinclair ZX-81 in the 80s, it is absolutely incredible that a machine can perform this level of reasoning.
Well, had a bit more than 1k of memory to work with!
🎯 Key Takeaways for quick navigation:
00:00 🧠 "Perfect Prompt Principles" is a series exploring techniques to improve prompt engineering for better results in language models.
01:06 🔄 Chain of Thought Prompting involves breaking down complex problems into step-by-step subproblems to achieve more accurate and useful outputs from language models.
02:03 🚫 Some problems, like riddles, can't be solved directly with language models; they require a Chain of Thought approach to systematically tackle each component.
03:13 📝 When using Chain of Thought prompting, start by listing all the subproblems that need to be addressed before solving the main problem.
06:05 🤖 Language models may not have all the answers, but making educated guesses based on probabilities can help progress in solving complex problems.
08:51 🧩 Chain of Thought can be applied to various problems, even when a direct solution seems possible, to ensure thorough and detailed analysis.
10:12 🗂️ Chain of Thought can lead to nuanced and considered answers by addressing multiple aspects and potential ambiguities within a problem.
11:48 🧐 Using Chain of Thought, language models can provide more insightful answers by considering the sequence of events and potential variables within a problem-solving scenario.
Made with HARPA AI
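The workflow in the takeaways above (list the subproblems first, then have the model work through them before answering) can be sketched as a small prompt-construction helper. This is just an illustrative sketch, not anything from the video; `build_cot_prompt` and the wording of the template are my own invention, and the subproblems shown are taken from the ball-and-box riddle discussed below in the thread.

```python
def build_cot_prompt(problem: str, subproblems: list[str]) -> str:
    """Assemble a single Chain of Thought prompt from a problem statement
    and the subproblems identified in an earlier 'list the subproblems' step."""
    lines = [
        "Solve the following problem step by step.",
        f"Problem: {problem}",
        "Work through these subproblems in order before giving a final answer:",
    ]
    lines += [f"{i}. {sub}" for i, sub in enumerate(subproblems, start=1)]
    lines.append("Finally, state the answer on its own line prefixed with 'Answer:'.")
    return "\n".join(lines)


# Example: the ball-and-box riddle, decomposed as the video suggests.
prompt = build_cot_prompt(
    "Where is the ball now?",
    [
        "Track where the ball is placed.",
        "Note that the small box is missing its bottom.",
        "Work out what happens when the box is carried.",
    ],
)
print(prompt)
```

The resulting string would then be sent to whatever LLM API you use; the helper only handles the prompt assembly.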
Best explanation of chain of thought I've come across. Thanks.
Great video Kris! Very valuable topic
lol
Thanks mate :) lets have a catch up soon
Love the fact that you guys are friendly. Love both your channels. Could you do a video on Tree of Thoughts prompting similar to this one? I've tried using the ToT approach but I'm not as skilled in AI as you two. Be great to see a detailed example from either one of you. Either way. Thanks for the content guys! Really appreciate it!🤗
As always, the key is to guide the LLM and provide him with the best advice. The LLM's ability to "reason" is amazing, and I can only imagine what they will be able to accomplish in the future. The examples are great because they provide a clear explanation of the technique and show how it can be applied to a variety of situations.
Thanks mate, 100% agree
* “provide her with the best advice.”
Great, I just tested my custom instruction, which I crafted together with ChatGPT-4 some weeks ago. It solves both problems in one go! I knew it was good! Great job, Kris, love the series, I will send you the custom instruction prompt over email!
Awesome mate :) will do a Custom Instructions vid
Please share your custom instructions!!
Amazing didactic ability!!! Chapeau … I’ll borrow that for my next talk
I really like your two examples. They show perfectly the usefulness of COT
Great video - I am using your prompts with some small modifications - the results are great - thank you
Thanks mate :)
This is great! Don't know if it's learned more since it was uploaded, but it solved problem 1 for me with no issues about the museum without chain of thought prompting.
It's gotten a lot dumber. Not worth using anymore.
Awesome! Let's see more complex prompt engineering videos!
Thanks for the explanation. That helped me a lot in a prompt tuning I'm doing.
Great example! ❤
Interestingly, my tests also reveal that adding “think step by step” with GPT 4o can accomplish the same results without COT prompting.
Takeaway: As AI gets smarter, it can handle more complex problems without as much human assistance.
Great explanation, thanks 🙏
Love your Prompt Principles! Will listen a second time and see how I can apply it to one of my coaching methods to ISOLATE a problem. (I'm working on the Englings acronym version:-) Identifiera-Specificera-OrsaksInventera-LösningsFokusera-EffektAnalysera-Radera-Applådera
show me your prompt.
Awesome :D just send me a mail if you need some additional input
Thanks for your comment! It worked as a reminder of the ISOLATE method. I will get back to you when it's done. So much interesting stuff to do and too little time.
@@AllAboutAI I will definitely get back to you on this. Sorry for the delay!
This is useful, thanks man!
great video - thanks!
My intuition says that knowledge graphs can really help get better results, and ChatGPT might be using them internally, especially for chain of thought problems.
This should be called the "Socratic Method"
Chain of Socrates, .. 😁 interestingly enough, I have an extraordinarily complex idea that I’m working on, and it requires exactly this, as part of its set up, and I jokingly call it the ChatSPT:
Socratically Pretrained Transformer
I don’t want to be a KNOW-IT-ALL but the Socratic method is a bit different. In this (amazing) video you have the whole riddle in place and cut it into slices. In the Socratic dialogue you develop the riddle by inspecting a single thought in more and more detail …
Is it possible to structure this into a single prompt?
I tried the ball and box problem with GPT-3.5-turbo. Oh, dear. It needs a lot of hand-holding and keeps forgetting what it learned. Thanks for this.
INTERESTING video !
I noticed you fell into the same trap we all occasionally (regularly!) fall into - you start saying "I THINK the response are CORRECT"...
You THINK the response is CORRECT?
We have to challenge ourselves NOT to treat our OPINIONS as equivalent to FACTS.
"I think the response is correct" implies you KNOW the CORRECT answer when in fact you only have a hunch, and even YOU are making ASSUMPTIONS about HOW you held the box, and even how long it took to ship the box.
"I think I AGREE WITH the answer" is a bit more honest; you DON'T in fact "know" the CORRECT answer, nor are you pretending to be any sort of omniscient guru.
ALL you're saying is your OPINION - you happen to AGREE with the response as "reasonable"...
That's all we can say in light of the ambiguities - that the conclusion is REASONABLE.
Train of Thought/Chain of Thought Prompting is fast becoming one of the most fascinating aspects of GPT for many, many data scientists like myself !
Keep up the great videos !
Mark Vogt, Principal Data Scientist, Avanade
For the first Japan problem: GPT-4 can solve it even with the original prompt, without any clarification. Is this method applicable to GPT-4, or are all such settings performed internally in the GPT-4 engine?
Here is my problem: "Michael is a 31 year old man from America. He is at that really famous museum in France looking at its most famous painting. However, the artist who made this painting just makes Michael think of his favourite cartoon character from his childhood. What was the country of origin of the thing that the cartoon character usually holds in his hand?" Solve this:
Yeah, it is very applicable to GPT-4 too. But as you state, GPT-4 is already much better than 3.5 at solving these types of problems zero-shot. For longer, more complex problems it really makes a big difference, though.
@@AllAboutAI Thanks for the answer. I just read in another source that you can simply add "Take a deep breath and work on this problem step-by-step" and the result will be better. I applied it to your second problem, and it worked (so there was no need for the complex stuff shown in the video). Maybe you can find cases where even this does not work?
I am in my garage, I pick up a small ball and I grab a small box that is missing the bottom. I walk into my office and I put the small ball into the small box. Then I take the small box with me to the postal office. Here I put the small box into a bigger box and sent it to my friend in New York. Where is the ball now? Take a deep breath and work on this problem step-by-step
Why would we need to provide one-shot or few-shot examples? ChatGPT knows straight away.
Alice has 6 🍎. She throws away 2 🍎. She gives 2 🍎 to Peter and Peter returns 1 🍎 to Alice. How many 🍎 does Alice have?
Alice originally has 6 apples. Here's what happens step by step:
1. She throws away 2 apples: \(6 - 2 = 4\) apples left.
2. She gives 2 apples to Peter: \(4 - 2 = 2\) apples left.
3. Peter returns 1 apple to Alice: \(2 + 1 = 3\) apples left.
So, Alice has 3 apples at the end.
Even zero-shot, ChatGPT already knows.
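The apple walkthrough above is simple enough to check directly; a short trace confirms the model's arithmetic.

```python
apples = 6
apples -= 2  # Alice throws away 2 apples
apples -= 2  # Alice gives 2 apples to Peter
apples += 1  # Peter returns 1 apple to Alice
print(apples)  # 3
```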
Would it solve a crime if it were fed witness reports the same way it solves the riddle?!
Pretty much as in the Minority Report movie
For folks who understand GPT, the type of examples quoted in your CoT prompting is absolutely ridiculous. CoT needs to be done for real problems, not wordy riddles. What you demonstrated is just a GPT working demonstration. It does little to explain CoT in any way...
Using AI to identify the country of origin of a cartoon character ... good God, we're all doomed