Summary
1. **RAG (Retrieval Augmented Generation):** Augmenting the language model's responses by retrieving relevant information from a knowledge base (db) and combining it with the model's output.
2. **Chain of Thought:** Prompting the model to produce intermediate 'thoughts' 💭 one chunk at a time on the way to the actual answer. The language model arrives at your desired answer through reasoning and logic.
3. **ReAct (Thought, Action, and Observation):** Different from chain of thought, this involves both a private knowledge base (db) and the public language model (llm) data. If information isn't in the knowledge base, it falls back to the llm's pre-trained (public) data for results.
4. **DSP (Directional Stimulus Prompting):** The latest method, which embeds a specific hint in the prompt to steer the model toward the answer.
The explanation is good as long as you know the difference between knowledge-based data (internal or external) and private or public sources.
1. RAG is in fact knowledge-based, but it is an external knowledge-based approach: it depends on databases and search engine results relevant to your industry, whether private or public.
2. CoT is reasoning-based, with no knowledge-based component.
3. ReAct can be reasoning through CoT and acting through knowledge sources that could be internal, external, private, or public.
4. DSP relies neither on reasoning nor on external knowledge. This approach directly takes the hint and applies pre-trained knowledge (the LLM's internal knowledge).
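To make the four styles concrete, here is a sketch of what each prompt might literally look like as text. The wording, the retrieved facts, and the tool names are all illustrative, not taken from the video:

```python
# Illustrative prompt skeletons for the four techniques (hypothetical wording).

# 1. RAG: retrieved passages are pasted into the prompt as external context.
rag_prompt = (
    "Context (retrieved from internal docs):\n"
    "- Q3 revenue was $4.2M.\n\n"
    "Using only the context above, answer: What was Q3 revenue?"
)

# 2. CoT: the prompt asks the model to reason step by step; no external data.
cot_prompt = (
    "A train travels 60 km in 45 minutes. "
    "What is its speed in km/h? Think step by step."
)

# 3. ReAct: the prompt interleaves Thought / Action / Observation turns.
react_prompt = (
    "Answer the question using the tools available.\n"
    "Thought: I should look up the current policy.\n"
    "Action: search('refund policy')\n"
    "Observation: Refunds are allowed within 30 days.\n"
    "Thought: I now know the answer.\n"
    "Final Answer: Refunds are allowed within 30 days."
)

# 4. DSP: a short hint steers the model's internal (pre-trained) knowledge.
dsp_prompt = (
    "Summarize the article below. Hint: focus on cost savings and timelines.\n"
    "Article: ..."
)
```

Note how only the RAG and ReAct prompts pull in knowledge from outside the model, matching the breakdown above.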
So simple and focused on the main idea and key points. Thank you for your straightforward explanation!
This is the most intelligent video I have seen on Prompt Eng.
A prompt is a specific instruction or query given to an LLM (Large Language Model) to perform a task. A task can be: Providing information, summarizing, analyzing, planning, reasoning, coding, generating, etc. Effective prompt engineering involves iteratively refining these instructions or questions to achieve a more accurate, relevant, or desired outcome from the LLM.
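The iterative-refinement part can be shown with three successive versions of the same prompt, each adding constraints that narrow the output (the wording here is just an example):

```python
# Iterative prompt refinement: each version adds constraints that make
# the desired output more specific.
v1 = "Summarize this report."
v2 = "Summarize this report in 3 bullet points."
v3 = (
    "Summarize this report in 3 bullet points, "
    "each under 15 words, for a non-technical executive."
)
```

In practice you would run each version against the model, inspect the output, and keep tightening the instruction until the result matches what you need.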
@IBM Technology I got the theory but I want to see an example of the actual resulting prompt in each of the 4 methods. Nothing beats learning by example
Absolutely :)
I understand the main idea, but I think the examples and explanations weren't clearly thought through and felt vague. I didn't get a clear sense of how to apply these techniques effectively in real-life situations. However, I appreciate the intention and the effort put into it.
Exactly. Half-baked answers. Good effort, though.
RAG --> is primarily about improving factual correctness by retrieving external information.
CoT --> enhances logical reasoning by encouraging the model to break down problems into sequential steps.
ReAct --> adds a layer of interaction, allowing the model to not just reason, but also perform actions based on that reasoning.
DSP --> subtly influences the direction of the model's output through carefully designed prompts.
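The RAG line above ("improving factual correctness by retrieving external information") can be sketched in a few lines. The retriever here is a naive keyword-overlap scorer and the documents are made up; a real system would use embeddings and a vector store:

```python
# Toy RAG sketch: retrieve the most relevant snippet, then build an
# augmented prompt. Documents and retriever are illustrative only.

docs = [
    "Our refund window is 30 days from purchase.",
    "Support hours are 9am-5pm EST on weekdays.",
    "Premium plans include priority support.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by how many query words they contain."""
    words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str) -> str:
    """Paste retrieved context ahead of the question."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nUsing only the context, answer: {query}"

prompt = build_prompt("What is the refund window?")
```

The model then answers from the pasted context rather than from its pre-trained weights, which is where the factual-correctness gain comes from.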
How blessed am I to watch this AMAZING video, thank you.
I like that they are labeling every interaction with the LLMs: prompt engineering, RAG, CoT, ReAct, DSP. These are the basic building blocks, and as a developer I share what many have already seen and are working on.
A higher programming language, no longer constrained to direct the physical and structured layers to compute results. This programming language will skip those layers entirely to work directly on the business problems. It might be named maieutic.
Supporting critical questions in a new way; fast will no longer be the measure. Fast will be just a side effect.
The key will be the transition from RDBs, repositories, rudimentary data input, and rudimentary finance procedures to the next abstractions that will facilitate this smart agility.
Prompt Engineering is what will initiate the singularity. Questions are the most powerful statements.
Awesome video. Great instructor.
Excellent content and commentary. Thanks for sharing your knowledge.
A realistic discussion such as this should include the old concept that still holds true: garbage in, garbage out.
Beautiful explanation!!
simple, direct, and on point. Thx a ton
Very good explanation
I will use it in my team to introduce those techniques to our new members! Thanks! :) (And congrats on mirror writing so clearly: what a skill ;) )
Thank you to all the people who were involved in the making of this video and content. Now, I know the 4 methods of Prompt Engineering.
It is wild to me how engineers view the research process. Honestly, they make it more complicated than it needs to be.
better label: "academic engineers"
Thank you !
So many factual inaccuracies and incomplete explanations. Can't believe the official IBM channel is producing such content.
Really nice 👍
This lady is very smart
Absolutely amazing
Nice, short clip, explaining such mega-areas in 12 minutes
Instructions unclear, i now have a cat tattoo on my face
In computer science research, which encompasses fields such as computer science, computer engineering, and artificial intelligence, ethical standards have been neglected for at least two decades. A recurring problem is the renaming of well-established concepts without properly acknowledging their origins. For example, “prompt engineering” is simply a renaming of the concept of relevance feedback, but existing work on relevance feedback often goes unnoticed. This trend is pervasive: in deep learning, research unrelated to deep learning is frequently ignored and thus avoids comparison with lightweight or frugal methods. Random projection has been renamed compressive sensing. Even basic concepts like the dot product, correlation and convolution have been renamed to create an illusion of innovation. The examples are numerous.
Where are the intellectuals whose responsibility it is to denounce such abuses?
Examples would be useful! Thanks
That's amazing
Love you guys
The techniques apart from RAG look like a derived version of RAG itself. The line of separation is kind of blurred IMO.
They did a pretty bad job of explaining ReAct, which is probably why it didn't feel all that different. The secret sauce of ReAct is using custom tools (not just a public or private database) and a reasoning loop: the model evaluates the output from a tool, decides whether that is enough to give a final answer, and if not, continues to use tools to get more information until it does.
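That reasoning loop can be sketched minimally. Here `fake_llm` stands in for a real model call and the two tools are hypothetical; the point is the Thought/Action/Observation cycle that repeats until the model emits a final answer:

```python
# Minimal ReAct-style loop (sketch): the "model" picks a tool, observes
# the result, and repeats until it can give a final answer.

def lookup_db(query: str) -> str:
    # Pretend private knowledge base.
    return {"capital of France": "Paris"}.get(query, "NOT FOUND")

def web_search(query: str) -> str:
    # Pretend fallback tool for when the database has nothing.
    return "Paris is the capital of France."

TOOLS = {"lookup_db": lookup_db, "web_search": web_search}

def fake_llm(scratchpad: str) -> str:
    # A real agent would call an LLM here; we script its decisions.
    if "Observation: NOT FOUND" in scratchpad:
        return "Action: web_search[capital of France]"
    if "Observation: Paris" in scratchpad:
        return "Final Answer: Paris"
    return "Action: lookup_db[capital of France]"

def run(question: str, max_steps: int = 5) -> str:
    scratchpad = f"Question: {question}"
    for _ in range(max_steps):
        step = fake_llm(scratchpad)
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        # Parse "Action: tool[argument]" and execute the chosen tool.
        tool, arg = step.removeprefix("Action: ").split("[", 1)
        result = TOOLS[tool](arg.rstrip("]"))
        scratchpad += f"\n{step}\nObservation: {result}"
    return "gave up"
```

The loop terminating only when the model judges its observations sufficient is exactly the part the video glossed over.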
Does this apply to all practical language models currently? This is how I should rizzz up my chat4 bot?
Would be helpful if you could come up with a real-time example and usage. Maybe in parts.
Nice video, thanks guys! Quick question: are all your engineers at IBM left-handed? You seem to have a bias for left-handed engineers 😅
The image is reversed. They have a video explaining how they make lightboard videos.
The view you are seeing has been flipped. There is a video on this channel, or on Steve Brunton's, where they explain how they make these videos. Also, if you assume the rule of wearing your wedding band on your left-hand ring finger applies, then you are looking at the marker being in his right hand.
They are all right-handed. The camera is behind them and is recording them facing and writing on some sort of mirror that makes their markers glow. Almost like an old school SmartBoard, but as a mirror.
Are they writing backwards? How the heck does this work, lol. Also, you can just tell she's so intelligent. She has a really good vibe about her. Anyone who gets to work with her is very, very lucky.
The lady's explanation always confuses me, but I still appreciate the intention.
Thx
So basically this is about ‘structuring’ your prompts in a way the LLM has to process your input…? And who is expected to formulate these ‘natural language’ questions…?
Wish you had shown more specific examples
Sorry for my slowness, but the only thing I could understand was RAG. The other ones are not clear.
How is it generating responses if I only have to train it with the actual data?
So ReACT is just RAG with two databases?
I was distracted by how you did the drawing without having to write backwards... lol
Good short/focused content. But the examples/context could have been a lot better.
But they all sound essentially the same? Please tell me the nuanced differences between the four.
I have two questions. One, is IBM going to "decouple" from any dependency or vulnerability via China? Two, could IBM get back into the PC market? They were in rough times when they divested from their old PC, and sold it off as Lenovo. But they could really bring a high-end machine to market, and keep it U.S. developed.
The US dependency by IBM is also problematic. The pervasive and unethical spying by the American govt should have any company that relies on AI worried.
😮 Giving the same example for DSP and CoT makes it confusing
ReAct isn't helping with the prompt but with the results... Misleading title
She's good 👍
Excellent presentation
Lack of examples and unclear information for beginners killed this video for me
example prompts would've been helpful
Apple Intelligence seems to work on this model
Instantly confusing and unclear. The example didn't even flow in relation to what she was saying.
top
I never even thumbed down a video before. Content was lacking. No prompt examples.
Lol😂
Agreed, this was very lame: terrible examples and an explanation lacking in specificity and clarity.
It's click bait.
These videos are not for AI engineers, they are for business people that need to understand the tools and techniques used in generative AI. If you want real media AI content, go check out machine learning Street talk. This is not the channel for you.
Awful video for beginners
Really bad examples, couldn’t they ask the AI to give better ones?
This is really terrible! RAG is not a method of prompt engineering, it's an architecture! And as far as the prompt explanations, they are also really poor. No wonder nobody uses IBM anymore
That was confusing due to the inferior examples given. No, you didn't succeed in explaining it to an 8-year-old.
Superficial and pretty useless... Should have given an example prompt for each.
I really didn’t get value from this video.
This is completely inaccurate and confusing. IBM should take this down and check for accuracy of their content before putting this out there
Can you outline, briefly, the inaccuracy?
They're trying to simplify it
Ma'am loves the annual-earnings example very much 🥹
Glad to hear that