Great job, Mark !!!
You've explained better than anyone that I've listened to in the past two years how to make A.I. agents do whatever I need them to, and how to fine-tune them to make the results even better.
I've subscribed and will be listening to your archives as well as waiting for new videos to be posted.
Thanks for the feedback James! Super appreciate hearing that - will keep trying to pump out value 🦾
Most comprehensive video on the topic I have seen so far. You are great. Thanks !
Means a lot to hear that! Thank you so much 🙏🏻
I work with Mark on a super exciting project. When I was searching for the right partner, I told myself I want to work with someone who is 10x more knowledgeable than me.
Mark is it, and much more.
❤
Very useful tutorial!
Great to hear! Thanks for the feedback 🦾
Mind Blowing, Excellent.
thank you! Much appreciated
Great job. Everything was just PERFECT. Keep up the good job!
Thanks so much! High standard to keep but thanks for the feedback 🦾🙏🏻
@@Mark_Kashef It's a good video on the intro of Meta Prompting. However, you are assuming the viewer knows as much as you know when you use the words "markdown" and "code block" and what asterisks and hashtags enable you to do or what they are. The majority of the population are new to AI and prompting. I think you need to put yourself in their shoes and ask yourself, will the man on the streets know about "markdown" and "code block" and what asterisks and hashtags are in terms of meta prompting?
@@charles120001 fair point! thanks for the feedback there Charles, will take it into account.
Great video. It's impossible to describe how much this technique will supercharge my business. Thanks 🙏 bro! Yes to more videos like these, cheers
Legit so happy to hear that - another one dropping tomorrow on this topic 🤙🏽
Great vid - appreciate the content - just starting to do some deeper dives into prompt engineering.
Thank you! Welcome to the rabbit hole of prompt engineering 😅
Hey @Mark_Kashef after so many months of looking for the right prompt technique, yours is the BEST!! ever. Thanks for sharing and for making it so easy to understand. I subscribed so I wont miss the part two of this video.
So happy to hear!! I’ve got at least two more videos coming out on this 🫡
Amazing how it went and did the # Role, etc. by itself. Amazing stuff, and Poe is a game changer too!
thanks for the feedback amigo! This is the way of the future haha
I have been utilizing this process and getting great results, and didn't know this was the official name.
This is very helpful, Mark. Thank you from Morocco
So happy to hear! Thanks for tuning in from Maroc 🇲🇦
Brilliant stuff; many thanks for your excellent professional info... You got a new sub!
Thank you so much for the feedback, and the sub Curtis!
Such a great introduction to meta prompting!
Great to hear it was helpful!
Wooowwwww... A lot of knowledge 💡Thanks for the gems 🙏🏿
You got it! Thanks so much for the comment 🦾
Love your style and approach bro
amazing to hear!! thank you so much
I'm impressed with how you explain things, like a real, genuine teacher. You've got me subscribed. Thumbs up; continue to provide more
that means a lot Shukur! I'll keep trying to give that value :)
Great information on prompting Mark - thank you for sharing!
always a pleasure to see you in the comments haha!
My favorite prompt eng back at it again!
*lazy prompt eng 🤣🦾
Very informative tutorial! 👍
thanks Mike!
greetings from Panama Canal 🙏
Just subscribed pure class
Your kind words mean so much - thank you Muhammad! 🦾
Hi Mark, next cool video from you which I just found. Thanks a lot.
Would you recommend to use Poe instead of using a specific Chatbot like ChatGPT or Claude?
I would only recommend Poe if you're someone like me who wants to test different outputs on a consistent basis.
The downside of Poe is that you don't have the extra features like Memory, Personal Custom Instructions, etc. that you have in ChatGPT
Excellent video, and it really gives me a good idea of how to use the AI to generate its own prompts. My question is: why do you ask it to output in markdown all the time, instead of just outputting in regular, normal words? Thank you, and new subscriber.
Thanks for the sub Scott!
Markdown and XML tend to perform better in LLMs, as the hashtags and bullets act as a proxy for the importance of the information.
For example:
# = a main header, most important
## = a subheader
### = a sub-title within the article
** = bolding
* or _ = italics
This basically helps communicate and structure the prompt so that it's easier for the LLM to distinguish the most important information from the least important
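As a rough sketch of that hierarchy inside an actual prompt (the section names here are illustrative, not a fixed template):

```markdown
# Role
You are a prompt engineer who writes bespoke, detailed, succinct prompts.

## Instructions
- Output the prompt you generate in markdown
- Output the prompt in a code block

### Notes
Use **bold** for the hardest constraints and *italics* for softer preferences.
```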
This is really great, thanks for the detailed explanation and walkthrough. One question - what if you wanted to incorporate additional input (upload a text file or paste some data) or some clarifying questions into the prompt? I did try a few different approaches, none with ideal results.
My pleasure!
On the upload front, I'll usually add a portion where I'll say 'hey, I'm going to upload this file or paste this data, XYZ is my dream outcome with it, and I need you to push back with clarifying questions about any gaps standing between me and that dream outcome.'
{insert example of dream outcome}
Something along those lines
Prompt engineering is great, especially when used in SEO and other similar areas.
This is great. I built a local prompt engineer agent with feedback. Maybe a step up from this?
Awesome! I just finished recording a video on how to integrate chain-of-thought into responses to provoke feedback from the user 🦾 it'll come out next week.
@@Mark_Kashef nice I’m looking forward to seeing your implementation.
Interesting, I have created something like this to create my prompts, but it was a lot of back and forth
I have a video coming this week with the back-and-forth aspect to help fine-tune the prompt creation :)
Do you have or can you create a video on voice caller prompts please? Outbound/Inbound if possible? PS: You f#cking Rock!
One of the next videos I’m releasing is how to apply this technique to building ai agents;
Taha and I will make a video on voice prompt design 🦾
A new direction with nice explanations. The idea will bring a revolutionary change in the traditional concepts of Prompt Engineering. I have already subscribed and liked the video. I also welcome you to my YouTube channel and hope to get a subscription from such a knowledgeable person like you. Thank you for enlightening us regularly with your promising videos.
Great video! This is how our prompt generator (Perfect Prompt Creator) and Professor Synapse works 💪 Top prompts and GPTs since Dec 2023 🤗🙌
Can you do a video about data analysis using ChatGPT? When I give it a CSV file with campaign data, for example, and ask for insights, it always gives wrong assumptions and creates calculations that are not accurate.
Definitely something I can add to my content list!
In terms of math, ChatGPT and LLMs in general are weak at calculations, since language models don't excel at maths.
That's why ChatGPT uses Code Interpreter to offload the math to Python and make sure it doesn't hallucinate.
I would personally use Claude Projects to do data analysis, as it currently has a lower rate of hallucination than ChatGPT.
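To make "offload the math to Python" concrete, here's the kind of deterministic calculation Code Interpreter would run instead of letting the model estimate; the column names and numbers are made up for illustration:

```python
# Toy campaign dataset; in practice this would come from the uploaded CSV.
campaigns = [
    {"name": "Search",  "impressions": 10000, "clicks": 250, "spend": 500.0},
    {"name": "Display", "impressions": 40000, "clicks": 200, "spend": 300.0},
]

def add_metrics(rows):
    """Compute CTR and CPC exactly in code, instead of letting the LLM guess."""
    out = []
    for r in rows:
        ctr = r["clicks"] / r["impressions"]   # click-through rate
        cpc = r["spend"] / r["clicks"]         # cost per click
        out.append({**r, "ctr": round(ctr, 4), "cpc": round(cpc, 2)})
    return out

for row in add_metrics(campaigns):
    print(row["name"], row["ctr"], row["cpc"])
```

The numbers come out of arithmetic, not token prediction, which is exactly why the calculations stop being "wrong assumptions".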
Can you provide the prompt that you used?
Part 1:
"you are a prompt engineer
you write very bespoke, detailed, and succinct prompts
I want you to write a better organized and put together prompt of this:"
**INSERT PROMPT SAMPLE HERE**
Part 2:
"instructions:
- output the prompt you generate in markdown
- output the prompt in a codeblock"
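If you wanted to reuse that two-part prompt programmatically, a hypothetical helper (the function name and structure are my own, not from the video) could assemble it around any rough draft:

```python
# Part 1 and Part 2 of the meta-prompt, taken from the comment above.
PART_1 = (
    "you are a prompt engineer\n"
    "you write very bespoke, detailed, and succinct prompts\n"
    "I want you to write a better organized and put together prompt of this:"
)

PART_2 = (
    "instructions:\n"
    "- output the prompt you generate in markdown\n"
    "- output the prompt in a codeblock"
)

def build_meta_prompt(draft_prompt: str) -> str:
    """Wrap a rough draft prompt with the two-part meta-prompt."""
    return f"{PART_1}\n\n{draft_prompt}\n\n{PART_2}"

print(build_meta_prompt("Summarize my weekly sales data in three bullet points."))
```

The returned string is what you'd paste into (or send to) the LLM of your choice.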
🔥🔥🔥🔥
Subbed
appreciate you!
Great content. Just found you; you got my subscribe. Question: I am new to all this. I need to do some large-scale research (over 8,000 entries) where the information is not aggregated anywhere. The only solution I can see is to call each contact and ask for the relevant information. (I need to get the name and email of the executive directors of all senior communities east of the Mississippi with resident populations over 50.) Any ideas?
thanks so much! means a lot and appreciate your sub
In terms of your task, there are multiple ways to slice it.
Route 1 (manual) - hire a virtual assistant offshore to go through LinkedIn profiles and/or search online directories to identify senior communities and log all this information for you in a Google Sheet.
Route 2 (semi-automated) - use an intelligence tool like ZoomInfo or Clearbit, which pretty much sell you access to many leads' work information.
Route 3 (automated) - use something like MeetAlfred to automatically target or reach out to individuals in your target market:
meetalfred.com/articles/at-ulink-linkedin-automation
Best ways that come to mind. Hope that helps!
Hello. The answer is a workflow with Make
@@thibaultmouillefarine795 could you be more specific?
Why can't LLM models have generic templates?
do you mean templates embedded within the training itself? I would imagine they’d rather not constrain the model and put it in a box that’s stuck to a few templates.
Congratulations! (pt-BR)
May I see a ChatGPT bot you built in the OpenAI store?
I don't have any actively in the store at the moment, all just personal or shared.
seo-optimized... so it's double-optimized?? 🧐🤔
meta optimized 🤓
I really do love legit people like you, Mark. God bless you for being an ORIGINAL knowledge giver!
Thank you so much!! Means a lot to hear, will keep aiming to provide more value 🚀
@@Mark_Kashef 👌🤘
Mark Kashef, what is your Fiverr profile?
It's here!
Link: www.fiverr.com/datascience2go
Please correct your video timestamps
noted with thanks
just use Claude's prompt generator and it's all solved
For Claude prompts maybe 🤔
Llama 3 prompting will look vastly different; this also removes all dependence on these tools and puts it on the LLMs themselves
The idea is somewhat interesting, but neither as scientific as expected, judging by the self-professed professional description, nor as groundbreaking as the clickbait suggests.
An idea for something closer to mathematically tested prompting would be generating the instructions in JSON format and structuring them in XML
thanks for the feedback
You are not being economic with input tokens. "I want you to write" is not necessary; a simple directive is all that is needed: "Write ..." Conversational, colloquial English is superfluous. It's a vector database, so break up long sentences into short, direct, active-voice instructions. Surely your research has taught you that. The responses should also be directed to be economical in output tokens spent, unless otherwise directed.
thanks for your feedback - this video is meant to focus on the 'how' of using the technique rather than how to optimize it for LLM production-based applications. If you're using a front-end with no limit on token usage, being superfluous with wording isn't a consideration
@@Mark_Kashef economy in programming and clear, concise thinking is never optional when it comes to constructing logic which is what we are doing. This very much is part of the "how" to use efficiently. There is always a cost when we are not efficient and economical
@@hitmusicworldwide there's more than one way to do one thing though, chill
I think prompt engineering should be considered an extension of the way we speak rather than the way we code. As the models become more proficient with the nuances of language, our superfluous speech patterns will become the norm, just as in a standard human-human conversation.
That's not saying communicating in a code style isn't efficient now, just something I think future us will be leaving behind as the technology evolves.
@@jpheeter completely agree, love this perspective! Appreciate you chiming in.
Then the "hawk tuah" girl does a video that obliterates the rest of us by going viral. I really question reality. It's mostly dumb luck at this point, for everything
hopefully this video is more valuable than that 🤣
I heard that the output in these bots is limited compared to their native platforms
The front-end ChatGPT and Claude AI have an internal system prompt that alters the LLM’s behaviour!
@@Mark_Kashef I mean on Poe, output tokens are diminished
@@abboudkarim if it's a very long output, yes, but they just released an artifacts feature like Claude's to get around that