Easily one of my favorite GenAI channels. Doesn't just use the examples with the releases. Actually produces real world feasible use cases and very insightful information regarding new drops.
Also doesn't try to sell some bs course
Yes, I totally agree. Delving into the architecture of prompt engineering and explaining it as frameworks is very helpful. Truly a lightbulb moment for me, these are patterns I’ve adopted and move into production very quickly. Thanks Dan for sharing this 🎉
Feel like I learn so much watching your vids man. Many thanks!
Same here, and his blog is even sharper… a tip to all: buy his time software app. $5, and I use it daily to keep my head in my task and eat the frog.
Thanks for making some of your vids more accessible to the general public.
Dan, you are a true architect. The way you explain it is pure logic that creates such a powerful way to evolve anyone's work to a new level. There are so many errors with other platforms that this approach could actually resolve; glad you truly understand AI and what it's for! Thanks!
Awesome. You are doing fantastic mentoring work, very useful for future Gen AI projects. Thanks for your effort.
Non-coder here. I understand that domain expertise combined w/agentic tools, AI best practice workflows and repeatable frameworks will give me a solid starting foundation. Can't wait to absorb your other videos.
Please make a course, I will buy in a heartbeat. This is pure gold.
(my additions)
Judging prompts - scoring, evaluating.
Editing prompts - applying changes in structured data in more than one place, i.e. creating a diff.
Improvement prompts - different from a style transfer, as you are asking for new unseen information, or for information to be removed, depending on what the LLM thinks will improve the output.
Completion prompts - old school: complete this sentence, e.g. Copilot, or even stock market prediction, i.e. what happens next in this linear series.
Category Expansion prompts - might be a subtype of Expansion, but rather than expanding specific information you are data mining to extract related topics or graphs of topics and attributes, e.g. generating 100 ideas that will then be scored. This is related to reasoning, but it's something that comes before reasoning can be applied.
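The extra categories above could be captured as simple reusable templates. A minimal sketch; the template names and wording are mine, not from the video, and the actual LLM call is left out:

```python
# Hypothetical templates for the commenter's extra prompt categories.
# Each maps a category name to a format string with named fields.
PROMPT_TEMPLATES = {
    "judging": "Score the following output from 1-10 on {criteria}:\n{output}",
    "editing": (
        "Apply this change everywhere it applies and return a unified diff.\n"
        "Change: {change}\nDocument:\n{document}"
    ),
    "improvement": (
        "Suggest information to add or remove that would improve this draft:\n{draft}"
    ),
    "completion": "Continue the following sequence:\n{sequence}",
    "category_expansion": "List {n} related topics or ideas for: {topic}",
}


def build_prompt(category: str, **fields: str) -> str:
    """Fill in the template for a category; raises KeyError on unknown names."""
    return PROMPT_TEMPLATES[category].format(**fields)
```

For example, `build_prompt("judging", criteria="clarity", output=draft)` produces a judging prompt ready to send to whatever model you use.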
One of the GOATs in the industry. Subbed.
What a great AI channel! I’m in. Thanks
Amazing work. Thanks again for this. Keep sharing and working on these valuable insights.
Fantastic video! Great and detailed explanation of the framework! It would be great if you could expand with some guidelines on the differences in crafting each type of prompt; I assume each of them has a more specific logic to follow to optimize the results.
The idea is great, but I would like this better if you'd clearly categorised which tool is better for which use case. Great content anyway ❤
OpenAI hasn't put out anything competitive. Gemini 2.0 is by far the biggest release this season.
Thank you this was extremely helpful!
bro is cooking fr
AI transforms effort into excellence 🔥
What about specific code assistant prompts? Would they already fit in this list?
Would love to know how and where you organise and save your prompts in accordance with these categories.
Great video. Curious to know which models do you think are best for each use case?
How can we benefit from each other and share prompts in those categories?
Interested in what you think are the best models for each type (or good enough and cheap).
Another awesome video. Wondering what hardware you are running for Llama 3.3... video looked like MBP?
How do I create an LLM agent that feeds on my university notes and becomes a tutor on them?
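One possible starting point for the question above: retrieve the most relevant note chunks for a question, then assemble a tutoring prompt for whatever model you use. A hypothetical sketch with naive keyword retrieval (no embeddings, no real LLM call); all names are illustrative:

```python
# Naive retrieval-augmented tutor prompt over plain-text notes.
def chunk_notes(text: str, size: int = 400) -> list[str]:
    """Split notes into chunks of roughly `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Rank chunks by word overlap with the question; return the top k."""
    q = set(question.lower().split())
    ranked = sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)
    return ranked[:k]


def tutor_prompt(question: str, notes: str) -> str:
    """Build a prompt that grounds the tutor in the retrieved note chunks."""
    context = "\n---\n".join(top_chunks(question, chunk_notes(notes)))
    return f"Using only these notes:\n{context}\n\nTutor me on: {question}"
```

In practice you would swap the keyword overlap for embedding similarity, but the shape (chunk, retrieve, ground the prompt) stays the same.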
Great work.
Great mental model!
jus wanna say I love u bro
For your information… at least one non-coder watching and learning.
I am also becoming a fan of yours
Only one with the balls to mention Phi-4. It's cracked, honestly. I'm testing on aider currently: >40% so far, but still 30 cases left; it does even better comparatively on the refactoring benchmark, but I need to retest those with proper context windows. Can you confirm a 16k context window on Phi-4?
Compression - that is what my email client (hardmail ai) does, and I didn't even know what it was called.
I'm commenting cause I'm a fanboy.
Nice taxonomy of prompts.
Plz plz plz plz invest in a good mic!!!!
Maybe get a better speaker