Maybe you could start your project planning with the O1 version, focusing on getting all the details like file structures, but don’t ask it to generate all the code right away. Instead, ask for explanations of the class diagram and details on what each class, function, and module does. Then, continue with the GPT-4o-mini approach. This might work better because generating the code when everything’s clear isn’t too challenging. Plus, you could save more of your weekly limit. It’s worth trying out to see if the experience improves since too much back-and-forth might lead to overthinking. Try it, and share the results if it works! Great content as always, thank you.
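The two-phase split described above could be sketched like this (the helper names and prompt wording are my own assumptions; only the message payloads are built here — plug them into whichever API client you use):

```python
# Hypothetical two-phase workflow: plan with o1, then generate code with
# gpt-4o-mini. Only the conversation payloads are constructed here.

def build_planning_messages(project_description: str) -> list[dict]:
    """Phase 1 (o1): ask for structure and explanations, not code."""
    return [{
        "role": "user",
        "content": (
            f"{project_description}\n\n"
            "Produce the file structure and a class diagram, and explain "
            "what each class, function, and module does. Do NOT write code yet."
        ),
    }]

def build_codegen_messages(plan: str, target_file: str) -> list[dict]:
    """Phase 2 (gpt-4o-mini): generate one file at a time from the plan."""
    return [
        {"role": "system",
         "content": "You implement code exactly as specified in the plan."},
        {"role": "user",
         "content": f"Plan:\n{plan}\n\nWrite the full code for {target_file}."},
    ]

plan_msgs = build_planning_messages("A CLI tool that renames photos by EXIF date.")
code_msgs = build_codegen_messages("(plan text from phase 1)", "main.py")
```

Generating each file in a separate phase-2 call is also what keeps the context small and preserves the weekly o1 limit.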
The issue where the model isn’t generating all of the code likely stems from maintaining a large context. I refer to this as "LLM dementia," which I have also encountered when working with Anthropic.
What you need to do is request complete documentation of the code, then start a new chat with the documentation and code included. From there, you can proceed. The challenge you're facing is that you're unable to attach files for preview.
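The "document, then restart" step could look something like this (the function name and prompt text are mine; the point is simply that the fresh chat is seeded only with the documentation and current code, not the old transcript):

```python
# Hypothetical context reset: instead of continuing a long chat, start a new
# conversation whose only context is the generated docs plus the current code.

def fresh_chat_seed(documentation: str, code: str) -> list[dict]:
    """Build the opening messages for a brand-new conversation."""
    return [
        {"role": "system",
         "content": "You are continuing work on the project described below."},
        {"role": "user",
         "content": (
             f"Project documentation:\n{documentation}\n\n"
             f"Current code:\n{code}\n\n"
             "Continue from here."
         )},
    ]

seed = fresh_chat_seed("Module overview: ...", "def main(): ...")
print(len(seed))  # two messages: system + user
```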
Interesting: for "how many times does the letter t appear in the word strawberry" you get "The letter 't' appears once in the word 'strawberry'." Great video, you are a great source of breaking news.
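That one is actually correct, and easy to verify programmatically with Python's built-in `str.count`:

```python
from collections import Counter

# Verify letter counts in "strawberry" directly.
word = "strawberry"

print(word.count("t"))  # → 1 (the famous question is about "r", which is 3)
print(word.count("r"))  # → 3

# Per-letter tally for the whole word:
print(Counter(word))
```

So the model's "once" answer for "t" is right; the classic failure case is the "r" count.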
In my experience, it's much better at coding and debugging. GPT-4o constantly gives me answers that don't really matter to the problem, while o1 can always find the problem from the first feedback.
Great video. 7:06 video starts from here
Thanks man.
That was insightful. It seemed like kind of a pain to use overall, but the result versus the time spent was still fairly impressive!
Great video! Thanks for the practical insight.
What screen recording software do you use?
screen.studio
Buckle up for a rapid-fire rundown:
1. Understand O1's "Chain of Thought" process to provide effective instructions.
2. Break down complex tasks into smaller, manageable steps when using O1.
3. Provide clear and concise instructions to help O1 understand your needs.
4. Double-check O1's work, as it is still under development and may require feedback.
5. Stay updated on O1's development and explore its potential applications in your field.
A simple question: what time is it? ... As a language model, I have no idea. ... Where are you? I have no idea either. Another one: sum 0+0 and explain how you got the result.
it isn't reasoning, it's comparing
comparing and evaluating. That's what you do when you reason too.
Reasoning quantifies -> compares, then "weights" -> names with a higher-order artifact. Safe to say, this thing is reasoning.
@@TheHouseOfBards Clearly a matter of perspective :) I'd say those things are somewhat achieved when using a spreadsheet.
Why are different YouTubers saying it's getting the strawberry question right and others wrong? Which is it?!
I think the output depends on how you are asking the question and some internal settings. Here is the chat session I got the response in:
chatgpt.com/share/66e4c82c-e6f8-8001-abe6-ad5e795c3a63
mine says 1 t
lmao if you need to ask for the complete code, that means you have no experience with software development at all
I love how I keep predicting the dates exactly, yet nobody notices...
Remember this comment?
🤖 👁️ 🍓 Remember, remember the 12th of September,
The Strawberry, Reason, and Mind.
Orion’s path, through logic’s math,
Shall soon its breakthroughs find.
The Cosmic Glitch, Mrigasira Nakshatra, holds the Clue for You. 🙏
Share the code for the web app for PDF chat
That is coming soon; I'm making some changes and then will release it as part of localgpt.