AI will be ready when you type "make an app that tracks finances" and it says "Nah, that sounds really boring."
We were at that point a while ago where chatGPT would refuse to do things because they seemed too tedious
@@catdisc5304 i actually believe they trained the AI that way to save computing power
and it integrates an api in the app 💀💀
And then you gotta convince it💀
@@Spiderfffun probably lmao
The end of the statement should have been 'fr fr'.
😂
fr
That’s some RNS
Change != To nah
On god
Glow up and glow down for increment and decrement has to be the biggest w for gpt so far
yap: comment
beta: mutable variable
sigma: immutable variable
Data types
vibe: boolean
stack: integer
sauce: double
quote: string
squad: array
facts: dictionary
Conditional statements
vibe check: if
bro did not pass: else
Loops
bussin': while
hits different: break
hot take: continue
body count: loop iteration variable name (best practice)
Functions
main character: main function
dms: function
slide into dms: function call
slide back: return
understood the assignment: program completed successfully / return 0
Input/Output
flex: print
left on read: user input
Operators
drip: +
lack: -
combo: *
ratio: /
no cap: ==
cap: !=
Classes
squad goals: class
blud: friend class
highkey: public member
lowkey: private member
guarded af: protected member
cook: create an instance
cooked: deallocate an instance
Exceptions
yeet: throw
red flag: error
send it: try
caught in 4k: catch
last chance bro: finally
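For illustration, here is a toy sketch of how a small subset of that keyword list could be mapped onto JavaScript by plain phrase replacement. The ZLang sample, the chosen JS equivalents, and the transpile helper are all assumptions made up for this sketch, not the interpreter from the video.

```js
// Toy "transpiler" sketch: swap a few of the slang keywords above for JS.
// Multi-word phrases are listed before their single-word prefixes,
// so "no cap" is replaced before a bare "cap" ever could be.
const keywordMap = [
  ['bro did not pass', 'else'],  // else
  ['vibe check', 'if'],          // if
  ['slide back', 'return'],      // return
  ['no cap', '==='],             // ==
  ['cap', '!=='],                // !=
  ['beta', 'let'],               // mutable variable
  ['flex', 'console.log'],       // print
  ['yap', '//'],                 // comment
];

function transpile(src) {
  return keywordMap.reduce(
    (code, [slang, js]) => code.split(slang).join(js),
    src
  );
}

const zlang = `
yap a tiny sample program
beta x = 1
vibe check (x no cap 1) { flex("W") }
bro did not pass { flex("L") }
`;

console.log(transpile(zlang)); // prints the JavaScript equivalent
```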
underrated comment
Brainrot language 😂
this much keyword in a brainrot lang would cause my heart to stop. please never let this happen...
“Else: Bro did not pass” is hilarious lmao
skibidi: toilet
Perfect summary at the end. Until we can just unit test its output and trust it to work through its bugs on its own, we must take the time to understand everything it writes, which often isn't saving us much time or energy in the end.
you could also automatically feed the output back into the ai
Check out Pythagora
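A rough sketch of the "test it, then feed the failure back" loop suggested in this thread. The file name, model choice, and prompt wording are placeholders; the only real APIs relied on are Node's child_process and the openai package's chat.completions.create call.

```js
// Run the (hypothetical) generated file; if it crashes, send the error text
// back to the model for another pass instead of pasting it in by hand.
const { execFileSync } = require('node:child_process');
const OpenAI = require('openai');

const client = new OpenAI(); // expects OPENAI_API_KEY in the environment

async function runAndReport(file) {
  try {
    execFileSync('node', [file], { stdio: 'pipe', encoding: 'utf8' });
    return 'it ran without crashing';
  } catch (err) {
    const res = await client.chat.completions.create({
      model: 'gpt-4o', // placeholder model name
      messages: [
        { role: 'user', content: `This program crashed:\n${err.stderr}\nPropose a fix.` },
      ],
    });
    return res.choices[0].message.content;
  }
}

runAndReport('genz.js').then(console.log); // genz.js is hypothetical
```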
For this reason I actually find it to be a great tool. I am great at problem solving and getting things done when I know what I'm doing; I just have difficulty finding a starting point. So something like this, which can give me a starting point that I can then expand on, is a huge time saver.
@@camric7 at that point it's just generating a template though
4:34 the fact that one of the suggested prompts is "how many rs are in strawberry" is hilarious
The ideas it had for a programming language were hilarious. Plus points for creativity. It didn't get the interpreter right, but honestly most juniors would have a hard time doing that too. It is at the level of a junior developer who has read a lot about coding but is not really good at coding. This is super useful sometimes. I am an experienced developer who has to touch many different areas; if I have to do something I am not really proficient in, it is much easier to ask ChatGPT instead of googling and reading docs. E.g. I can ask "what would be the idiomatic way of solving X in language Y with library Z" and it will give me an answer in seconds that would take me minutes to google.
It doesn’t go deep, but it does have a broad knowledge.
o1 > 4o because o1 can actually read your whole prompt and not literally get dementia. 4o just goes like uhhhhhhhhhh.. nah bro
4o mini: dementia
4o: smarter still dementia
@@brusdhdfhdalt1003
4: no memory
3: no response
@@brusdhdfhdalt1003 true
people when the word predictor predicts wrong:
i think the "i glowUp" statement needed a "periodt" at the end, that's where the issue must have been.
Agree, and the AI's second debugging response actually recognized this, but failed to fix it. This is why you take the time to read the manual. Edit: Conner said in another comment that this also didn't work
I had GPT explain an RxJS code snippet the other day and it got it entirely wrong. The problem was that it was very confident in its answer. I ended up having to learn the concept by myself.
Which model did you ask?
How did you know it was wrong if you didn’t understand it yourself? 😂
@@Donttouchthesnow research after gpt?
Yeah, shit happens all the time. I give him a code snippet of JSX or TSX and he starts making shit up out of his ass to explain a random point in the code that either doesn't matter or is just straight up wrong. When you ask him to solve the problem he just created, he just fucks up more and more, and at the end gives you a pile of frankensteined garbage that you cannot even develop anymore because the dependencies conflict with each other.
No, not learning a concept all by yourself? Goodness... What ever will we do?
Periodt is a slang way of saying Period with a "tuh" or "t" sound at the end.. in very diverse environments you'll hear this saying used predominantly by younger women. It's another way of saying something is final. For example, Girl A: "Her outfit looks great!" Girl B: "Periodt!" meaning to strongly agree. It reinforces: to strongly affirm or agree without any opposing argument(s). And typically is said after the sentence to further emphasize what it's reinforcing. In that the girl is slaying her outfit or killing-it, respectfully.
The conclusion is correct, I have the same feeling. Right now, AI can be good for getting ideas, but if it does something wrong, most of the time it gets stuck... and at that point it's much faster to write the code entirely yourself, instead of debugging its code (which can include many issues).
Would love for you to try this again when o1 gets fully released instead of just o1-preview.
I NEED ZLang.
We already have that it's called bussin
no cap fr fr
Code with GPT, debug with Claude
exactly, and make the frontend with v0 :))
backend with Devin! >XD *_MUAHAHAHA!!!_*
Sure, Claude, because I like to wait 5 hours after 3 messages or spend a fortune per month because I need access to all the models...
@@catdisc5304 same bro!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! fun!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
buy me claude, pl0x!111!!11!!1!!!1!!!
Can you tell it to execute its own examples and update the interpreter or example as needed to get a final product that works?
it cannot execute anything, chatgpt is a word predictor. it doesn't do that, it literally works by predicting what the most accurate response would be to a prompt.
squad Car being an "extra Vehicle" makes too much sense god damn it lmao
This is really interesting. Using a combination of Claude 3.5 Sonnet and GPT-4o via ChatGPT, I was able to write some rudimentary but very functional 3D rendering code in Rust. Even included a simple lighting/shading system.
Now I'm tempted to recreate using o1.
I made pretty much the same observations as you did. For simple code snippets, AI is adequate most of the time. But once you demand something more complex, there are inevitably issues coming up, and the AI almost always fails to solve them. It gets even worse when it starts hallucinating functions or properties which simply don't exist, and mentioning that to the AI usually just leads down a rabbit hole.
At that point it's usually best to work with what you got initially and do the follow-up steps yourself. Maybe still keep asking the AI for help with smaller sections, but do not trust it with the entire task :)
Making humans obsolete Alpha 0.1.1
How long did it take you to write the interpreter yourself? A while can be anything from 30 minutes to hours to days.
It was a while ago, but I think it took a day or two (although I was also filming, getting constantly distracted, etc.). Getting the basics to work was pretty quick, but some of the edge cases like recursion took a bit of time IIRC.
bop being a for loop is by far the funniest.
The first thing you asked is something I've asked ChatGPT almost daily, the exact same thing. One time I was even able to get it to make a settings menu, add music to it, and add a dark mode and an AI difficulty mode, so you could pick easy, medium or hard when playing against it.
Got the ad for your site before watching this video 😂
When debugging, you have to be more helpful. For example, ask 4o to add trace statements so you can give it a more detailed error report. Once you have the trace, go back to o1
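A hedged sketch of what those trace statements could look like; runLine and the wrapper name are hypothetical, not anything from the video's interpreter. The point is that the trace pins down the exact line and error before you go back to o1.

```js
// Wrap whatever executes each line so every line and the first error it hits
// get logged, giving the model a concrete failure point instead of "it broke".
function withTrace(runLine) {
  return (line, lineNo) => {
    console.log(`[trace] line ${lineNo}: ${line}`);
    try {
      return runLine(line, lineNo);
    } catch (err) {
      console.error(`[trace] failed at line ${lineNo}: ${err.message}`);
      throw err;
    }
  };
}
```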
You probably should have used o1-mini, it's better at coding. But anyway, I think all the models (Claude, GPT...) have the same issue: they generate amazing code mixed with dumb mistakes, and once they fail on something, they get stuck in an infinite loop of attempting to fix => failing or breaking another thing... And as a developer you would need to spend a lot of time on manual intervention, as you are not familiar with the code, and probably not even with the concepts it uses... So I think writing code yourself and using AI as a tool to help speed things up is better than prompting.
Why is mini better at coding?
@@cdmichaelb it's fine-tuned specifically for coding
I generate dumb mistakes mixed with a little bit of code
I got garbage out of o1-mini.
I expect these models to get much better once they're able to autonomously write, run, and test the code piece by piece.
Writing an entire program and only beginning to test once every feature is implemented makes it nearly impossible to debug.
I think you have to start fresh; if you just copy and paste on top of the other chat, it will remember the bad habits from the first one.
Yeah. Sometimes the data in the context window is the issue. I've started doing this as well and it is getting stuck way less often.
Well it's indeed good at CSS animations & timing functions
What's really cool with something like this is using it with Cursor. I have found that most of the time GPT or Claude or any other AI tends to get you about 70% there. Then you place your code into Cursor, and with the context of your codebase, Cursor will find the errors really fast and be able to get it fully functional so you can build off of it.
GPT-4o was a big leap; it supported attachments like zip files and images
That’s fair, although for the purpose of general coding I didn’t notice a huge difference. It has more use cases, but it didn’t seem to get significantly better at its existing ones in my limited testing.
You should try creating a new chat and giving it the code to debug if the AI gets stuck. This always works for me.
I've always wanted to do this with chatgpt
as someone who is mid gen z, I have never heard periodt before and don't know what it means, but I assume its like saying 'period' at the end of a sentence
i think periodt is supposed to just be period
@@Mysticscubing "periodt" is literally saying period yes, but it is unfortunately the correct way of spelling the slang and is pronounced "period-tuh", just adding a t sound right after the d sound
It's a periodt missing after the closing bracket of the loop: } periodt. It needs a periodt, the equivalent of ";", at the end of each line of code.
Frontend web devs are not even cooked at this point, they are BURNT
What this ai generates is like me quickly making working game or interpreter that works on the surface but will quickly end in bugs.
I was scared before it failed the interpreter prompt
Semicolons are needed in a for loop in javascript, right?
Yeah the two after the initialization variable and the condition are both required
@@ConnerArdman ah yeah i was confused since you said in JavaScript they weren't necessary.
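For reference, the rule being discussed: the two semicolons inside the for(...) header are required JavaScript syntax, while the semicolon at the end of a statement is optional thanks to automatic semicolon insertion, which is presumably the "not necessary" part from the video.

```js
// Both semicolons inside the for-loop header are required syntax.
// The semicolon at the end of a statement is optional (ASI fills it in).
for (let i = 0; i < 3; i++) {
  console.log(i)   // no trailing semicolon needed here
}
```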
How can I get access to this?
I'd love to see it make a game of chess or something more complex. Incredible
Just tried chess out of curiosity, it's pretty similar to the result in the video. It almost works, but it would probably take me more time to debug it than to do it myself. It was able to generate the board perfectly and piece movement mostly works. But capturing pieces doesn't seem to work correctly, and some other rules like not being able to move into check aren't working properly. Pretty impressive regardless though.
@@ConnerArdman I also tried with 4o, not o1, and the result was similar.
“decent error handling”
*has no line numbers*
my guess would be that it’s expecting the last statement in a for loop to have a ‘periodt’, so it’d be ‘i glowUp periodt)’
Think BEFORE acting?! Ah, c'mon, you must be kidding. You don't seriously believe this is the norm among Homo Devolvus, especially these days, not that it ever really was. What really separates us from the animals is that we have some comprehension regarding our own existence, abilities and indeed flaws, but more importantly, we are able to convince ourselves that we are better than we are, much better.
you need to take your meds man
Btw, it always has been reasoning.
It just does it in a process of recursion, where it asks itself for reasoning a couple of times and answers until it finds a fitting answer.
That is slower than building it into the infrastructure of the model, so GPT-5 will once again most likely respond instantly with a better architecture...
Or it will also think, but faster and better.
What app is that that you use with the HTML, JS and CSS files? Ty for your answer ❤ 17:34
It’s just Codepen if you mean the website I use to test the code
I would check if the periodt after glowUp was even right, cause you might have sent it to debug the wrong problem. I can easily see it misdiagnosing the error cause itself and then trying solutions that aren't in the right direction.
For the last code example you used, you could have tried adding "periodt" after the "glowUp" maybe. The second thing that makes me wonder is whether it enforces "periodt" after parentheses too, hence the previous error. Maybe it's requiring "semicolons".
I tried this off camera and it still didn’t work 🤷♂️
Or at the end of the bop loop.
I wonder how o1-mini would have performed, given that its supposed to specialized for coding
"periodt" is actually pronounced just like "period" btw
Well that just seems like an inefficient use of characters 😂
@@ConnerArdman tell that to the Germans; the word for city, for example, is "Stadt", which is pronounced the same way as "statt" (instead) and similar to "Staat" (state/country).
All three of these are pronounced (almost) the same and have redundant characters.
@@catdisc5304 Your comment is straight-up off. You're just throwing words into a vacuum. First of all, those words clearly imply the plural forms: Stadt → Städte, Staat → Staaten. If you mess up the singular, you'll mess up the plural too. There are rock-solid West-Germanic rules for this, trust me. And oh yeah, for some weird reason, it's the same in Dutch: stad → steden, staat → staten. Don't even get me started on English spelling, full of unnecessary silent letters!
me whos stuck with gpt4o:
_sobs in dumbness_
prompt engineering entered the chat.
This is the equivalent of "everything evolves into crabs".
I also tried to create a Gen Z programming language called "genZ++" when o1-preview came out.
I wonder how many others have done the same.
I went with more of a pythonic syntax since brackets are cringe.
ur mic sounds really good, what's the name?
Thanks! It’s a Shure SM7B, the same mic most major podcasts use.
Gpt creating Languages : I'm cooked
Zlang has actually been a thing for a long time
I wonder if its inaccurate from stackoverflow questions
Alex age 21 reminds me of techno, ik he wasnt 21 when he died, but still.
It doesn't think, mate. It's based on reinforcement learning; it's searching for a solution.
It might not think in a completely human way, but it is making some form of decision before starting to give its answer. It’s just semantics, but I’d call that thinking.
We actually have no idea exactly how it works. Nobody does. OpenAI doesn’t even know. Their lead researchers have said themselves that the machine learning is beyond their entire understanding at this point
@atsoit314 is that you, Sam? Why are you using a different username blud?
@atsoit314 so they are incompetent or lying?
@@MrMMVPP neither. You just don’t understand
yup im cooked imma jus re think my entire future rn
Hey brother, I came here from your "if I could start over" video and I watched the whole thing. My teacher always uses Vue.js at school. What about Vue.js? And React Native is for app development and React JS is for web development, right?
Hi, can you make a video on integrating the OpenAI API to analyze and explain PDF, DOCX, images and other files using JavaScript?
Not bad from reading 190,000 books!
3:59 Bro the hell did you just draw 🙏💀
Great to start things with o1; no way it can go from start to finish for me...
It will be a good study/research partner, but as a colleague to work together with? Please avoid that...
The else statement should have been "cap" not "no cap".
It will take a while before LLMs can completely replace human coders and do standalone projects. For now they can manage simple to medium-complexity tasks under strict guidance. There's a reason they are called AI coding assistants.
It will take less than 5 years
@@atsoit314 no chance. It's obvious you are not a programmer.
It just created an interpreter
you should try claude
Not Z tho, but I demand comments to start with 'sike'! Also comment sections start with 'jk' and end with 'fr'. Kidding, as I frequently type 'sike' in my comments :P
On a serious note, though, it is probably a consequence of the language choice (well, Java as a backend language...), but although this is a joke interpreter, it is still an interpreter, and the AI shot itself in the foot so hard writing it! What do you normally have with a script, a console handler, or anything like that (a console is a single-command script, right)? A caller routine that just scans through "some" command list and fires "some" command callbacks, plus a state machine to scan inside the lines. Here, though, we end up with a hard-coded list of command names that is such a huge pain in the rear to add anything to that I'd say it added more work instead of helping with the work you initially had to do.
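A minimal sketch of that command-table approach in JavaScript; the command names, handlers, and line format are invented for illustration, not taken from the video's interpreter.

```js
// Commands live in a table, so adding one means registering a callback
// rather than editing a hard-coded if/else chain.
const commands = new Map();
const register = (name, fn) => commands.set(name, fn);

register('flex', (args) => console.log(args.join(' ')));
register('yap', () => {}); // comments: do nothing

function runLine(line) {
  // Longest matching command wins, so multi-word names take priority.
  const name = [...commands.keys()]
    .filter((k) => line.startsWith(k))
    .sort((a, b) => b.length - a.length)[0];
  if (!name) throw new Error(`unknown command: ${line}`);
  return commands.get(name)(line.slice(name.length).trim().split(/\s+/));
}

runLine('flex hello world'); // prints "hello world"
```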
o1 mini should be better for code than preview
Why do I feel like GPT o1 has been coded with ChatGPT? I've coded a lot with GPT, and o1 would be the type of script ChatGPT would make, like "Humans normally think, so let's integrate that." And then how it says "thought for 12 seconds"... hm
Why does he look like Culture35??
I’ve never seen us in the same room 🤔
Why JavaScript? All the languages that you use are made in assembly; some high-level ones are made in C or C++, but no language is made in JS, and assembly is the fastest.
*writes code*
yeah!
*tries the code and it doesn't work*
hey, this error came up?
*write the code in the most repetitive way you know how*
may i have the code to the tic-tac-toe game? i need it for something.
Brooo i was having a same idea 😅
Ok, here's what I think about o1
%=5(add 5 = 3 5 )
What about a new video on whether it's worth learning PHP in 2024, and language tiers where they all suck except for JS? WE ARE WAITING🙏🙏🙏
5:29 all of the 'gen z' slang is just AAVE
you didn't save the js bro
Try catch should have been fuckAround and findOut
it's pronounced "periodt" not "period-t" lol
period-T looks like the only way to pronounce it
@@nerdboy628just say period. Periodt is supposed to sound like you’re putting more emphasis on the end, thus making it more definitive sounding
@@migsy1 ok so kinda like periot
no way
Nah, the best programmers are riddled with ADHD. We do, then think. We're fine..
Honestly, crafting a response and then crafting a clear response is kinda autistic. You do write the body first, then make it sound better. Maybe we are in trouble...
Looks like AI soon will replace programmers
I hate that GPT even understands the meanings of the slang terms/references
x>0:cap=1/0frfr
loops being bop is crazy, i didn't think gpt would say sum like that, ig it doesn't know the meaning
Anderson Elizabeth Thomas Elizabeth Smith Kevin
js is nasty
Looks like I'm out of a hobby
The "Gen Z" programming language ChatGPT "came up with" is actually a copy of a TH-camr's programming language called "bussin". You can check it out. Turns out, chatgpt is not that creative
meow meow meow meoooow meow meow meow meooow meooooow meow meow meoow meooow meooooow
kewl
Z lang ...
cool
i need to learn zlang 😂
Skibidi
dsa
O1 is terrible just horrendous.
Why’d you say that
@sm1522 It doesn't work well for me, but maybe it's because I'm used to a custom multi-agent system that I've built, customized to my tasks. But I find it does a lot of useless crap; its temperature defaults to like 1 or something. It repeats text when compiling the text from different agents. I dunno, a bunch of little things. It takes more prompts to get desired results with o1 than it does with GPT-4o or GPT-4 Turbo.
uio