"I need to ban that guy" is hilarious. I also have been thinking about it
I think you missed the point on why people prefer ChatGPT. They don't have to deal with the constant "why do you want to do it this way and not x way?" or "this post is a duplicate of this post from 1-10 years ago" followed by a response so technical that only a PhD could understand it. They can ask "can you explain it in a different way" and not get completely ghosted by an answerer who wants to live their own life and not spend an extra hour explaining the same thing again.
They don't like being judged just for asking a question and not understanding the answer. They don't like people calling them lazy because they don't know where to look, or because they get overwhelmed when they have to sift through 10-30 links on Google or Stack Overflow to find the right answer. They don't like getting downvoted because their question was too "low quality" for the voters' tastes.
These are necessary evils in the end, but the fact that a way to avoid these evils popped up and it works some of the time makes sense. People would prefer wrong answers from someone giving them their full attention than correct answers from someone who might be annoyed that they should've studied more.
I think that misunderstanding comes from thinking stack overflow is a Q&A site. It is, but there is less “Q” and more “A”. 95% of the time when one goes on stack overflow, one doesn't need to ask a question but instead looks at previous answers.
From this perspective, it makes sense why stackoverflow seems to discourage questions. Askers treat stackoverflow as a Q&A board (like, say, Quora) where questions are freely accepted. Responders see stackoverflow as a curated platform whose utility depends directly on the quality of its questions and answers. In that sense I think stackoverflow is more like reference documentation structured in Q&A format rather than a true Q&A board.
I find that stackoverflow works a lot better if you "lurk" and only ask questions as a last resort. ChatGPT is the opposite: you can freely ask questions, so in that sense it's a more direct Q&A experience.
Exactly!!!!!
Facts.
factos
Very true. For anything mildly complex or novel, ChatGPT is observably bad. Where I've found it most useful is when I'm studying something new and need to quickly summarize or distill information from a blob of text.
I like using it for that purpose as well! Also, arguing with it is sort of a way to deepen my own understanding of the topic - almost like discussing with yourself in a journal, but instead having ChatGPT respond to you in its own way (which can be insightful).
I also nowadays prefer stackoverflow, documentation, or blogs.
One of the big things I've liked with bard is that it provides links to its info sources. I've learned a ton just following links from it and reading stackoverflow or GitHub documentation.
But I'm a beginner, and even I find that GPT and Llama 2 are fairly useless for anything beyond like a 10-line function or HTML busywork.
I usually only even open them if I'm struggling with a class deadline or I'm having trouble with Google though.
the real magic happens when you iterate with more specifications every time
It’s a chatbot after all. You’re not going to get the right answer immediately. You need back and forth iterations with the bot
The problem with more iterations, for me, has always been that ChatGPT starts to mix things together and disregards parts of the original problem's context or the rest of the discussion.
I find myself debating ChatGPT on the regular. Basically telling it you're confused, or that its suggestion doesn't work, or asking why it would recommend that approach when I clearly told it what I needed.
Agreed it's just supplemental for learning... but not good to use for those specific issues.
Heyy, can anyone tell me where these live videos are held? Like on YouTube or on Discord? And what exactly will be discussed in these sessions? Is he updating his neetcode website? It seems interesting to me..
I normally do the livestreams on the main NeetCode channel. I usually react to random stuff or do random coding.
@@NeetCodeIO Do you have a set specific time when you go live? What's your schedule like?
@@NeetCodeIO No VODs on YouTube like on Twitch? I live in Asia, so it's practically impossible to catch a live at a good time.
I use chatgpt to tell me how to make snippets of code more efficient and to provide comments and stuff. Or how to approach a problem without giving me code. Like I use it to give me an idea on how to solve something rather than asking it to solve it for me. If you have the app you can just take a picture of the code too and it writes it all out and analyzes the code. It’s not perfect but I find it better than stack. You need GPT-4 though. The key is to learn how to ask gpt your questions in a way where it understands what you want through prompt crafting.
This comment right here is the truth. If you use it as a learning tool and don't cheat yourself (explicitly asking for the solution) it is amazing.
Here are some examples of how to use it as a learning tool:
"Im about to do the leetcode problem LRU cache. Before I solve the problem I want to be familiar with an LRU cache. Can you talk about it and what are its applications in industry"
After ChatGPT answered, I responded with a follow-up question:
"What are the best data structures to use to implement an LRU cache and why are they useful. Do not go into specific details about the leetcode problem though I want to solve it myself"
@@tamzidchowdhury624 I also use it this way to understand. I simply prompt GPT to explain it with real-life examples, and trust me, it helps a lot.
ChatGPT has never once given the answer, "I figured out the answer. I'm not going to post it though."
yeah 100% agree, try to ask chat to create tests for anything more complicated than a todo list.
ChatGPT can't always help you, but it can definitely help you save time reading docs in most cases. Instead of coding right away, ask it questions, discuss things, then code. Your brain is still the most important thing when you do programming.
"can't" or "hasn't yet". i agree with your points, but it will improve.
And THIS is the key. By GPT-6, chat will be able to do extremely complicated stuff with code. I don't get why people don't see it!
@@user-xedwsg They've run out of data.
@@user-xedwsg nope
L's theme so good.
ChatGPT isn't a bulletproof solution, but it isn't as awful as you're making it out to be. Have a read on how to write prompts.
exactly
I think it really depends on the language and libraries you are using and the difficulty and obscurity of the problem. Something like Spring and Java, or C, is much older and better documented, and ChatGPT has been awesome for basic syntax issues or really basic things.
That said, it does make things up quite a lot, and it is important to prompt it the right way. I think ChatGPT is awesome for syntax in a new language, for trivial, simple, well-documented things, if you know how to prompt it with small simple tasks while you maintain a larger understanding of the system.
The same issues you are experiencing with ChatGPT I have experienced with Stack Overflow, and at one point I pretty much started to exclusively use the tool's own documentation to answer my questions.
official Docs > Stack overflow > GPT
can't stand copilot at all. Too intrusive to my flow.
Yeah copilot caused multiple bugs during the stream that I only caught later in the stream.
It's still powerful and hopefully improves, but I mainly use it as an autocomplete tool atm.
When does he stream and where?
The days are pretty random for now, but I usually stream around 12PM PST on the main NeetCode channel.
Continue to use stackoverflow to get your daily dose of arrogant judgement or encouragement from the internet warriors who can’t give constructive feedback.
I second this so hard. ChatGPT is good if I need to create a function with defined inputs and outputs, but its debugging is dogshit, even the GPT-4 one.
Would like to see if the stackoverflow solution actually worked
Is the DSA roadmap you uploaded 1 yr ago still relevant? I'm an engineer with 1 yr of experience and know the basics but not DSA. If I do your NeetCode 150 problems, will I be in a position to tackle FAANG-level questions?
If you don't know anything about DSA, then first learn DSA, then try leetcode. Also, doing the NeetCode 150 alone is probably not enough to crack FAANG, but it depends.
'they're the type of people who go on stackoverflow, don't even read anything, they just copy and paste the code'
oof I feel attacked 😅 thanks for the reminder to start probing deeper and try to understand so I can be a better eng
💀same, copy-cat monkeys.....apes together.....strong😅😅, I do read through... actually disabled my Copilot subscription earlier in Feb
Ask ChatGPT how many string matching algorithms it knows, and it will reply that it knows 15. It likes to talk about the KMP and Rabin-Karp ones. In actuality there were 80+ string matching algorithms in existence by 2010; you can easily find research papers comparing 80+ of them. That's all you need to know. PS. Bard/Gemini would reply that it knows about 19 string matching algorithms.
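(Side note: since KMP and Rabin-Karp keep coming up, here's a minimal Rabin-Karp sketch in Python - a rolling-hash illustration added for context, not something from the video or a production implementation.)

```python
def rabin_karp(text: str, pattern: str, base: int = 256, mod: int = 10**9 + 7) -> int:
    """Return the index of the first occurrence of pattern in text, or -1."""
    n, m = len(text), len(pattern)
    if m == 0:
        return 0
    if m > n:
        return -1
    high = pow(base, m - 1, mod)          # weight of the window's leading character
    p_hash = t_hash = 0
    for i in range(m):                    # hash the pattern and the first window
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    for i in range(n - m + 1):
        # On a hash match, compare the substrings to rule out collisions.
        if p_hash == t_hash and text[i:i + m] == pattern:
            return i
        if i < n - m:
            # Roll the hash: drop text[i], append text[i + m].
            t_hash = ((t_hash - ord(text[i]) * high) * base + ord(text[i + m])) % mod
    return -1

# e.g. rabin_karp("ababcabcab", "abc") -> 2
```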
It doesn't matter how bad ChatGPT is as long as Stack Overflow and other devs are so mutagenically toxic that we can't bear to go near them.
Even this video is filled with the toxicity we use chat gpt to get away from.
This is really hilarious🤣
Uhhhhhggg, it's like a no-brainer. Why bother going back and forth trying to clarify a question to a mean crowd ready to grill you on the tiniest detail?
can you do more live streams plz!
You need to learn prompt Engineering before condemning ChatGPT
Prompt engineering 😂🤣🤣 gadamn the proompters are taking over
@@JRAS_ The bot is as good as your prompting 😀
'Chatgpt is just a toy'😅😅
I'm not worried about AI.
You really have no idea what you're talking about. Bookmarking this for when GPT 6-7 is out and doing all complex coding with ease...
Exactly. ChatGPT is still learning, and it's only been out for a few years compared to how long coding has been around. Of course it's not going to fix specific coding issues at a complex level. Saying ChatGPT is shit right now is like saying that 3rd graders are really stupid for not knowing calculus, or like starting an RPG and going "my goodness, the equipment on this character is weak." No shit! People who are experts in coding are fools to think that GPT will fix their issues today.
You may be right about the hype, but why are you so angry, even wanting to ban the guy who suggested just asking the chatbot? I mean, aren't you exaggerating too by calling this a toy? Really, it's all about transformers, and this was in fact a breakthrough.
I was just joking about banning the guy and even with calling chat gpt a toy. I was still using it throughout the stream to help with basic things.
@@NeetCodeIO Oh I see. I'm sorry.
This is how people felt when the first PC was launched. The rest is history.
Facts
It’s stuff like this that’s the issue. You can’t just throw a lump of code at the model and expect it to just generate the right solution. LLMs are super context dependent and if you can’t provide the right amount of detail and guide it to what you’re looking for, it will just hallucinate Bs (in your situation).
You should really learn how to prompt the model so you can actually use it in a way that is helpful. Researchers behind these models, like Andrej Karpathy, really try to nail this idea down in lectures and talks, but people just don't seem to comprehend it.
Plus, I bet if you used Copilot with access to your workspace you would've gotten a much better solution (Copilot is built on GPT-4, btw).
And you’re truly gullible if you don’t think these models won’t improve to more proficient than human to the point where developers aren’t needed. We’ve already seen people lose jobs because of it and the models aren’t “that good”, so imagine if millions still gets poured into R&D, while having a huge open-source community (like huggingface) that make improving AI accessible to all.
What is the advantage of AI in its current state then? In this case it is easier to write the code yourself than to learn how to do "prompt engineering" and just hope for somewhat decent results at some iteration
@@yankeedudlizz I find it quite useful for boilerplate code... especially on the front end, it easily builds boring UI.
@@yankeedudlizz For one, it's not a developer-dependent tool; it has a variety of use cases outside of development that make it useful. Two, I think it's much better and more practical to code it up yourself (especially when you know what to do), but I find generative AI agents useful in situations where you are stuck, just need ideas, or need boilerplate for testing. Then you can use prompt engineering to get responses that lead you to what you want. There is a reason developer productivity has increased in the last year, and it's not because we started using Stack Overflow more. I'm not trying to say it's a direct replacement for developers today, but I believe it can be in the future (iff it improves).
@@Nerdimo no doubt it is a great tool when used in addition to other tools we currently have (Google, stackoverflow, docs, etc.)
It's only replacing the bottom-of-the-barrel stuff, just like how phone operators were phased out by automated systems. Anything that actually requires human intelligence at a high level, it can't do. And I don't see how that can fundamentally change given that it just regurgitates training data; it can never exceed it, or even really be as good as a human applying knowledge, because there is tacit knowledge underlying the training data that the LLM won't comprehend.
Wow this is salty 😂
ChatGPT is infinitely more helpful than Stack Overflow. Fight me.