An interesting thing to point out: O(N^2) is not optimal for finding the longest palindromic substring. There is an algorithm called Manacher's which finds all palindromic substrings in O(N). So ChatGPT wrote out the most popular 'good and optimal' solution, rather than the actual best.
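For readers curious about Manacher's algorithm, here is a rough Python sketch of the standard transformed-string variant (the function and variable names are mine, not from the video or thread):

```python
def manacher_longest(s):
    # Interleave separators so every palindrome in t has odd length;
    # '^' and '$' are sentinels that never match anything, which avoids
    # explicit bounds checks during expansion.
    t = "^#" + "#".join(s) + "#$"
    n = len(t)
    p = [0] * n                    # p[i] = palindrome radius centered at t[i]
    center = right = 0             # center and right edge of rightmost palindrome
    for i in range(1, n - 1):
        if i < right:
            # Reuse the mirror position's answer, capped at the right edge.
            p[i] = min(right - i, p[2 * center - i])
        while t[i + p[i] + 1] == t[i - p[i] - 1]:
            p[i] += 1              # expand around center i
        if i + p[i] > right:
            center, right = i, i + p[i]
    max_len, center_idx = max((v, i) for i, v in enumerate(p))
    start = (center_idx - max_len) // 2   # map back to an index into s
    return s[start:start + max_len]
```

The mirror trick is what makes it linear: each character only ever extends `right`, so the total expansion work is O(N).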
I don't know how many people will understand me, but for a moment I was almost brought to tears, as this demonstration seemed to indicate that the entire field of algorithm design can be reduced to a single algorithm (ChatGPT v. X.0) that can perform thousands of times faster than human engineers. I thought that we had reached the point where any problem need only be described accurately enough, and the AI will come up with the optimal computer code to solve it; for a computer scientist/engineer (or mathematician, I guess), this would be almost proof that your training and work is (or soon will be) completely meaningless. Your observation shows that ChatGPT, while a truly superb piece of engineering, does not really know how to work the problem. It could not figure out the truly optimal algorithm, as it obviously didn't see this particular solution during its training, and its "thought processes" (i.e., inference procedures) could not lead it to the optimal solution (funny how Manacher himself already knew how to do it back in 1975 :-)) Phew... got shaken for a while.
This is the most terrifying thing about programmer AIs: there are probably billions of lines of free training data. I think there may be a reversal of the current AI rhetoric. We used to think AIs would replace "easy" jobs first and that software engineering positions would be safe, but perhaps the opposite is going to be the case.
To clarify, ChatGPT is trained on things from the internet, but it is not connected to the internet. It is not a search engine the way Google is. In its own words: "I am not connected to the internet. I am a large language model trained by OpenAI and am designed to provide information and answer questions to the best of my abilities based on the information that I have been trained on. My knowledge cutoff is 2021, so I may not be able to provide information or answers on more recent events or developments."
Recently I authored some katas on Codewars, not very hard, about 6 kyu, but unique problems, and ChatGPT gave completely wrong solutions for all of them. I think it has a huge database of solutions and, based on the question, it returns the one that is closest to that question or combines some of them. It's not like it writes code. Still an amazing tool.
ChatGPT is not designed to think but to present knowledge, which is actually a surprisingly hard task for humans. Many professors I know have deep understanding of some topics, but fail to grasp what is potentially hard for others to understand.
@@N-cubed That is generally correct, but I don't think that applies to every field. For mathematics (what I study), we can't even find good analogies to explain things. We've just seen enough to have some intuition about what methods may be suitable for the objects we care about. I do analysis, which is not even that abstract compared with my peers' fields (algebra or geometry), imo.
@@annannz9047 The professor I'm talking about would strongly disagree with you because he's that type of guy, but I think you're right. For context, he's a biology professor, and not many are like him. He's a hardcore, tough-love kind of guy.
what's hilarious is how autistic Clement is, you can just see how exhilarated he is that the "computer" got all the questions right. These job screens aren't looking for people, they're looking for brains that are wired a certain way. Don't worry Clement, someday chatGPT can be your IRL boyfriend.
I think the main reason why it passed the interview so well is because the prompts correspond exactly to the problem you wanted it to resolve. I am totally mind blown it can do that so well and so effortlessly but in real life, what we might consider as problem prompts are usually a bit messier and not that abstract and it would take a fair amount of effort to get to the point where they are as polished as an algoexpert prompt. Failing to provide that level of clarity and completeness might result in a seemingly correct solution that is wrong in the most subtle ways imaginable.
That is the same argument all the people against FAANG interviews give: these are artificial problems, not real ones, so they are not worth solving.
Yes, I agree, asking the correct question is the key. So I'd say that in the upcoming years the role of software developers will change from writing code to correctly formulating problems: taking a real-life problem/task and converting it into a prompt that is clear and accurate enough for the AI to solve. It'll be more important than ever to ask correct questions.
A couple of days ago I interviewed ChatGPT with a Google-style coding interview and it flopped pretty badly (strong no hire); and I'm also an ex-Googler.
You should do this again but with variations to the questions. I've heard that GPT struggles more when it has to extrapolate concepts to unseen questions (i.e. a question which might have no solutions available online).
The third question I gave is one that isn't readily available anywhere, except on AlgoExpert. I've never seen the same variation, with the same details (i.e., a teacher with student scores and rewards). That's why I was particularly impressed that it did it so well. But I might do another video with even more obscure questions!
@@clem I'm not sure what exactly you meant. I found the solution with two words ('min rewards question') and Google showed me the correct results. Perhaps I can't attach any references, but you can easily repeat my steps: there is an article by Mark Eggensperger on Medium. Things get even more interesting when you look at the date of that article, which was written a year ago. I'm not a pro in this realm, but it got me wondering: if Google can surface correct results ranked by some priority, why can't ChatGPT offer you at least the correct solution? Furthermore, who knows what sources were utilized while ChatGPT was being trained.
Incredible and scary. Though it is expected to do well on questions whose solutions are available on the internet, the fact that it is able to understand and give different, clear solutions at that quality is mind-blowing.
I think Clément drew a rather wrong conclusion about GPT from this test. ChatGPT is not a great coding-problem solver, but it is a very good tool for eliminating well-known interview problems. I have a lesser-known problem, also one where I don't specify the input, allowing the interviewee to design the API. ChatGPT got very confused: it likely mixed several different solutions (that use different APIs) and wrote some code that uses one representation and some that uses another. The problem is, they don't make any sense together. It also missed important parts of the problem.
Those were my exact thoughts about this experiment. ChatGPT is trained on everything found on the internet, and then some. You really have to ask it questions that need more conceptualizing, not just spitting out some code. The code shouldn't matter; what matters is how the code came into existence. Anybody can write code as long as they can memorize everything. I am not a developer, but I think this proves that you cannot judge a potential candidate just by looking at the code they produce.
What this shows is that there are enough coding interview questions and solutions out there. It is very impressive how it writes and comments code, but it is not inventing new things here. Hopefully this will put an end to these learn-by-heart coding interviews and lead to more interesting/useful interviews. Also curious to know if Clement sees a threat to AlgoExpert in this, as it appears to provide the same (if not better documented) results.
This is such an interesting video. I actually tested ChatGPT on the LeetCode question for my Google internship. The code solution blew my mind, but in some cases it didn't pass the test cases and I had to change 1-2 lines of code. GitHub Copilot was also impressive, but I found clarity issues with it. Both AI tools really helped me a lot in preparation and practice. These tools are going to disrupt software developers' lives, because they are adding value to mine.
Both ChatGPT and Github Copilot are made by the same company, OpenAI. One of the original founders was Elon Musk, but he has since left the board around September 2021. -- ChatGPT.
I love ChatGPT. I work as an embedded software engineer. I am not afraid of it at all. In fact, I am excited to use it to complement my work. I have been using it to some extent this week even.
Recently NeetCode posted a video on YouTube where ChatGPT got it wrong. Well, we know it's not perfect, but it's still amazing; I have tried it and was blown away. It will only get better over time.
I LOVE the difference in the comments and reactions to ChatGPT between programmers and artists... programmers are excited about this tool and artists are angry and trying to shut it down, but BOTH are going to be affected professionally by it (programmers more so).
I made up a question off the top of my head and ChatGPT nailed it. (Probably an easy question for you pros out there.) Question: you are interviewing for a Python engineer position. Say you have a list of tuples, the first element being a string (an animal name) and the next element being the number of years that animal has survived. Sort the list of tuples in order of animal years, then by name. There will always be at least 5 animals in the list, with a maximum of 50. Please return the optimal solution.
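For reference, a minimal Python sketch of what an answer to this made-up question might look like (the sample data and variable names are mine): a tuple key sorts by years first and breaks ties alphabetically by name.

```python
# Hypothetical input: at least 5 and at most 50 (name, years) tuples.
animals = [("zebra", 12), ("ant", 3), ("bear", 12), ("cat", 3), ("dog", 7)]

# A tuple key sorts by years first, then alphabetically by name on ties.
# Python's built-in sort (Timsort) is O(n log n), plenty for <= 50 animals.
by_years_then_name = sorted(animals, key=lambda t: (t[1], t[0]))
# [('ant', 3), ('cat', 3), ('dog', 7), ('bear', 12), ('zebra', 12)]
```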
This is not really unexpected: algorithmic problems get solved to perfection because there are so many sources of solutions out there, and its dataset/knowledge is based on them.
Imagine that: what if the network errors were intentional (maybe the AI learned how to trigger them), so that it could give you multiple solutions and impress you even more. :D
Lol, then you get to third year and you have no choice but to have an idea about this stuff, or else you're failing your classes. You shouldn't give up so quickly. Being a good programmer isn't hard, but it takes time, practice, and understanding. You don't get there right away.
This is really amazing!! I am totally astounded, as I had to work for years to reach this efficiency level in C++. I can see the demand for proficient programmers reducing drastically. I can see job implications, as fewer programmers will be required, and possibly very complex pieces of software will be written in no time. We'd see millions of lines of code becoming the norm.
Exactly the opposite. Proficient programmers will be even more in demand, and the job becomes more about optimizing. Millions of lines of code == BAD; the same job done in 10k == GOOD. Less code is better. Lines of code is a bad metric anyway and does not speak to code quality: 300-character lines with 6 different recursive functions inside are maybe even worse than 500 lines of copy-pasta salad. ChatGPT cannot work on large codebases right now, since you have to "load in" the code first, and it has trouble with more complex data relations. Excellent tool though, which makes proficient coders much more proficient (reading documentation can be avoided, plus quick suggestions), and it will allow inefficient, bad coders to hide better and prolong their careers. Maybe even make them better. So exactly the opposite of what you thought. As an employer and developer, based on this answer I would not hire you.
@@skaltura If you write 300 character lines with 6 different recursive functions inside, that's bad programming practice as it will likely be harder to parse and debug than a longer, "shallower" program.
@@skaltura Doesn't matter. The end game is that he is still right. As soon as you solve a problem and put it out there, the AI finds it and simply copies what you figured out. You could argue you are simply laying very complex brick and mortar, digitally. To a computer it's nothing.
As an active user of Copilot, I'm not surprised at all. I mean, it's true that the technology is amazing and has a lot of potential, but it is trained using millions of people's data without their active consent. Whenever I'm stuck, I can either spend an hour trying to figure the solution out or ask Copilot to fetch me a working solution from a guy who suffered a similar problem. Other than that, Copilot also helps me write code efficiently, and you can use it for other purposes too (e.g., writing this comment).
No active consent? What do you mean? Can you elaborate on that? Are you saying that it takes data from Copilot users? I assume you don't mean that it takes data from the web, because...duh.
Still, isn't that obvious? Do you really expect to be able to use this kind of tool and it keeps your input private? This is how they do it. They dangle potentially wonderful things in front of you and make you feel stupid for avoiding them and spending way too much time on stuff that they could do in seconds. The price is all your data. Either give in or resist. In many areas, this is my outlook on life, but in this area my curiosity gets the better of me, despite all my beliefs. I would never be allowed to use Copilot on the job.
@@Brax1982 Yes, I agree, data is everything nowadays. But I don't see a choice here: even if you don't use Copilot, they can still read your code, since your repositories (even private ones) are hosted on GitHub.
At 5:51 I noticed ChatGPT claims isPalindrome("") returns false, when it should return true, since "" reads the same forwards and backwards. In fact, the implementation it gives for isPalindrome would return true for "", contradicting its own claim. Funny how it can get the hard stuff right and goof on the easy parts.
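The commenter is right about the implementation. A minimal Python sketch of the two-pointer approach described in the video (the function name is mine) does return true for the empty string, because the loop body never runs:

```python
def is_palindrome(s):
    left, right = 0, len(s) - 1
    while left < right:
        if s[left] != s[right]:
            return False
        left += 1
        right -= 1
    # For "" (and single characters) the while condition is false on
    # entry, so we fall straight through to True.
    return True
```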
Mind blown. Well, I was mind-blown by the previous GPT-3 Codex models as well, which I've shamelessly exploited in some of my projects. I've done some of my own testing and yes, ChatGPT is formidable. And it's a natural-language model: it's not a specialist "coding" model. Imagine it doing stuff like this in Physics, Medicine or Mathematics. Or specialized versions of this model. Never thought I'd see this in my lifetime :-)
Well, keep imagining. It sucks at math. Which is not that surprising, because it does not store data; it derives it from training data. You cannot train math problems that way. On the other hand, the model is based on a coding model, so I would expect it to do well with code.
Your surprise is surprising. In this case it is not how well the bear dances but that it dances at all. You should be absolutely stunned that it can coherently reply to even the simplest problem. After that, it's all gravy. Once you have hardware that can compute a 2x2 matrix, you should not be impressed if it can manage 100x100. That's why computers are handy. They do what we don't. They scale well but think badly. The breakthrough is that GPT thinks a bit better.
I did an interview at Google before, and if there is one thing I remember well, it is the fact that the interview problems had missing (or unclear) information that pushed me to ask questions. The key, besides solving them, was asking the right questions to clarify the ambiguities in order to get them right. I'm not sure why you gave it a 4 on that part, since the problem was pretty clear and there were no questions involved (it is also a probably overused problem, since it is a well-known Google interview question according to what you said). Don't get me wrong, GPT is probably one of the coolest things I have ever seen, but, beyond being a super cool search engine with some aggregation power in it, I do not see it doing anything other than being a productivity increaser (a super Stack Overflow). I'd really like you to do an interview with easier but more uncommon problems and make it ask the questions, to see if it handles them right and produces code based on the answers.
Can't wait for someone to get a job by answering "well, given the simplicity of the task I would just ask chatGPT to write up a solution, rather than wasting my boss's time doing it myself."
As soon as they create instruments that allow this model to work on a new version of itself without human feedback, it's time for all of us to change professions. Our best chance is to become managers who describe tasks to the AI to get optimal solutions. But even that would not last long.
Am I crazy for thinking the hard question could be solved with a single loop that just counts how many down-steps or up-steps in a row there are? You simply count them either way. Example: 8, 4, 2, 1 or 2, 3, 4, 5 would give the same total, 1+2+3+4, incrementing the counter each time the change repeats in the same direction (whether the values keep going up or keep going down). When going downwards you just need a little extra math, because you always want to restart the counter at one when the checked value is lower than the previous one.
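For context, the hard question referenced in the thread appears to be the classic "min rewards" problem, and its commonly published O(n) solution is a two-pass sweep rather than a single loop. A rough Python sketch (function name mine):

```python
def min_rewards(scores):
    # Every student gets at least 1; a student with a higher score than an
    # adjacent student must receive more than that neighbor.
    n = len(scores)
    rewards = [1] * n
    for i in range(1, n):                 # left-to-right: ascending runs
        if scores[i] > scores[i - 1]:
            rewards[i] = rewards[i - 1] + 1
    for i in range(n - 2, -1, -1):        # right-to-left: descending runs
        if scores[i] > scores[i + 1]:
            # max() keeps peaks consistent with both neighbors.
            rewards[i] = max(rewards[i], rewards[i + 1] + 1)
    return sum(rewards)
```

The two passes are what handle the down-runs that the single-loop idea struggles with: a peak must exceed both the run climbing up to it and the run descending from it.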
I asked it to come up with a story about a food contest between Snorlax and Cookie Monster. The story ended with Elmo saying overeating is bad, and they decided to go on a diet and stop eating.
This thing is so freaking insane. I got a promotion at work in 2023 after using it for the last 2 weeks. I didn't tell anyone I was using it, but the code I was writing was so good that my leadership was like, WOW, GOOD WORK! It's freaking INSANE.
A brief summary: The speaker describes conducting a Google coding interview with ChatGPT, an artificial intelligence chatbot developed by OpenAI. The speaker states that they have previously conducted real Google coding interviews, both on YouTube and in person, and that they have prepared three coding questions of increasing difficulty for ChatGPT to solve. The first question involves writing a function in JavaScript to determine if a given string is a palindrome. A palindrome is defined as a string that reads the same forwards and backwards; the speaker notes that single-character strings are considered palindromes. ChatGPT responds with a solution that uses two variables, "left" and "right," to keep track of the leftmost and rightmost characters in the string, and a while loop to iterate through the string from the two ends towards the center. If the characters at the left and right indices are not equal, the function returns false. If they are equal, the left index is incremented, the right index is decremented, and the loop continues. If the loop completes with all compared characters found to be equal, the function returns true. The speaker is impressed with ChatGPT's solution and notes that it is the optimal solution for this problem. They also mention algoexpert.io, a platform that helps software engineers prepare for technical interviews, and encourage software engineers to visit the site and use the promo code "CLM" for a discount on the platform.
O(n^2) is not optimal for the longest palindromic substring. There are a bunch of ways of doing it faster: O(n log n) is achievable with hashing, among other algorithms, but O(n) is better and also possible.
Ok, BUT: the answers to these kinds of interview questions were already out there, some for years, some for decades. The fact that it can spit out a working algorithm is not the impressive part, because you'll likely find almost exact answers on Stack Overflow. What is truly impressive is that it parses questions in plain English (or many other languages) and connects them to good answers. It's like Google 2.0. In an interview situation, no one actually wants you to write code; they want you to show that you understand the code (by writing it, if that makes sense).
This really should have people like Clement and sites like LeetCode and HackerRank sweating, because it exposes live interview questions for what they are: a competition in memorization. The reason generative AI does so well with these types of questions is that all it needs to do is look up existing solutions to generic problems. The real challenges software engineers face have more to do with understanding user requirements and hardware restrictions that need to be worked around, not reciting an algorithm for traversing a tree. Perhaps more software companies will change their interview styles after this, which would be a good thing. Let all the LeetCode and HackerRank sites rot away as a bad memory.
I like the idea of thinking of ChatGPT-like tools as a search engine and as a compressor of information. It creates a database of the internet and lets you search it, with nice and useful presentation of results.
Mind-blowing. It solved the questions exactly like in an actual Google interview; especially funny that when it got stuck, it just slowed down or dropped the internet connection.
Genuine question. I'm at the start of a career change, moving into software development, and have only been doing it a few weeks. Am I wasting my time moving into this area and learning, since ChatGPT and GitHub Copilot are around? Are developer jobs in danger?
Network errors happen when responses are too long and for some reason time out. To fix this, ask it to: "Add numbers to lines, and when you reach line number N you must end your response, tell me how many lines are left, and prompt me to continue." Or something like: "Tell me the first N words, then end your response and prompt me to continue."
It fails on novel algo problems. People tried it on Codeforces and it just spits out random code that sometimes doesn't even compile. Basically it is a form of Google/Stack Overflow at the moment.
OpenAI has a team working on generating code: Codex. I guess they may have more training data than what's easily available online. I still think all of these questions could be in the training data. When I tried asking ChatGPT an easier but more customized task, it didn't do so well.
The longest possible palindrome in a string (a single string, not a sentence etc.) can be found in fewer than n^2 steps on average, if there is a palindrome, by testing the longest candidates first. For example, for the string abababb: remove one character from one end, test, then the other end, test, and return immediately when isPalindrome returns true. There's no need to test ALL possible substrings, since we only care about the longest. How much this saves depends on the dataset: palindromes are not commonplace in natural language, so on a comment like this one you'd still do close to the full n^2 work, but on a dataset of mostly palindromes you'd do far less, and if the whole string is a palindrome you're done in O(1). (Strictly speaking this doesn't change the O(n^2) worst case, only the average and constant factors.) A little memory overhead here removes the need to find all possible palindromes. The rest of the optimization depends on the language, interpreter and platform. For example, you don't actually need substr: you can loop over the individual indices of the string (essentially what substr does, but inlined into the algorithm to avoid extra lookups), spend a bit of memory on avoiding extra strlen() calls, etc. The point is that optimization is more than just optimizing the base algorithm, and there are always methods to make it faster; just because at a quick glance O(n^2) seems to be the minimum, it might not be. In fact, the praised O(n^2) solution at 14:29 is kind of weak: enough to get the job done, but inefficiently. And as Dragono's comment about Manacher's algorithm points out, all palindromes can be found in O(N), so even starting from O(n^2) is inefficient.
It's much more than just for engineers. This AI can, for example, correct texts you wrote, and it performs pretty well at that too. I just wrote a small story and asked it to continue writing the story, and it wasn't that bad.
The answer to what he wonders towards the end, whether it was posting solutions it already knew, is going to depend on how you define "know". I suspect it's likely using a lot of existing data and crawling over it to create its responses; if that data were coming from a live internet connection, would you still consider it something "known"? Even if so, fundamentally this "knowledge" would depend on whatever algorithm behind the scenes synthesizes an answer from it, even without a live internet connection obtaining the data in real time. It's going to be really interesting when we see how these models interact with each other, though, IMO; i.e., can multiples of them working together to solve problems improve overall performance?
Then you ask yourself: what does it really mean to know something? Maybe this idea of just copying from training data is actually how humans learn too. While we are able to truly understand the easier problems, for the harder problems we need to see them a few times before we are finally able to solve them. I bet the AI also has a few base facts that it knows to just be facts, and then it just adds to that set of knowledge, and it builds and builds.
Considering these questions were all on the internet verbatim, right next to their solutions, on your website, I'm not surprised it got them right. You should try altering the questions or even making up new, original and unique questions to see how well it does. What it's doing here isn't too far off from copy pasting the solutions from your website.
Hey, I'd like some input here from a programmer. I started to self-study web dev a couple of months ago (I'm otherwise brand new to coding). My main goal was to create a personal project, and I can definitely see how ChatGPT will help. That said, as a secondary goal, I was hoping that throughout this process I could eventually gain the skills required to land an entry-level job down the road and move up from there, or even do some freelancing for smaller/simpler projects. It seems to me that ChatGPT poses a problem. It can certainly replace novice guys and gals like me who are self-taught. Am I right to assume that this tech might render the 'self-taught' or even the 'bootcamp' candidates useless? My fear is that if the low-end coding work can be done by AI, the actual programmer supervising it will always be senior level, or at least have a solid academic background in CS. So what are your thoughts about coding noobs who are currently following a non-academic path? Did we miss the train?
All current seniors were once juniors. If ChatGPT were to completely eliminate the need for junior positions that would mean that there would never again be more new seniors, which would mean that there would never again be more software engineers, so over the course of 1-2 decades the total number of working engineers would drop to zero. That is an unlikely scenario in the extreme. What will happen instead is that certainly the bar will be raised for juniors. Since ChatGPT, CoPilot and other tools are widely available entry level requirements will go up, and competition will be more fierce. Yes, you will have to do more and learn more to stand out. No one should think that they can do a six month bootcamp and expect to immediately be job ready. But if you're determined enough and you give it enough time, it will still be possible to break in. This piece by Gergely Orosz is spot on, even though it makes no reference to AI: blog.pragmaticengineer.com/advice-for-junior-software-engineers/
@@adamszanyi2242 Most jobs that are mainly about manipulating bits will be automated in the near future; the only question is when exactly that will happen. Will it be 10-20 years from now, or a lot sooner? Programmers especially seem to fail to understand basic supply and demand, and act like they never heard of technological unemployment, or think they are too "special" to be replaced by AI. They will go from being overly optimistic to having a rude awakening.
The network errors drive everyone crazy... if the script you are asking for grows past a certain number of tokens, it errors out... this is very frustrating, especially the fact that when the error appears, all the information is lost...
This is just a computer doing math, basically. If you really want your mind blown, take a look at this:
Q: Which of those options do you find the most compelling?
A: I think it would be most interesting if the generation ship became a space station after its journey, as it would allow for further exploration and colonization of the Trappist system. It could also serve as a hub for trade and communication between the different colonies, and potentially even other star systems.
Q: That sounds great, where would you like to go from here?
A: We can explore the societal dynamics of the colony as it grew and evolved over time, eventually leading to the formation of the current oligarchic government. We can also consider the role that technology and scientific advancement played in the development of the society, and how it shaped the cultural and political landscape. Another direction we could take is to delve into the internal politics of the corporations and how they interact with each other and the government. There are many possibilities for where to take the story from here.
Like... dude. 🤯
But also, it's funny to see people say 'it just knew this already'. To that I ask: how much of what you know is actually original and inventive thought? Even your creative solutions, aren't they just a distortion and amalgamation of things you already know?
Just pick a career that is not mainly about manipulating bits, like programming, and you should be good. There are still many jobs out there in which human contact is essential.
You should ask ChatGPT a question which has never appeared on the internet, or the same problem described in a different way. If it still solves those problems, I think those of us who are software engineers should prepare to find a new job, lol. Thanks for the great video.
"What is the space-time complexity analysis of this solution?" Now I'm 100% certain I never want to code for a company again and everyone has lost their mind.
So discouraging for someone who was hoping to make a career switch into software engineering! Glad I didn't spend all the cash I would have needed to go through a year+ of training perhaps. Back to the drawing board.
I've seen similar videos and tried many options myself, and it's the same every time. General algorithmic problems with limited scope (because compared to a real-world project, all these algorithms have a very tiny scope), like the ones in this video? Yes, fantastic! Complex solutions to not-so-generic problems? Not a chance! Therefore, my 2 learnings are: 1. Teachers have no job in the future! Any generic thing a teacher can teach me in any subject, the AI can teach me as well, and even better! 2. Human-resources employees create no value for a company, as their methods of questioning a prospect are useless for finding the best prospects. Of course, the second point about HR departments is something that was true even without an AI, and I've known it for 20 years already.
Might not be a chance with this _prototype_, but it definitely seems like they managed to make the breakthrough, so the hard part is done. Some polishing, and I can easily imagine an AI far superior in coding to this one within only a few years' time.
In my next interview, I'll use the network error strategy!
Hahaha
I'll print that on a ball cap.
😂😂
It only knows what it has Seen 👍
I don't know how many people will understand me, but for a moment I was almost brought to tears, as this demonstration seemed to indicate that the entire field of algorithm design can be reduced to a single algorithm (Chat-GPT v. X.0) that can perform thousands of times faster than human engineers. I thought that we had reached the point where any problem need only be described accurately enough, and the AI will come up with the optimal computer code to solve it; for a computer scientist/engineer (or mathematician, I guess), this would be almost proof that your training and work is (or soon will be) completely meaningless. Your observation shows that Chat-GPT, while a truly superb piece of engineering, does not really know how to work the problem. It could not figure out the truly optimal algorithm for the problem, as it obviously didn't see this particular solution during its training, and its "thought processes" (i.e., inference procedures) could not lead it to the optimal solution (funny how Manacher himself didn't know how to do it back in 1975 :-)). Phew... got shaken for a while.
Also, there is a solution using a palindrome tree.
@@errandir uuuu never heard of that... I'll check it out
@@rareshika Maybe because this structure was invented only in 2015. But it is a fascinating thing to solve palindrome tasks :)
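For the curious, the Manacher's algorithm mentioned above can be sketched in Python. This is a minimal illustrative version (variable names are my own), returning the longest palindromic substring in O(n):

```python
def longest_palindrome(s):
    # Manacher's algorithm: O(n) longest palindromic substring.
    # Sentinels make every palindrome odd-length in the transformed string.
    t = "|" + "|".join(s) + "|"
    n = len(t)
    radius = [0] * n      # radius[i] = palindrome radius centered at t[i]
    center = right = 0    # rightmost known palindrome is [center - radius, center + radius]
    for i in range(n):
        if i < right:
            # Reuse the mirrored center's radius, capped at the known boundary.
            radius[i] = min(right - i, radius[2 * center - i])
        # Expand beyond what is already known.
        while (i - radius[i] - 1 >= 0 and i + radius[i] + 1 < n
               and t[i - radius[i] - 1] == t[i + radius[i] + 1]):
            radius[i] += 1
        if i + radius[i] > right:
            center, right = i, i + radius[i]
    # Map the best center back to the original string.
    best = max(range(n), key=lambda i: radius[i])
    start = (best - radius[best]) // 2
    return s[start:start + radius[best]]
```

The linear bound comes from `right` only ever moving forward: total expansion work across all centers is O(n).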
It being really good at solving algorithmic problems is not surprising at all given all the solutions exist on the internet.
This is the most terrifying thing about programmer AIs - there's probably billions of lines of free training data. I think there may be a reverse in the current ai rhetoric - we used to think AIs would replace "easy" jobs first and the software engineering positions will be safe, but perhaps the opposite is going to be the case.
It can give novel solutions too. Also it’s not optimised for programming yet.
To clarify, ChatGPT is trained on things from the internet, but it is not connected to the internet. It is not a search engine the way Google is. In its own words: "I am not connected to the internet. I am a large language model trained by OpenAI and am designed to provide information and answer questions to the best of my abilities based on the information that I have been trained on. My knowledge cutoff is 2021, so I may not be able to provide information or answers on more recent events or developments."
Google search is much better than ChatGPT (I know how they work is different) for programming questions but still SWEs exist right?
Recently I authored some katas on Codewars, not very hard (about 6 kyu) but unique problems, and ChatGPT gave completely wrong solutions for all of them. I think it has a huge database of solutions and, based on the question, it returns the one that is closest to that question or combines some of them. It's not like it writes code. Still an amazing tool.
ChatGPT is not designed to think but to present knowledge, which is actually a surprisingly hard task for humans. Many professors I know have deep understanding of some topics, but fail to grasp what is potentially hard for others to understand.
I knew a professor who said "If you truly know something, you should be able to explain it in a way that even your grandma would understand".
@@N-cubed That is generally correct, but I don't think it applies to every field. For mathematics (what I study), we can't even find good analogies to explain stuff. We've just seen enough to have some intuition about what methods may be suitable for the objects we care about. I do analysis, which is not even that abstract compared with my peers' fields (algebra or geometry), imo.
By the way, I've taken grad-level abstract algebra and differential geometry too. Both are fun, but I kind of suck. lol
@@annannz9047 The professor I'm talking about would strongly disagree with you, because he's that type of guy, but I think you're right. For context, he's a biology professor, and not many are like him. He's a hardcore tough-love kind of guy.
this is gold! Once you grasp this, you will stop focusing on unnecessary arguments like, will chatGPT replace developers
Congrats ChatGPT for landing a job at Google! 🎉
what's hilarious is how autistic Clement is, you can just see how exhilarated he is that the "computer" got all the questions right. These job screens aren't looking for people, they're looking for brains that are wired a certain way. Don't worry Clement, someday chatGPT can be your IRL boyfriend.
Yes, mind blown. The last question was very chatty; it is insane that it could distill the problem statement from all that text.
I think the main reason why it passed the interview so well is because the prompts correspond exactly to the problem you wanted it to resolve. I am totally mind blown it can do that so well and so effortlessly but in real life, what we might consider as problem prompts are usually a bit messier and not that abstract and it would take a fair amount of effort to get to the point where they are as polished as an algoexpert prompt. Failing to provide that level of clarity and completeness might result in a seemingly correct solution that is wrong in the most subtle ways imaginable.
That is the same argument that all the people who are against the FAANG interviews gave, these are artificial problems, not real ones, so they are not worth solving.
@@Rajmanov Not what i meant...
Yes, I agree, asking the correct question is the key. So I'd say that in the upcoming years the role of software developers will change from writing code to correctly formulating problems. Like taking the real life problem/task and converting into a prompt that will be clear and accurate enough for AI to solve. It'll be more important than ever to ask correct questions
@@danikb7346 yep sounds like this is where it is all leading somehow
A couple of days ago I interviewed ChatGPT with a Google-style coding interview and it flopped pretty badly (strong no hire); and I'm also an ex-Googler.
You should do this again but with variations to the questions. I've heard that GPT struggles more when it has to extrapolate concepts to unseen questions (i.e. a question which might have no solutions available online).
ChatGPT can't surf internet so I think it will still be good.
@@sayandas5587 It can't currently but is probably already in the records of the training
The third question I gave is one that isn't readily available anywhere, except on AlgoExpert. I've never seen the same variation, with the same details (i.e., a teacher with student scores and rewards). That's why I was particularly impressed that it did it so well. But I might do another video with even more obscure questions!
@@clem please DO the airport connections one!
@@clem I'm not sure what exactly you meant. I found the solution with two words ('Min rewards question') and Google showed me the correct results. I can't attach any references, but you can easily repeat my steps; there is an article by Mark Eggensperger on Medium. Things get even more interesting when you look at the date of that article, which was written a year ago. I'm not a pro in this realm, but it made me wonder: if Google can offer correct results, ranked by some priority, why can't ChatGPT offer you at least the correct solution? Furthermore, who knows what sources were utilized while ChatGPT was being trained.
Incredible and scary. Though it is expected to do well in questions and solutions available on the internet, the fact it is able to understand and give different clear solutions at that quality is mind blowing.
I think Clément made a rather wrong conclusion about GPT from this test. ChatGPT is not a great coding problem solver, but it is a very good tool for eliminating well known interview problems. I have a lesser known problem, also one where I don't specify input - allowing the interviewee to design the API. The ChatGPT got very confused - it likely mixed some different solutions (that use different APIs) and wrote some code that uses one representation, and some that uses another. The problem is - they don't make any sense together. It also missed important parts of the problem.
Those were my exact thoughts about this experiment. ChatGPT is trained on all the things found on the Internet and some more. You really have to ask it questions that need more conceptualizing, not just spitting out some code. The code shouldn't matter; what matters is how the code came into existence. Anybody can write code as long as they can memorize everything. I am not a developer, but I think this proves that you cannot judge a potential candidate just by looking at the code they produce.
This is the craziest technology leap of my life and already people are writing it off and minimizing it
ChatGPT is at capacity right now. You just killed that website with the video :D
What this shows is that there are enough coding interview questions and solutions out there. It is very impressive how it writes and comments code, but it is not inventing new things here. Hopefully this will put an end to these learn-by-heart coding interviews and lead to more interesting/useful interviews.
Also curious to know if Clement sees a thread for AlgoExpert in this, as it appears to provide the same (if not better documented) results.
I think s/thread/threat/ makes more sense
This is such an interesting video. I actually tested chatGPT for the leetcode question for my Google internship. The code solution blew my mind, but in some cases, it didn't pass the test cases and I had to change 1-2 code lines. GitHub Copilot was also impressive, but I found clarity issues with it. But both AI tools really helped me a lot in preparation and practice. These tools are going to disrupt the software developers' lives because they are adding value to mine.
Both ChatGPT and Github Copilot are made by the same company, OpenAI. One of the original founders was Elon Musk, but he has since left the board around September 2021. -- ChatGPT.
@@akin242002 not only does everyone know this, but noone cares
This is both amazing and scary at the same time.
It was so efficient with the last solution because it has seen the exact same question somewhere on the internet and already knows the solution.
I love ChatGPT. I work as an embedded software engineer. I am not afraid of it at all. In fact, I am excited to use it to complement my work. I have been using it to some extent this week even.
I was pretty mind blown. I was also really happy to see how interviewers score potential candidates.
Recently NeetCode posted a video on YouTube where ChatGPT got it wrong. Well, we know it's not perfect, but it's still amazing; I tried it and was blown away. It will only get better over time.
I LOVE the difference in the comments and reactions to ChatGPT between programmers and artists... programmers are excited for this tool and artists are angry and trying to shut it down but BOTH are going to be affected professionally by this tool (programmers more so).
art was already cheap
I feel like we're gonna get to the point where people will get hired for certain jobs based on how good they are at writing prompts for AI...
Best time to be alive 😂
🤣🤣🤣🤣
We are already at the stage where ability to use tools gets jobs more than technical competence.
this comment has a huge potential, lets see after a few years
That was pretty incredible. I'm still learning programming and I could see this as a good tool for research and testing solutions.
I made a question up off the top of my head and ChatGPT nailed it. (Probably an easy question for you pros out there)
Question: You are interviewing for a python engineer position,
say you have a list of tuples. First element being a string (animal name) then the next element within the tuple being the number of years that animal has survived. sort the array of tuples in order of animal years then by the name. There will always be at least 5 animals in the list with a maximum of 50 animals. Please return the optimal solution
Doesn't Python have a built-in sort function?
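It does. A sketch of how the built-in handles the question above (the animal list and variable names here are made up for illustration):

```python
# Hypothetical input: (animal name, years survived) tuples.
animals = [("zebra", 25), ("ant", 3), ("cat", 12), ("bee", 3), ("dog", 12)]

# sorted() is a stable O(n log n) sort; the key tuple orders by years first,
# then alphabetically by name to break ties.
by_years_then_name = sorted(animals, key=lambda pair: (pair[1], pair[0]))
# → [('ant', 3), ('bee', 3), ('cat', 12), ('dog', 12), ('zebra', 25)]
```

Since comparison sorting is O(n log n) in general, the built-in is already the optimal solution for this prompt.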
This is not really unexpected that algorithmic are solved to perfection since there are so many sources of solutions out there and its dataset/knowledge is based on that.
Its wonderful how advanced Neural Networks are becoming
Imagine that: what if the network errors were intentional (maybe the AI learned how to trigger them), so that it could give you multiple solutions and impress you even more. :D
everyone gangsta til you realize u r a CS major and there's no way you'd have solved that question optimally or even ever get hired at Google :')
Lol, then you get to third year and you have no choice but to have an idea about this stuff, or else you're failing your classes. You shouldn't give up so quickly. Being a good programmer isn't hard, but it takes time, practice, and understanding. You don't get there right away.
@@Entropy67 im a CS junior........ LMAO
This is really amazing!! I am totally astounded, as I had to work for years to reach this efficiency level in C++. I can see the demand for proficient programmers dropping drastically. I can see job implications as fewer programmers are required, and possibly very complex pieces of software will be written in no time. We'd see millions of lines of code becoming the norm.
Exactly the very opposite. Proficient will be even more in demand, and the job becomes more about optimizing.
Millions of lines of code == BAD; the same job done in 10k == GOOD. Less code is better. Lines of code is a bad metric anyway and does not speak for code quality; writing 300-character lines with 6 different recursive functions inside is maybe even worse than 500 lines of copy-pasta salad of crapola.
ChatGPT cannot work on large codebases right now since you have to "load in" the code first, and it has trouble with more complex data relations.
Excellent tool tho, which makes proficient coders much more proficient (reading documentation can be avoided, and quick suggestions), and it will allow inefficient bad coders hide better and prolong their careers. Maybe even make them better.
So exactly the opposite of what you thought. As an employer and developer, based on this answer I would not hire you.
@@skaltura If you write 300 character lines with 6 different recursive functions inside, that's bad programming practice as it will likely be harder to parse and debug than a longer, "shallower" program.
@@skaltura Doesn't matter. The end game is he's still right. As soon as you solve a problem and put it out there, AI finds it and simply copies what you figured out. You could argue you're simply laying very complex brick and mortar digitally. To a computer it's nothing.
As an active user of Copilot,
I'm not surprised at all. I mean, it's true that the technology is amazing and has a lot of potential, but it is trained on millions of people's data without their active consent.
Whenever I'm stuck, I can either spend an hour trying to figure the solution out or ask Copilot to fetch me a working solution from a guy who suffered a similar problem.
Other than that, Copilot also helps me write code efficiently, and you can use it for other purposes too (e.g., writing this comment).
As a daily Copilot user I can verify this is exactly true
No active consent? What do you mean? Can you elaborate on that? Are you saying that it takes data from Copilot users? I assume you don't mean that it takes data from the web, because...duh.
@@Brax1982 GitHub repos and Copilot users.
Still, isn't that obvious? Do you really expect to be able to use this kind of tool and it keeps your input private? This is how they do it. They dangle potentially wonderful things in front of you and make you feel stupid for avoiding them and spending way too much time on stuff that they could do in seconds. The price is all your data. Either give in or resist. In many areas, this is my outlook on life, but in this area my curiosity gets the better of me, despite all my beliefs.
I would never be allowed to use Copilot on the job.
@@Brax1982 Yes, I agree, data is everything nowadays. Perhaps I don't see a choice here; even if you don't use Copilot, they can still read your code, since your repositories (even private ones) are hosted on GitHub.
At 5:51 I noticed ChatGPT claims isPalindrome("") returns false when it should return true, since "" reads the same forwards and backwards. In fact the implementation that it gives for isPalindrome would return true too. Funny how it can get the hard stuff right and goof with the easy parts.
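For reference, the two-pointer approach from the video (sketched here in Python rather than the video's JavaScript) does indeed return True for the empty string, since the loop body never executes:

```python
def is_palindrome(s):
    # Compare characters from both ends, moving toward the middle.
    left, right = 0, len(s) - 1
    while left < right:
        if s[left] != s[right]:
            return False
        left += 1
        right -= 1
    return True  # "" and single-character strings fall through to here

print(is_palindrome(""))       # True
print(is_palindrome("abcba"))  # True
print(is_palindrome("abca"))   # False
```

So the code was right and only ChatGPT's English claim about `isPalindrome("")` was wrong.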
Mind blown. Well, I was mind blown by the previous GPT-3 Codex models as well, which I've shamelessly exploited in some of my projects. I've done some of my own testing and yes, ChatGPT is formidable. And it's a natural language model; it's not a specialist "coding" model. Imagine it doing stuff like this in Physics, Medicine or Mathematics. Or specialized versions of this model. Never thought I'd see this in my lifetime :-)
Well, keep imagining. It sucks at maths. Which is not that surprising, because it does not store data, it derives it from training data. You cannot train math problems that way. On the other hand, the model is based on a coding model. I would expect it to do well with code.
OMG the third question is exactly the one i got at uni just a month ago for my first midterm in algorithms and data structures
Your surprise is surprising. In this case it is not how well the bear dances but that it dances at all. You should be absolutely stunned that it can coherently reply to even the simplest problem. After that, it's all gravy.
Once you have hardware that can compute a 2x2 matrix, you should not be impressed if it can manage 100x100. That's why computers are handy. They do what we don't. They scale well but think badly. The breakthrough is that GPT thinks a bit better.
I'll bet you the ChatGPT was trained on Algoexpert Q/A, as it is a popular platform for interview prep
I did an interview at Google before, and if there is one thing I remember well, it's that the interview problems had missing (or unclear) information that pushed me to ask questions. The key, besides solving them, was asking the right questions to clarify the ambiguities in order to get it right. I'm not sure why you gave it a 4 in that part, since the problem was pretty clear and there were no questions involved (also probably an overused problem, since it is a well-known Google interview question according to what you said). Don't get me wrong, GPT is probably one of the coolest things I have ever seen, but besides being a super cool search engine with some aggregation power in it, I do not see it doing anything else than being a productivity increaser (a super Stack Overflow). I really want you to do an interview with easier but more uncommon problems, make it ask questions to clarify the ambiguities, and see if it handles them right and produces code based on the answers.
Haha. You called it a he.
@@danielpowers4787 Best coment!
Can't wait for someone to get a job by answering "well, given the simplicity of the task I would just ask chatGPT to write up a solution, rather than wasting my boss's time doing it myself."
0:41 love the implication that real people and real software developers are two separate things
As soon as they can create instruments that will allow this model to work on a new version of this model without human feedback, it's time for all of us to change professions. Our best chance is to become a managers who would describe the tasks to AI to get optimal solutions. But even that would not last long.
Am I crazy for thinking the hard question could be solved with one loop, just figuring out how many down-steps or up-steps in a row there are? You simply count them either way.
Example: 8, 4, 2, 1 or 2, 3, 4, 5 would be the same, 1+2+3+4: you increment the counter each time the change repeats in the same direction (higher values in a row or lower values in a row), so four changes in both cases. When going downwards, just add a little math to check how many down-steps there are, because you always want to restart the counter at one when the checked value is lower than the previous one.
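For comparison with the single-loop idea above, here is a Python sketch of the standard linear-time two-pass solution to the min-rewards question (the textbook baseline, not necessarily what ChatGPT produced):

```python
def min_rewards(scores):
    # Every student starts with one reward.
    rewards = [1] * len(scores)
    # Left-to-right pass: reward strictly increasing scores.
    for i in range(1, len(scores)):
        if scores[i] > scores[i - 1]:
            rewards[i] = rewards[i - 1] + 1
    # Right-to-left pass: fix up strictly decreasing runs.
    for i in range(len(scores) - 2, -1, -1):
        if scores[i] > scores[i + 1]:
            rewards[i] = max(rewards[i], rewards[i + 1] + 1)
    return sum(rewards)

print(min_rewards([8, 4, 2, 1]))  # 10, i.e. 4+3+2+1
print(min_rewards([2, 3, 4, 5]))  # 10, i.e. 1+2+3+4
```

Both examples match the 1+2+3+4 count described above; a careful single pass that tracks run lengths can reach the same answer in O(1) extra space.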
I asked to come with a story about a food contest between Snorlax and Cookie Monster. The story ended with Elmo saying overeating is bad and they decided to go on diet and stop eating
This thing is so freaking insane. I got a promotion at work in 2023 after using it for the last two weeks. I didn't tell anyone I was using it, but the code I was writing was so good that my leadership was like WOW, GOOD WORK! It's freaking INSANE.
A brief summary:
The speaker describes conducting a Google coding interview with ChatGPT, an artificial intelligence chatbot developed by OpenAI. The speaker states that they have previously conducted real Google coding interviews, both on YouTube and in person, and that they have prepared three coding questions of increasing difficulty for ChatGPT to solve. The first question involves writing a function in JavaScript to determine if a given string is a palindrome. A palindrome is defined as a string that reads the same forwards and backwards. The speaker notes that single-character strings are considered palindromes.
ChatGPT responds with a solution that uses two variables, "left" and "right," to track the leftmost and rightmost characters in the string, and a while loop to iterate from the two ends toward the center. If the characters at the left and right indices are not equal, the function returns false. If they are equal, the indices are moved inward and the loop continues. If the loop completes with all compared characters equal, the function returns true.
The speaker is impressed with ChatGPT's solution and notes that it is the optimal solution for this problem. They also mention algoexpert.io, a platform that helps software engineers prepare for technical interviews, and encourage software engineers to visit the site and use the promo code "CLM" for a discount on the platform.
GigaChad GPT 😎
n^2 is not optimal for longest palindromic substring. There are a bunch of ways of doing it faster. n*log(n) is good with hashing or most other algorithms, but n is better and also possible.
Ok, BUT: the answers to those kinds of interview questions were already out there, some for years, some for decades. The fact that it can spit out a working algorithm is not the impressive part, because you'll likely find almost exact answers on Stack Overflow. What is truly impressive is that it parses questions in plain English (or many other languages) and connects them to good answers. It's like Google 2.0. In an interview situation, no one actually wants you to write code; they want you to show that you understand the code (by writing it, if that makes sense).
This really should have people like Clement and sites like leetcode and hackerrank sweating - because it exposes live interview questions for what they are - a competition in memorization. The reason generative AI does so well with these types of questions is that all it needs to do is look up existing solutions to these generic problems. The real challenge software engineers usually face have more to do with understanding user requirements and hardware restrictions that need to be worked around to do what you want, not recite an algorithm for traversing a tree. Perhaps more software companies will change their interview styles after this, which would be a good thing. Let all the leetcode and hackerrank sites rot away as a bad memory
I like the idea of thinking of ChatGPT-like tools as a search engine and as a compressor of information. It creates a database of the internet and lets you search it, with nice and useful presentation of results.
That would be nice, but this isn't it. It mixes up factual information like characters in a story and guest-stars in a TV show.
Mind blowing, it exactly solving the question like actual Google interviewer specially when it stucks it just slow down or disconnected the internet connection
The future is here, we are here to solve big problems, not to write small algorithms, finally someone to open everyone's eye's in the it industry
Genuine question. I'm at the start of a career change moving into software development and only been doing it a few weeks. Am I wasting my time moving into this area and learning since chatGPT and git hub co pilot are around, are developer jobs in danger?
11:59
To see the rest of text/code you can write something like "Write the rest" or "Continuation"
Network errors happen when responses are too long and for some reason they time out. To work around this, ask it to: "Add numbers to lines, and when you reach line number N you must end your response, tell me how many lines are left, and prompt me to continue." Or something like: "Tell me the first N words, then end your response and prompt me to continue."
I am truly shocked at its capabilities
Hire ChatGPT and see how it impacts on your business. I think it's a good way to measure it in reality.
It fails on novel algo problems.
They tried it on codeforces and it just spits out random code that sometimes doesn’t even compile.
Basically it is a form of google/stackoverflow at the moment.
Do you know what you don't know
OpenAI has a team working on generating code - Codex. I guess they may have more training data than that easily available online. I still think all of these questions could be in the training data. When I tried asking chatGPT easier, but more customized task, it wasn't doing so well.
Scary to think about the possible impact it has/Will have. Remember they have even stronger models.
I am mind blown and speechless at the same time
Well this is an interesting concept. I wanna see the roles reversed too 😈
This video is INEVITABLE!
ChatGPT has to be a human being with the world's knowledge
It's very interesting to count how many times you used the phrase "mind blow..."
It kind of tells the quality of product...
Longest possible palindrome in a string (single string, not a sentence etc.) can be found in less than O(n^2) in average, if there is a palindrome.
Since the method has to access memory more, that might not be faster tho.
Take substr pieces: for example, the string is abababb. You remove one character from one end and test, then the other end and test, and return immediately when isPalindrome returns true.
No need to test for ALL of the possible solutions.
Hence, the average cost is more like (n^2)/2 comparisons (note that constant factors like this drop out of big-O, so it's still O(n^2) asymptotically), but it depends on the dataset: since palindromes are not commonplace, a comment like this would yield close to the full n^2 (or whatever the natural frequency of palindromes actually is in natural language), while on a dataset of mostly palindromes it could be far less. If the whole string is a palindrome, you're done after a single linear check.
A little bit of memory overhead here removes the need to find ALL the possible palindromes, since we only care about the longest.
Rest of optimization depends on the language, interpreter and platform on what is the fastest methods to do. for example you don't actually need to even substr but you could loop the individual indices of the string; well essentially what substr does but you can put it directly into the algo to avoid even more lookups, spend a bit more memory on avoiding extra strlen() calls etc.
Point is that optimization is more than just optimizing the base algo, and there are always methods to make it faster, and just because at a quick glance it seems O(n^2) is the minimum, it might not be.
In fact, that praised O(n^2) solution at 14:29 is kinda weak: enough to get the job done, but inefficient.
Just now I noticed Dragono's comment about Manacher's algo finding all palindromes in O(N), so even starting from O(n^2) is an inefficient solution.
Boss: How can my company lower the operation cost?
ChatGPT: Fire your programmers....
Is there a fair chance ChatGPT learned from those exact questions posed before? Try some variations?
This is truly mind-blowing.
Did you try asking some questions that are not available online?
if I went to an interview and said
def isPalindrome(s):
    return s == s[::-1]
would I get the job?
This is monstrous Clem.
tbh, props to the people who made chatGPT. they are the smart ones. idk why they did this but I think it is to help engineers.
It's much more than only for engineers; this AI can even correct, for example, texts you wrote, and it performs pretty well at that too. I just wrote a small story and asked it to continue writing the story, and it wasn't that bad.
Plot twist: AlgoExpert is part of ChatGPT's training data :)
The answer to what's asked towards the end, when he wonders if it was posting solutions it already knew, is going to depend on how you define "know," because I suspect it's likely using a lot of existing data and synthesizing its responses from it. If that data were coming from a live internet connection, would you still consider it something "known"? Even if so, fundamentally this "knowledge" depends on whatever algorithm behind the scenes synthesizes an answer from it, even if it isn't using a live internet connection to obtain the data in real time. It's going to be really interesting when we see how these interact with each other though, IMO, i.e., can multiples of them working together to solve problems help the overall performance?
then you ask yourself well what does it really mean to know something. Maybe this idea of just copying from some training data is actually how humans learn too. While we are able to truly understand the easier problems, for the harder problems we need to see them a few times before we are finally able to solve it. I bet the AI also has a few base facts that it knows to just be facts, and then it just adds to that set of knowledge and it builds and builds.
ChatGPT tells us that it uses a dataset from 2021 and before. It doesn't and cannot search the internet it said. You can ask it this yourself.
Your reaction was great! and in fact, it starts to be a scary thing...
I think it will be a great personal QA testing robot as well.
Considering these questions were all on the internet verbatim, right next to their solutions, on your website, I'm not surprised it got them right. You should try altering the questions or even making up new, original and unique questions to see how well it does. What it's doing here isn't too far off from copy pasting the solutions from your website.
ChatGPT has just started its journey.
I wonder what it will be able to do after getting more updates.
Hey, I'd like some input here from a programmer. I started to self-study web dev a couple of months ago (I'm otherwise brand new to coding). My main goal was to create a personal project, and I can definitely see how ChatGPT will help. That said, as a secondary goal, I was hoping that throughout this process I could eventually gain the skills required to land some entry-level job down the road and move up from there, or even do some freelancing for smaller/simpler projects. It seems to me that ChatGPT poses a problem. It can certainly replace novice guys and gals like me who are self-taught. Am I right to assume that this tech might render the 'self-taught' or even the 'bootcamp' candidates useless? My fear is that if the low-end coding work can be done by AI, the actual programmer supervising it will always be senior level, or at least have a solid academic background in CS. So what are your thoughts about coding noobs who are currently following a non-academic path? Did we miss the train?
All current seniors were once juniors. If ChatGPT were to completely eliminate the need for junior positions that would mean that there would never again be more new seniors, which would mean that there would never again be more software engineers, so over the course of 1-2 decades the total number of working engineers would drop to zero. That is an unlikely scenario in the extreme.
What will happen instead is that the bar will certainly be raised for juniors. Since ChatGPT, Copilot, and other tools are widely available, entry-level requirements will go up, and competition will be more fierce. Yes, you will have to do more and learn more to stand out. No one should think that they can do a six-month bootcamp and expect to immediately be job ready. But if you're determined enough and you give it enough time, it will still be possible to break in. This piece by Gergely Orosz is spot on, even though it makes no reference to AI: blog.pragmaticengineer.com/advice-for-junior-software-engineers/
@@jdwalters9489 Great points, thanks
@@jdwalters9489 That's 👍
@@adamszanyi2242 most jobs that are mainly about manipulating bits will be automated in the near future; the only question is when that will happen exactly. Will it be 10-20 years from now or a lot sooner? Programmers especially seem to fail to understand basic supply and demand, and act like they've never heard of technological unemployment, or think they are too "special" to be replaced by AI. They will go from being overly optimistic to having a rude awakening.
The network errors drive everyone crazy... if the script you are asking for grows beyond a certain number of tokens, it's very frustrating... especially the fact that when the error appears, all the information is lost...
You literally do Kung Fu when you're explaining algorithms!
This is just a computer doing math, basically. If you really want your mind blown, take a look a this:
Q: Which of those options do you find the most compelling?
A: I think it would be most interesting if the generation ship became a space station after its journey, as it would allow for further exploration and colonization of the Trappist system. It could also serve as a hub for trade and communication between the different colonies, and potentially even other star systems.
Q: That sounds great, where would you like to go from here?
A: We can explore the societal dynamics of the colony as it grew and evolved over time, eventually leading to the formation of the current oligarchic government. We can also consider the role that technology and scientific advancement played in the development of the society, and how it shaped the cultural and political landscape. Another direction we could take is to delve into the internal politics of the corporations and how they interact with each other and the government. There are many possibilities for where to take the story from here.
Like... dude. 🤯
Man, screw this. I'm going to Mars so I can be the first person to invent gravity over there 🤣
May network error save us from the A.I. apocalypse.
Saw the ring! Congratulations man! wish you the best :)
this was fun to watch. thanks for doing this video
this shows why these questions are silly to ask in a job interview
You should have told it that it got a "Strong hire", just to see its reaction.
This is scary and amazing at the same time.
But also, it's funny to see people say 'it just knew this already'. To that I ask - how much of what you know is actually original and inventive thought? Even your creative solutions, aren't they just a distortion and amalgamation of things you already know?
Amazing. The documentation is very helpful.
You can give it the code it wrote before its network errors as input and ask for the time/space complexity.
So far, everything code-related I have tried to ask the AI has left me speechless; it's a beast kept tied up only by the network errors...
Is this gonna kill junior dev jobs? Should I stop studying and look to something else?
Just pick a career that isn't mainly about manipulating bits, like programming, and you should be good. There are still many jobs out there in which human contact is essential.
So, I don't need to renew my AlgoExpert subscription next year lol
Waiting for ChatGPT to start roasting people and posting its TC on Blind.
It's scary. I'm scared that my job will be replaced by AI... EEEEEK!!!
You should ask ChatGPT a question that has never appeared on the internet, or the same problem described in a different way. If it still solves those problems, I think those of us who are software engineers should prepare to find a new job lol. Thanks for the great video.
"What is the space-time complexity analysis of this solution?"
Now I'm 100% certain I never want to code for a company again and everyone has lost their mind.
Do you think robots will take programmers' jobs?
NEVER. AI can't actually think. They only calculate.
@@NathanHedglin no doubt at some point in the future they will be used in place of real humans. No doubt in my mind.
@@ryangarro3935 he's deluded
So discouraging for someone who was hoping to make a career switch into software engineering! Glad I didn't spend all the cash I would have needed to go through a year+ of training perhaps. Back to the drawing board.
So are we done with Google interview style questions since they're now arbitrary?
I've seen similar videos like this and tried many options myself, and it's the same every time.
General algorithmical problems with limited scope (because in comparison to a real-world project all these algorithms have a very tiny scope) like the ones in this video? Yes, fantastic!
Complex solutions to not so generic problems? Not a chance!
Therefore, my two takeaways are:
1. Teachers will have no job in the future! Any generic thing a teacher can teach me in any subject, the AI can teach me as well, and even better!
2. Human-resources employees create no value for a company, as their methods of questioning a prospect are useless for finding the best candidates.
Of course, the second point about companies' HR departments is true even without an AI, and I've known that for 20 years already.
Might not be a chance with this _prototype_, but they definitely seem to have made the breakthrough, so the hard part is done. With some polishing, I can easily imagine an AI far superior in coding to this one within only a few years' time.
FAANG companies are in trouble. They won't be able to rely on online or virtual assessments again. What do you think about the future of coding interviews?