Me: Know what’s _not_ my favourite thing? Debugging other people’s code. Copilot: Hey, how would you like to spend most of your dev time debugging other people’s code?
@@shawningtoneven Even trying to use it to learn something totally new, you can catch obvious mistakes, and sometimes it will continue to repeat them ad nauseam.
@user-lw5wm2hg7s Ironically, it seems that marketing people will be the first in line to be replaced by AI. Both of them work on numbers and statistics anyway.
@@MatichekTH-cam The difference between visual composers and text code is that a single visual node is ~equivalent to a single line of code. If visual was so much better, programmers would have used it themselves.
Got my first car around '95. It had a manual transmission. It was awesome, my car felt like an extension of my body. Bought my first car with an automatic transmission in 2012. While the experience wasn't as fun, it was still awesome because I no longer dreaded my 45m+ commute in stop and go traffic. Bought my first car with some AI self driving in 2022. It was awesome because after a long drive I didn't feel drained. I strongly think I'm a better AI supervisor because I have decades of experience driving without it. I worry about people getting into any skill or craft using automation without first understanding the older, manual processes. A few months ago I almost got into a head on collision in a wonky intersection. By the time I knew what had happened, I had already overridden the AI, swerved out of the path of the oncoming car, and came to a safe stop all on instinct. Don't think a teenager using my car would've been able to do the same. I think developers will see the same.
Absolutely! There is no replacement (yet) for years of reasoning through code. I use Copilot to help me with coding, but find that it occasionally takes me down the wrong path. My years of experience coding allow me to notice this and redirect it; were I less experienced, I would end up letting it write code that runs, but has dangerous corner cases.
Exactly my thoughts on it. It's similar to the argument about resources during exams and stuff in schools: students are going to have those resources in the workplace, why not give them those during tests? Problem is now they rely on regurgitating from the resources and have no genuine knowledge for themselves. The resources act as a crutch, rather than a reference. An AI programming assistant is great when you know what you're looking for: it can hopefully get something close enough that you just need to tweak it a bit to make it work for your specific situation. But if it's wrong? Well, you better be able to tell.
8:52 Don't worry about that, only like 40% of the students in my just-completed Computer Science class used ChatGPT and Copilot exclusively to crawl through the coursework.
Entry level is where copilot causes the most problems. New devs aren't often aware of industry standards, best practices, or good, well engineered solutions. They're better off getting it wrong on their own in hopes they ultimately solve it than to lean on a LLM they don't understand.
If you work in a large legacy system, with not much documentation, and a ton of business rules enforced through copied and pasted code, and need to interact with it respecting existing rules, you do end up reading 10x more than you write, because no function or other abstraction sums up what you need done, separated from details of the most user facing part of the old system. (could be UI forms, could be http controllers etc). It takes even more reading to decide the least damaging place (or if you're lucky it feels like a search for the most appropriate place) to make changes.
Man, do you work with me?! Also, finding the best place to put your change to ensure the least amount of testing resources are needed... even more important than how it performs lol. got to love it.
He also read that line wrong. Ironically, it didn't say "people spend more time reading", it said "code spends more time being read", because multiple people will need to read that code.
I work in the consulting world and I spend 90% of my time reading code. On a good day, I might get 3 productive hours coding if there isn't a ton of meetings. If there are 2-4 meetings that day, I might get 1 hour to code. More important than writing code is making code easy to maintain and well documented.
17:07 For every line of code written at my office at least 2 people need to read it during review. Then an additional 2 people need to read it for testing and review. So I kinda believe that in general a lot more time is spent on reading code compared to writing it.
Yeah, and I'd be surprised if you didn't re-read your code and the surrounding code it interacts with as you're writing. Maybe 10 to 1 is too high, but it is undeniable that you read more than you write.
So nobody puts an LGTM on code reviews? Nobody reads code. Nobody spends time thinking "what if this implementation is wrong?" or "what if I have to maintain this code in 6 months, when the requirements change?" It doesn't matter if your branch protections require 4 code reviewers and an affidavit from management that the code is free and clean of bugs.
Exactly. I have seen someone demonstrate generating a spectrogram to find some audio anomaly; instead of analysing the spectrogram's values in a normal way and coming to some conclusion, he passed the image through a convolutional neural network (analysing pixels) and tried to explain how great his solution was... He has no clue what he is doing. I'm afraid we will see more of these "solutions" in the future.
Sometimes, it's blindingly obvious what you're about to write. Sometimes, nobody except you knows what you're about to write. Based on this, Copilot's suggestion will be somewhere in the range of convenient to not at all useful. As long as the obvious lines are frequent and as long as reading a suggestion is faster than typing it, Copilot is useful. If you're a zombie and you just accept every completion, you have a problem.
Writing code vs reading code: When I'm in the middle of developing something, I can write hundreds of lines a day. That does not happen often. But then I have to come back to that code, say a month later, and need to fix some bug. In that situation I'll spend an hour or two reading code (and maybe writing a bunch of _printf_ commands to figure out the edge-cases), and at the end of the day I'll write maybe 10 lines of code. The bug fix was a *lot* of reading and relatively little writing. And the nature of my job is I'll spend much more of my time fixing bugs than writing brand new code.
I’d like to mention that for me, churn is much higher at the start of a project, when I haven’t established the patterns I’d like to use. Often I’ll implement things one way, then realize it won’t fully fit my requirements, so I refactor. As a project matures, the established patterns have proven effective, and existing code doesn’t need to be changed to fit new ones as frequently. So maybe there are just more new projects entering the GitHub space, causing an increase in churn that’s probably typical of new projects.
As a learner, AI mainly serves the role of docs + Stack Overflow. It immediately gives an answer without belittling me, and it knows utility functions off-hand that I didn't know existed or couldn't find. I don't let it write code that I don't understand. If it does, I either delete it or look up whatever function I don't understand. It can also explain what's wrong with my code. I feel the follow-up is important here. If you ask questions until your problem is resolved, it's a crutch that hampers learning. If you prompt it until you understand, it speeds along learning.
Not my experience with Copilot in my limited time using it, but my experience with GPT-4 is close to what you said. Albeit I don't think it teaches very well - it often hallucinates ideas that fit your bias, rather than actually teaching you valuable insights like a senior could (or like tinkering with a debugger or reading good documentation). It's very dogmatic and can often waste your time leading you astray - and, worst case, let your problem-solving skills become rusty. Mind you, so can SO very often. With experience, I've found SO to be an incredibly unreliable source except for quick help with a poor library needed for prototyping in something terrible like Android.
Yeah, I think the most helpful tool for me would be something where I say the language I'm using and what I'm trying to do, and it just lists the libraries I need to import.
I've never had an AI teach me something above college-level. I tried, but I simply haven't been able to make it useful. Maybe I'm a bad "prompt engineer", but I just don't see how it can come close to replacing reading real documentation and mentorship from real experts. I mostly use it for boilerplate stuff.
who would've thought that stealing code from the web and automating a plagiarism bot would result in garbage code, amiright? /s modern techbro world is truly backwards 😂
"stealing". Despite basically all open source licenses prior to 2023 permitting it. Despite highly transformative usage of the data. Despite it being explicitly legal in countries like Japan. Etc.
You're only getting a pass on calling it "stealing" because it's Copilot, which *_does,_* for some reason, have a memory of its training data. You're above water, but not by much.
We definitely read much more code than we write, because:
1. Most work is around existing code, e.g. bug fixes, writing tests, small changes in behavior, refactors, enhancements (e.g. adding a new field to an API)
2. Before adding new code we have to look at existing code for reference, or at minimum find the right place for our code
3. In any reasonable professional context (job or open source) someone has to review the code we wrote - this alone means half our code interaction is spent reading
Consider a team setting where half the work is maintenance and six devs have to review all code submitted just to stay on top of their team codebase - that's already over 90% reading.
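The "over 90%" arithmetic at the end can be sketched with a small back-of-the-envelope model (the split and reviewer count are taken from the comment; the normalization is my own assumption):

```python
# Model: half of all work is maintenance (pure reading); the other half is
# split between writing a change and having it read by six reviewers.
reviewers = 6            # devs who read each submitted change
write_units = 1.0        # time spent writing a change (normalized)
review_reads = reviewers * write_units   # each reviewer reads roughly what was written
maintenance_share = 0.5                  # fraction of all work that is maintenance reading

read_fraction = maintenance_share + (1 - maintenance_share) * (
    review_reads / (write_units + review_reads)
)
print(f"{read_fraction:.0%} of total time spent reading")  # ≈ 93%
```

Under those assumptions, reading already dominates at roughly 93%, consistent with the comment's "over 90%".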
That "55% faster" is somewhat true for me, although I'd say only for specific cases. For most cases, Copilot autocomplete felt like a distraction, because it infers too much context that might not be available at the time we're writing the code. One way it's valid (without even considering typing speed) is that for some intermediate/common algorithms, Copilot could "feed" you a good boilerplate based on publicly available implementations. E.g. I couldn't remember the imperative details of some case-specific quicksort variant (quickselect, median-of-three, pivot+cache), but with proper hinting I could get a starting point to work with. The tricky part, however, is that if you don't really understand the underlying algorithm, you might end up botching the implementation.
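For reference, here is a minimal sketch of the kind of variant the comment names: quickselect with a median-of-three pivot. Function names and the choice of a Lomuto partition are my own; this is exactly the sort of imperative detail that's easy to botch from memory:

```python
def median_of_three(a, lo, hi):
    """Return the index of the median of a[lo], a[mid], a[hi]."""
    mid = (lo + hi) // 2
    trio = sorted([(a[lo], lo), (a[mid], mid), (a[hi], hi)])
    return trio[1][1]

def quickselect(a, k):
    """Return the k-th smallest element (0-indexed) of sequence a."""
    a = list(a)                  # work on a copy
    lo, hi = 0, len(a) - 1
    while True:
        if lo == hi:
            return a[lo]
        p = median_of_three(a, lo, hi)
        a[p], a[hi] = a[hi], a[p]       # move pivot out of the way
        pivot = a[hi]
        i = lo
        for j in range(lo, hi):         # Lomuto partition
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]       # pivot lands at its final index i
        if k == i:
            return a[i]
        elif k < i:
            hi = i - 1
        else:
            lo = i + 1
```

Subtle details like the pivot swap before partitioning, or whether `k` is 0- or 1-indexed, are precisely where a half-understood Copilot suggestion goes wrong silently.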
The "55% faster" number is actually not really representative. This number is based on a short study where people were asked to implement an HTTP server in JavaScript. The group with AI assistance completed this task 55% faster. arxiv.org/pdf/2302.06590.pdf
Sadly, I am in the season of "spend hours reading some guy's massive project to find one weird bug and then make a 1-line change." I hope to get back to adding features soon, because that's more satisfying for me.
I am a TA for software development for business informatics students, and the quality of assignment submissions has skyrocketed while exam quality has become abysmal. Copilot is a real hazard when you are learning the basics. The stuff we teach goes up to basic patterns in Java, so it's pretty much only the groundwork you need to later be able to read code. If you can't produce a simple Java class, some loops, etc. without resorting to pseudocode, then reading anything at all will be much more of a struggle. AI won't replace senior devs for quite some time, but for juniors and beginners it is a real danger - not only in terms of employment opportunities, but also when it comes to learning the fundamental skills.
Aren't you saying here that it's not good enough to teach but it is good enough to replace junior devs? Not sure both can be true. Why are we so quick to assume LLMs can replace devs? Devs who get fired, sure...
@@nickwoodward819 Having a tool do a thing for you doesn't mean you also learn how to do it yourself. A calculator is great at replacing someone whose main task was crunching numbers. It is awful at teaching someone how to do that number crunching in their head. Similarly an AI might be great at comprehending and working on simple assignment code for you, but not good at teaching you how to do so yourself. And you'll need those basic skills once you work on code that's too complex for the AI to handle.
@@cameron7374 Lucky then that I didn't say that :) Like you said, they're good tools - I never said they weren't. But ultimately they can't teach juniors for the exact same reason they aren't good enough to be juniors: They're pretty rigid, frequently wrong and need constant supervision. Granted that's *similar* to a junior, but not the same. For some reason we've been superficially wow'd into thinking otherwise.
It feels like something I finally have an advantage on. People seem to be trying to use it the wrong way. My code is still my code, just accelerated in being complete
Another confounding factor is that over the time period in question Covid happened. A lot of devs would have changed to work from home. It may be the case that junior devs who would ask another dev questions weren't able to as easily, so were more likely to check in code and ask someone to look over it or for other advice.
Uncle Bob said that the influx of new developers went on for years such that the number of devs doubled every 5 years. So the average experience level of devs is 2.5 years - a new "senior". This, and the quick change of tools/technologies, is constantly eating away at any mastery the older devs build up. Btw: yes, you also read more code than you write. Every time you go and check what a method you call actually does, or how something handles something, or where to place a new piece of code - all that time you are reading the existing code. It still feels like actively editing/adding code, but mostly you need to find the right spots. That's why clear/clean and understandable code is important.
Not being DRY has consequences, especially in JS and CSS shipped to the client - but not only there; the server side can also be affected at scale: more code to parse, more files to read from disk. That said, as you've stated many times, being too DRY is the issue of premature bad abstractions.
AI at code generation: meh results. AI at code explanation (and reverse engineering): really good! That has been my experience so far. However, if you beat it long enough it also produces good code, but in that same time you would probably have written it yourself.
I can certainly relate to the reading-10x-more-code thing, at least when I do Don't Starve Together modding, which as a hobby coder is probably the closest thing to working on a big project I've done. Certainly tons of legacy code, with the bonus that you can't even modify the code directly, and also no debugger - just a dev console to type print(whatever) into.
I am doing some CS studies rn, because I needed a line on my resume to actually get a dev job in France, and I swear I am worried about my fellow students. I haven't spoken a word about Copilot due to this worry, and I still see some of them using ChatGPT, and I wonder if they'll ever even get half the knowledge and competency they're supposed to acquire. To top it off, they're about 4 years younger than me, and they're the generation that got not only the new high school reform (and thus went through a fresh, unproven system) but also Covid during their high school years. Some of them are smart and capable, but I don't know if they'll reach half of their potential. 16:40 One of the first things I told them was to train to type faster; I don't think they practiced even a minute, and it's been around 6 months.
I feel very similarly; every day in CS classes I'm overwhelmed by the number of people who so openly talk about how their code was all AI generated. I had a group partner on a lab where we were doing WiFi socket programming, and he generated code for the server in a few seconds during class, while it took me a dozen minutes to write the code for the client. Then we spent an hour debugging his code to make it work, while it became clear he had no idea how sockets work. All he did was paste code errors into ChatGPT, while I went looking for documentation. It makes me seriously worried about the IT field.
This and GPT-4 only make me write my code 100% slower. The amount of time I have spent explaining the problem to it, with examples, and then cross-checking it against the documentation is actually more than what it would have taken to look it up myself. It sometimes adds unexplained complexity, and doesn't even consider nulls in data unless I tell it to while making code snippets.
Copilot is pretty bad outside of boilerplate code, but I was pleasantly surprised by GPT-4 last night. I still had to make some suggestions which significantly improved the code quality, but it was definitely faster than writing it myself.
I started to learn Go three days ago, and Copilot in Go is amazing; in TypeScript it gives you shit code all the time, but in Go it's the opposite. Perhaps it's because Go code is usually very repetitive and simple.
The Pearson correlation coefficient is between -1 and 1, so 0.98 is an insane correlation if there's enough (reliable) data. It basically means increasing the % of Copilot use increases the amount of mistake code, in almost a straight line. It doesn't say how steep that line is, though - just that, e.g., the median Copilot-usage data point is also (very close to) the median mistake-code point. The actual increase could be tiny, and again it's correlation not causation, or whatever.
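The point that r says nothing about steepness is easy to demonstrate: two lines with wildly different slopes both have a Pearson correlation of essentially 1. A small self-contained sketch (`pearson_r` is my own hand-rolled helper, to avoid any dependency):

```python
import statistics

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

xs = list(range(10))
steep = [100 * x for x in xs]     # slope 100
gentle = [0.01 * x for x in xs]   # slope 0.01
print(pearson_r(xs, steep), pearson_r(xs, gentle))  # both ≈ 1.0
```

Both relationships are perfectly linear, so r ≈ 1 in each case, even though the actual effect sizes differ by four orders of magnitude.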
I describe Copilot as a senior dev with an eidetic memory who is an outstanding teammate as long as he's sober, but you gotta watch it when he's on the sauce.
As a general rule, statistics on any company's sales page shouldn't be taken seriously, especially when one of them is an abstract thing like "fulfilled"
I think a recent paper has shown that LLMs suffer from fixed-point problems: i.e. an LLM is less accurate when its input data was generated by another LLM. So as the internet fills up with non-human text, that text is used to feed LLMs whose performance is worse than before... I am trying to find the reference.
The read/write ratio depends on the code base. Tools, greenfield projects, etc. are mostly writing. Legacy applications that have turned into a big ball of mud, where the devs are new, are mostly reading.
I don't know how I stumbled upon this channel, but I am fascinated by this view into the world of computers and math, which I am both very bad at and envious of the skilled in. I am curious as to how software translates to what hardware is actually doing. Like, I am guessing all code translates to zeros and ones, where zero and one mean an open or closed circuit?
17:00 - "code is read 10x more than it is written" is not about you, it's about the code. If you write code that lives for 10 years, people read that code for 10 years, but it is written only once. If your code lives for 30 years, oh boy, people will read it a million more times than it is written.
I am in College right now and they heavily advertise copilot and how it is free to use. These brain dead kids in class don't know how to code, and they think they know something when they make a half baked website with copilot. On the software engineering project, my team was the only one to actually deliver on what we promised. The reason why is because I have been coding since 2017. These kids probably started coding last year. 😂
As far as percentage of time reading vs writing code... It's just a difference of style! When I code, I will think very deeply about what I'm doing and not really start writing until I know pretty much exactly what I am going to do - then it takes like, 2 seconds to actually write out (assuming all my assumptions are correct, which they often aren't 😉). While others (like yourself, no doubt) are more about getting in and writing code as soon as possible to experiment and tinker and find the solution - I have a colleague who is the exact same way. It's a whole spectrum, I imagine This is a good thing, though! Different styles of coding are better suited to different kinds of tasks and having a diversity of coding styles available to a team will better ensure they are always best suited to deal with whatever issues might come up :)
About the code being read 10x more than being written, imagine what percentage of code you change when you open a file. Sometimes it's going to be 100%, but usually it's going to be very little. You open a file to see what that function does. You open a file to add a line to it. I write a lot of code too, sometimes, but usually I'll be editing a file, not creating a file, and in those existing files I'll be scanning a lot of names and structure.
Students where I live, at least, aren't allowed to use Copilot while learning, whatever the subject may be. But then on bigger projects they're allowed to turn Copilot on; Copilot is also not allowed during exams. So all it does where I live is help them create bigger projects, which I think is kind of nice. Could be different at other schools, of course.
A professor of mine who also teaches in the CS faculty told me that projects now are better than ever but people are eating shit in the exams at a rate never before seen. He believed it was because of AI assistance when doing projects.
@@andresmartinezramos7513 Interesting, I would have thought it would help the students, since you have to learn the basics and understand everything before you are allowed to use AI. Well, I guess you run into a lot of problems during a project that AI just solves for you. But I honestly thought it helped the learning process, not hurt it.
@@zivkobabz The thing is that he suspected that his students were using the AI without knowing the basics. There is nothing to stop them from using AI at home, but there is in an exam setting.
@@andresmartinezramos7513 Ah, that is correct - they could be cheating themselves on purpose, even if the teacher says it's going to hurt them if they use it while learning the basics.
As a student, Copilot is the worst. I see so much code where I'm just like "why would you do this?", and they have no response - it's because they don't know, they just used AI. Lmao, at least I'll know how to code properly compared to them when I graduate.
The future is looking sad for people learning to program, imo. I'm not that old, but I remember learning to program 10 years ago by reading books and written tutorials online, and it was amazing.
Personally, I really dislike having to read learning material. Most of the time I'm not actually interested in the learning part itself, I'm interested in building stuff and having the skills I need for that. So if people are able to do that, I don't think it'll be all that bad.
16:42 I think the quote is about the code that’s written, not the person writing it. I can easily imagine if you write a new feature, you spent 90%+ of your time writing rather than reading. But over the lifetime of the code itself, I would totally believe 10x more hours were spent *by others* reading over it as needed.
From someone with 4 yrs of exp: Copilot is very good at repetitive tasks, like writing something similar to what you've already written (i.e. the DB access layer of a program); it's also good at writing base documentation to build upon. For actual code it's: 5% amazing code, 10% serviceable, 20% a little bug in there that you won't notice until you've read it 10 times, 50% not really what I want, and 15% "go home, you're drunk".
As a web developer, I really only use Copilot for basic frontend shit, to quickly implement design ideas and build rough layouts. But 90% of the generated code never makes it to production.
Btw: when using feature branches, committing and pushing in small steps means modifying already-pushed code. This could lead to false-positive "churn", I guess.
That was/is one of my hopes for AI: that we could perhaps be less DRY (refactor everything that has more than 3 words in common into functions ;) (like "these lines at 5 different places almost do the same thing with slight variation, let's just make it a function with a "couple" of arguments..." :), and then maybe we'd be able to actually read through code without at some point jumping 20 function calls deep while holding 27 template parameters and 8 actual arguments in your head, trying to figure out what an abstract/interface method call will actually call...
And they said "We only need you to add 1 new ATA command to this FireWire-attached SATA black box" without my ever having seen the code... how hard could it be :) (Good luck debugging a 40MHz microcontroller with 16kB of RAM that has to translate SCSI commands to ATA at a 300 MBytes/s data transfer rate, in case that was your idea of how to figure out what a virtual method call actually does. ;) (...and by the way, it took around 30 minutes to compile the code - each time! not just the first time - partially thanks to nearly every class and function being templated.)
Maybe a skill issue :), or a tooling issue, but I don't want to touch projects like that ever again.
My fav thing about copilot is not the code. I'm writing a game mod and sometimes it will just know IDs for mobs and NPCs, saving me writing out a long list where I have to look up each and every one. If someone made a copilot which only autocompleted constants from various games I would be the happiest man alive.
So, some time ago I read a blog or something that said that when you download VS Code from the installer, you implicitly accept sharing some data, while if you build it from the open-source repo there is no such data gathering. Don't know if it was really true, but it seems to kinda make sense?
Yes, telemetry is the sole reason we have VSCodium as a "clean" alternative build. Even the name is a hint (an homage to Chrome vs Chromium, which strips out Google's telemetry as much as possible). You can disable extended telemetry in settings, but I can't remember whether that stops all telemetry or still sends context-free data.
Honestly, I use it for “advanced rubber ducking” at work. Maybe it generates some worthwhile code or boilerplate, but ultimately I fall back to reading vendor docs or peeking at the class. Sometimes it’s good for quickly summarizing your code in a comment…sometimes!
Keep in mind, we are in the honeymoon period where the people using & overseeing the code that Copilot is generating, are usually competent & experienced devs themselves. They can catch the bugs & security problems because they recognize them. What happens in 1 generation when we have vastly fewer senior devs who understand code in a deep way? And what happens when Copilot trains on older copilot generated code? Quality is going to degrade over time & the ability to debug & catch mess in review will be a fading skill. Yikes.
I was trying to make a small thing to automate a config format conversion and used the Bing AI assistant, which is basically Copilot (it's even what it says when you ask it programming questions). I asked a question about a Yaml class in a Java lib, and it was clear that it completely did not understand how that class worked.
When I first saw it in my second year of college, I was like "that's good, but not until I get enough experience to know what it is typing." On the other hand, a friend of mine was just installing any cool extension and using Copilot, and when I asked him what it was doing, he had no answer.
To me, AI assistance is like a drunk senior friend with a bit of a bad memory, but one who is very good at reading and seeking references in documentation, and at reading large sets of output for debugging.
Possible reasons you read 10x more code than you write:
- Bad modularity: no matter what you change, even the most distant and seemingly unrelated parts of a big project could break because of it.
- You are using a game engine or some other huge proprietary piece of code. Some game companies even have teams to write the tools everyone else uses, and sometimes they never pick back any bugfixes, so every team around the world keeps fixing the same bugs again and again.
- You've joined a very old product and you need to learn a lot about it before you can extend it. The older it is, the higher the chances the other devs have already left the company, or maybe even died of natural causes. Also, the older it is, the more bugs you inherit.
- Even a newer project can be mismanaged to such an extent that it's a mess and nothing can be added without needing to fix half the project first.
No, the 55% faster claim can easily be inferred from typing speed and how much of the code you're letting Copilot write for you instead. That one's not hard at all. If they wanted to account for how much less R&D or how many fewer rewrites, that number would probably be much higher, actually.
Remember just a few months ago when devs without any understanding of AI thought AI would take their dev jobs and I was like the only person in the world that said the opposite because AI mostly produces trash because there's no ACTUAL intelligence in there?
GitHub's CEO basically valued each line of code written at more than $13,500 per line to reach a valuation of $1.5 trillion being added to the economy by the use of Copilot. I'm willing to bet that the total value of the entire volume of code written by Copilot is worth far less than $110 million, a rough valuation of $1 for every line of code added, updated, deleted, copy/pasted, find/replaced, moved, and no-op. I further assume that most of this code had absolutely no commercial value, so the valuation is so far-fetched that no rational person could ever take it seriously.
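The comment's arithmetic checks out as a back-of-the-envelope calculation (the $1.5 trillion and $13,500-per-line figures are taken from the comment as-is, not independently verified):

```python
# Figures below come from the comment itself, not from verified sources.
claimed_added_value = 1.5e12   # the CEO's "$1.5 trillion added to the economy"
dollars_per_line = 13_500      # the implied valuation per line of code

implied_lines = claimed_added_value / dollars_per_line   # ≈ 1.1e8 lines
skeptical_total = implied_lines * 1.0                    # counter-estimate: $1 per line

print(f"{implied_lines:,.0f} lines implied; ${skeptical_total:,.0f} at $1/line")
```

Roughly 111 million lines, so valuing each at $1 instead of $13,500 lands near the comment's "$110 million" figure, about four orders of magnitude below the claimed $1.5 trillion.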
0:19 Yesn’t - these sorts of papers should really have a DOI, which would outlive whatever medium they were distributed through and its corresponding URL, whether blog posts or PDFs.
I'll spend days writing 1.5k new net lines and get a rubber stamp approval in like 30 seconds...how many shops out there actually budget for devs to do quality reviews? Cause the push for new features is constant from what I've done in 7 years as a dev.
As far as fulfillment, the thing that makes me want to stop and work on something else is consistently the most boring 20% of a project: either boilerplate or uninteresting glue that should've been a library function. Copilot does great with that (because I'm probably the 10,000th person to ask...), which is huge. It also does great if you need a minimum implementation of something unrelated to what you're working on, just to make things compile and run - Copilot makes it cheap to slap some throwaway code there while you're working, code you fully know you're going to blow away later.
DRY should be a promotion process:
* start by DRYing within the same file that uses the logic
* keep copy/pasting until there are at least 3+ use cases
* promote to DRY in a common file after enough files are using it
* keep copy/pasting between packages/modules/projects until at least 3+ distinct packages/modules/projects are using it
* promote to DRY in importable packages/modules/projects after enough packages/modules/projects are using them
* keep promoting through the hierarchy (e.g. organization repo)
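As a toy illustration of the first promotion step (all names here are hypothetical): a check that was originally copy/pasted inline gets promoted to one module-level helper only once the third call site appears:

```python
def validate_name(name):
    """Shared check, promoted to one place only after 3+ call sites had copied it."""
    if not name or len(name) > 64:
        raise ValueError("bad name")

def create_user(name):            # call site 1 (originally had the check inline)
    validate_name(name)
    return {"name": name}

def rename_user(user, name):      # call site 2 (second copy/paste, still tolerable)
    validate_name(name)
    user["name"] = name
    return user

def import_user(record):          # call site 3: the use case that justified promotion
    name = record.get("name", "")
    validate_name(name)
    return {"name": name}
```

The rule-of-three payoff: by the third use you know which parts of the duplicated code actually vary, so the extracted helper's signature is informed by real cases rather than guessed up front.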
I would like to add to the discussion that you might think Copilot code is good code when learning, and once you have learned wrong patterns, it might be hard to get rid of them. Therefore, for beginners, I would suggest that reading and seeing well-written code is really important. It's like the paper where they tested how much code from bad or outdated tutorials and Stack Overflow articles ended up in projects: those mistakes multiply with each new developer seeking information.
is there any influence from employee churn at tech companies? I see news about swaths of people getting fired, could that not have a similar impact? Perhaps someone can explain if this is a factor.
I'm a Jr web dev. I use it for writing annoying things like CSS utility styles. But I generally turn it off: by the time I type just enough, then wait, then check if its suggestion is correct, then tab, I could've written it myself, without the weird limbo and the forced decision of checking its accuracy.
If ChatGPT's primary use is producing boilerplate code, then there is a problem with the programming paradigm. We should be striving for ZERO boilerplate code. Boilerplate code is a symptom of a flawed development platform. We are severely due for a massive revamp of how we write code. We keep banging the same two sticks together expecting better results. We need a game changing disruptor on the level of General Relativity disrupting Newtonian physics. I'm talking Star Trek Next Generation futuristic shit.
Well, when I was contracting for two years on one huge project, my whole task was to go through hundreds of bug reports and fix the ones that were actually bugs. I spent orders of magnitude more time searching around and reading the code to figure out the best way to fix a bug than actually writing the code that was the fix. That's after having run the code and observed the symptoms. Only after all that time was I trusted to actually build new features for that thing.
In year 9 I had my first introduction to programming with my year 9 IT class being to build a site in Weebly and then a game in gameMaker... I hated it. My next real introduction was year 11 IT which I completed in year 10 - html, css, javascript (no advanced text editor allowed) and Visual Basic... Suffice to say at this point I thought programming was shit and went back to my calculus, but my third and final "intro" to programming was the coding train, and I can say with confidence that is the reason I have a programming job today. I hope the future generations also find passion from a youtuber or person they look up to and find joy in the creation, rather than being bogged down by the countless libraries and deployment pipelines that are required to even qualify for a job nowadays.
It can be really helpful, but it screws anything over with short-sighted implementations once your project exceeds a certain size. It will start adding data structures and controls that aren't needed and will increasingly hallucinate from the code you've already written.
It was trained on an internet full of meh code
The Phi model is much better at code than Copilot
This is why they push it to companies. To train it on high quality codebases. And to get backpropagation through developer adaptions.
There have been studies showing AI trained on AI output deteriorates rapidly in quality.
The more it's used the more its own output goes into GitHub.
@@McZsh Companies have very strict data rules; enterprise solutions aren't trained on their code
@101Mant yeah was going to say the same thing
Me: Know what’s _not_ my favourite thing? Debugging other people’s code.
Copilot: Hey, how would you like to spend most of your dev time debugging other people’s code?
truth
lol 😂
It's even worse: it's synthesized from several people's code, so it's not even consistent.
I love when you know what you are doing and it's just suggesting clearly wrong things the entire time, and it's like, stop trying to jebait me, Copilot!
@@shawnington even trying to use it to learn something totally new, you can catch obvious mistakes, and sometimes it will continue to repeat them ad nauseam.
we did it guys, we secured our future in the industry!
we will be replaced, it is just a matter of time. Coding will be done in visual composers and templates will be generated on demand
Cybersecurity bros are partying.
@user-lw5wm2hg7s Ironically, it seems that marketing people will be the first in line to be replaced by AI. Both work on numbers and statistics anyway
@@MatichekTH-cam The difference between visual composers and text code is that a single visual node is ~equivalent to a single line of code. If visual was so much better, programmers would have used it themselves.
"Github copilot better than junior"
Dude who the f did you hire?
A 22 year old college graduate who just spent the latter half of their degree using Copilot and ChatGPT to complete their assignments.
@@felixjohnson3874 more like a 3 month boot camp grad with 2 more months of “self teaching (TM)”
@@felixjohnson3874 That's one smart kid. Not wasting brain cells is a virtue. Should qualify for lead product manager.
@nexovec is 100% right.
me
Got my first car around '95. It had a manual transmission. It was awesome, my car felt like an extension of my body. Bought my first car with an automatic transmission in 2012. While the experience wasn't as fun, it was still awesome because I no longer dreaded my 45m+ commute in stop and go traffic. Bought my first car with some AI self driving in 2022. It was awesome because after a long drive I didn't feel drained. I strongly think I'm a better AI supervisor because I have decades of experience driving without it. I worry about people getting into any skill or craft using automation without first understanding the older, manual processes. A few months ago I almost got into a head on collision in a wonky intersection. By the time I knew what had happened, I had already overridden the AI, swerved out of the path of the oncoming car, and came to a safe stop all on instinct. Don't think a teenager using my car would've been able to do the same. I think developers will see the same.
Absolutely! There is no replacement (yet) for years of reasoning through code. I use Copilot to help me with coding, but find that it occasionally takes me down the wrong path. My years of experience coding allow me to notice this and redirect it, were I less experienced, I would end up letting it write me code that can run, but has dangerous corner cases.
Exactly my thoughts on it. It's similar to the argument about resources during exams and stuff in schools: students are going to have those resources in the workplace, why not give them those during tests? Problem is now they rely on regurgitating from the resources and have no genuine knowledge for themselves. The resources act as a crutch, rather than a reference.
An AI programming assistant is great when you know what you're looking for: it can hopefully get something close enough that you just need to tweak it a bit to make it work for your specific situation. But if it's wrong? Well, you better be able to tell.
Yeah, I think copilot is only useful for experts who can recognize the code as good or trash.
I think companies are gonna start forbidding copilot and AI for any non senior users. You are super right about experience.
@@BusinessWolf1 Mine already does -- more for security reasons than bad code quality, though
8:52 Don't worry about that, only like 40% of the students in my just-completed Computer Science class used ChatGPT and Copilot exclusively to crawl through the coursework.
I suddenly feel very 'I guess I'm alright' about myself.
Imagine not loving CS enough to do your own work. Why even take CS classes if you're not interested in the subject?
@@7th_CAV_Trooper Money, the 6 fig salary as a junior myth has flooded CS with many unpassionate people sadly
@@7th_CAV_Trooper false hope and brain damage.
The CS field is filled with 6 fig dreams.
iPad kids?
"You live in a bubble of 0 application users" 🔥🔥🔥
"I hate to break this to you but you live in a bubble of zero user applications." 12:15
Best comment
facts, cause most projects are legacy, be it 20 year legacy or 6 month legacy (when vite was not popular yet) :)
More CoPilot code means MORE NEED FOR ENTRY LEVEL JOBS TO FIX UNMAINTAINABLE CODE PROBLEMS!! WOO!!
Job listing: Junior Dev Position for helping fix unmaintainable code problem! Experience requirement: 6 years
@@starmechlx PrOmPt EnGiNeEr
Entry level is where copilot causes the most problems. New devs aren't often aware of industry standards, best practices, or good, well engineered solutions. They're better off getting it wrong on their own in hopes they ultimately solve it than to lean on a LLM they don't understand.
If you work in a large legacy system, with not much documentation, and a ton of business rules enforced through copied and pasted code, and need to interact with it respecting existing rules, you do end up reading 10x more than you write, because no function or other abstraction sums up what you need done, separated from details of the most user facing part of the old system. (could be UI forms, could be http controllers etc). It takes even more reading to decide the least damaging place (or if you're lucky it feels like a search for the most appropriate place) to make changes.
Man, do you work with me?! Also, finding the best place to put your change to ensure the least amount of testing resources are needed... even more important than how it performs lol. got to love it.
@@melski9205 70% of the time it just goes in the nearest util file.
he also read that line wrong, ironically. It didn't say "people spend more time reading", it said "code spends more time being read", because multiple people will need to read that code
I work in consulting world and I spent 90% of my time reading code. On a good day, I might spend 3 productive hours coding if there isn't a ton of meetings. If there's 2-4 meetings that day, I might get 1 hour to code. More important than writing code is making code easy to maintain and well documented.
17:07 For every line of code written at my office at least 2 people need to read it during review. Then an additional 2 people need to read it for testing and review. So I kinda believe that in general a lot more time is spent on reading code compared to writing it.
You're right but there's a caveat. A lot more time is spent thinking about code than reading or writing it.
Yeah, and I'd be surprised if you didn't re-read your code and the surrounding code it interacts with as you're writing. Maybe 10 to 1 is too high, but it is undeniable that you read more than you write.
No, two people need to read every line, not two new readers for every line. That would be 200 reviewers for 100 lines of code. It's still just 2.
@@ConernicusRexOk but have you considered they might be building voyager 3?
so nobody puts an LGTM on code reviews? Nobody reads code. Nobody spends time thinking "what if this implementation is wrong?" or "what if I have to maintain this code in 6 months, when the requirements change?" Doesn't matter if your branch protections requires 4 code reviewers and an affidavit from management that this code is free and clean of bugs.
When AI tools get better, fewer people understand what they are doing when they use them. Sounds like a catastrophe waiting to happen.
Sounds like a good opportunity to shine
Exactly. I have seen someone demonstrate generating a spectrogram to find some audio anomaly; instead of analysing the spectrogram's values in a normal way and coming to some conclusion, he passed the image through a convolutional neural network (analysing pixels) and tried to explain how great his solution was... he has no clue what he is doing. I'm afraid we will see more of these "solutions" in the future.
Sometimes, it's blindingly obvious what you're about to write. Sometimes, nobody except you knows what you're about to write. Based on this, Copilot's suggestion will fall somewhere in the range from convenient to not at all useful. As long as the obvious lines are frequent, and as long as reading a suggestion is faster than typing it, Copilot is useful. If you're a zombie and you just complete everything, you have a problem.
Writing code vs reading code: When I'm in the middle of developing something, I can write hundreds of lines a day. That does not happen often. But then I have to come back to that code, say a month later, and need to fix some bug. In that situation I'll spend an hour or two reading code (and maybe writing a bunch of _printf_ commands to figure out the edge-cases), and at the end of the day I'll write maybe 10 lines of code. The bug fix was a *lot* of reading and relatively little writing. And the nature of my job is I'll spend much more of my time fixing bugs than writing brand new code.
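The printf-style bug hunt described above might look like this (the function, the data, and the bug are invented for illustration): a couple of temporary prints reveal the edge case, and the eventual fix is only a few lines after a day of reading.

```python
def normalize(prices):
    """Scale a list of prices into the range [0, 1]."""
    lo, hi = min(prices), max(prices)
    # Temporary tracing while hunting the bug; removed once it's fixed.
    print(f"DEBUG normalize: lo={lo} hi={hi} n={len(prices)}")
    span = hi - lo
    if span == 0:
        # The edge case the debug prints revealed: a flat price list
        # used to divide by zero here. This guard is the whole fix.
        return [0.0 for _ in prices]
    return [(p - lo) / span for p in prices]
```

The ratio matches the comment's experience: most of the effort is reading and tracing to locate the edge case, while the fix itself is a tiny guard clause.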
I’d like to mention is that for me, churn is much higher at the start of a project when I haven’t established the patterns I’d like to use. Often I’ll implement things one way, then realize it won’t fully fit my requirements, so I refactor. As a project matures, the established patterns have proven effective, and existing code doesn’t need to be changed to fit new ones as frequently. So maybe there are just more new projects entering the GitHub space causing an increase in churn that’s probably typical of new projects
As a learner, AI mainly serves the role of docs + Stack Overflow. It immediately gives an answer without belittling me, and it knows utility functions off-hand that I didn't know existed or couldn't find. I don't let it write code that I don't understand. If it does, I either delete it or look up whatever function I don't understand. It can also explain what's wrong with my code. I feel the follow-up is important here: if you ask questions until your problem is resolved, it's a crutch that hampers learning. If you prompt it until you understand, it speeds along learning.
Not my experience with Copilot in my limited time using it, but my experience with ChatGPT-4 is close to what you said. Albeit I don't think it teaches very well - it often hallucinates ideas that fit your bias rather than actually teaching you valuable insights like a senior could (or like tinkering with a debugger or reading good documentation). It's very dogmatic, can often waste your time leading you astray, and worst case lets your problem-solving skills become rusty. Mind you, so can SO, very often. With experience, I've found SO to be an incredibly unreliable source, except for quick help with a poor library needed for prototyping in something terrible like Android.
yeah, I think the most helpful tool for me would be something where I say the language I'm using and what I'm trying to do, and it just lists the libraries I need to import
I've never had an AI teach me something above college-level. I tried, but I simply haven't been able to make it useful. Maybe I'm a bad "prompt engineer", but I just don't see how it can come close to replacing reading real documentation and mentorship from real experts. I mostly use it for boilerplate stuff.
who would've thought that stealing code from the web and automating a plagiarism bot would result in garbage code, amiright? /s
modern techbro world is truly backwards 😂
Thank you. "Plagiarism bot" is exactly the term I've been looking for to describe the modern "generative AI"
glad that all coders only write their own code
@@matt-dx1jo At least Stack Overflow has a voting system where you can downvote stupid answers. Copilot doesn't.
"stealing". Despite basically all open source licenses prior to 2023 permitting it. Despite highly transformative usage of the data. Despite it being explicitly legal in countries like Japan. Etc.
You're only getting a pass on calling it "stealing" because it's Copilot, which *_does,_* for some reason, have a memory of its training data. You're above water, but not by much.
AI assistance has so far been an accelerated technical debt generator, in my experience.
Neatly said.
We definitely read much more code than we write, because
1. Most work is around existing code, e.g. bug fixes, writing tests, small changes in behavior, refactors, enhancements (e.g. adding a new field to an API)
2. Before adding new code we have to look at existing code for reference, or at the minimum find the right place for our code
3. In any reasonable professional context (job or open source) someone has to review the code we wrote - this alone means half our code interaction is spent reading
Consider a team setting where half the work is maintenance and six devs have to review all code submitted just to stay on top of their team codebase, that's already over 90% reading
That "55% faster" is somewhat true for me, although I'd say only in specific cases. In most cases, Copilot autocomplete felt like a distraction because it infers too much context that might not be available at the time we're writing the code.
One way it's valid (without even considering typing speed) is that for some intermediate/common algos, Copilot can "feed" you good boilerplate based on publicly available implementations. E.g. I couldn't remember the imperative details of some case-specific quicksort variant (quickselect, mo3, pivot+cache), but with proper hinting I could get a starting point to work with.
The tricky part, however, is that if you don't really understand the underlying algo, you might end up botching the implementation.
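For reference, quickselect — one of the variants mentioned above — is exactly the kind of algorithmic boilerplate worth understanding before accepting a generated version. A rough sketch (this is a generic textbook formulation, not anyone's Copilot output):

```python
import random

def quickselect(items, k):
    """Return the k-th smallest element (0-indexed) in expected O(n).

    Partition around a random pivot, then recurse (here: iterate) into
    only the side that contains the k-th element, instead of sorting
    everything as quicksort would.
    """
    items = list(items)
    while True:
        pivot = random.choice(items)
        lows = [x for x in items if x < pivot]
        pivots = [x for x in items if x == pivot]
        highs = [x for x in items if x > pivot]
        if k < len(lows):
            items = lows                 # answer is among the smaller values
        elif k < len(lows) + len(pivots):
            return pivot                 # the pivot itself is the answer
        else:
            k -= len(lows) + len(pivots) # skip everything <= pivot
            items = highs
```

The subtle parts (handling duplicates via the `pivots` bucket, adjusting `k` when descending into `highs`) are precisely where a half-understood generated implementation tends to go wrong.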
The "55% faster" number is actually not really representative. This number is based on a short study where people were asked to implement an HTTP server in JavaScript. The group with AI assistance completed this task 55% faster. arxiv.org/pdf/2302.06590.pdf
Sadly I am in the season of "spend hours reading some guys massive project to find one weird bug and then make a 1 line change." I hope to get back to adding features soon because it is more satisfying for me.
I am a TA for software development for business informatics students, and the quality of assignment submissions has skyrocketed while the exam quality has become abysmal. Copilot is a real hazard when you are learning the basics. The stuff we teach goes up to basic patterns in Java, so it's pretty much only the groundwork you need to later be able to read code. If you can't produce a simple Java class, some loops, etc. without resorting to pseudo code, then reading anything at all will be much more of a struggle.
AI won't replace senior devs in quite some time but for juniors and beginners it is a real danger. Not only in terms of employment opportunities but also in when it comes to learning the fundamental skills.
Aren't you saying here that it's not good enough to teach but it is good enough to replace junior devs?
Not sure both can be true. Why are we so quick to assume LLMs can replace devs? Devs who get fired, sure...
@@nickwoodward819 Having a tool do a thing for you doesn't mean you also learn how to do it yourself.
A calculator is great at replacing someone whose main task was crunching numbers. It is awful at teaching someone how to do that number crunching in their head.
Similarly an AI might be great at comprehending and working on simple assignment code for you, but not good at teaching you how to do so yourself. And you'll need those basic skills once you work on code that's too complex for the AI to handle.
@@cameron7374 Lucky then that I didn't say that :)
Like you said, they're good tools - I never said they weren't. But ultimately they can't teach juniors for the exact same reason they aren't good enough to be juniors: They're pretty rigid, frequently wrong and need constant supervision. Granted that's *similar* to a junior, but not the same. For some reason we've been superficially wow'd into thinking otherwise.
That's why the ones before Google are even better with the basics of C and Linux internals.
It feels like something I finally have an advantage on. People seem to be trying to use it the wrong way. My code is still my code, just accelerated in being complete
Another confounding factor is that over the time period in question Covid happened. A lot of devs would have changed to work from home. It may be the case that junior devs who would ask another dev questions weren't able to as easily, so were more likely to check in code and ask someone to look over it or for other advice.
16:23 My hands shake a lot, so being able to tab just to complete the sentence I was already going to type is really useful.
Uncle Bob said that the influx of new developers has been such for years that the number of devs doubles every 5 years. So the average experience level of devs is 2.5 years - a new "senior".
This, and the quick change of tools/technologies is constantly eating away at any mastery that the older devs are building up.
Btw: yes, you also read more code than you write. Every time you go and check what a method you call actually does, or how something handles something, or where to place a new piece of code, you are reading the existing code. It still feels like actively editing/adding code, but mostly you need to find the right spots. So clear/clean and understandable code is important.
Best course I ever did in my programming journey was a touchtyping course.
Not being DRY has consequences, especially in JS and CSS shipped to the client, but not only; the server side can also be affected at scale: more code to parse, more files to read from disk. That said, as you stated many times, being too DRY is the issue of premature bad abstractions.
Am I right to parse "helps you write code 50% faster" as "makes you think half as much"?
Ai at code generation: meh results
Ai at code explanation (and Reverse engineering): really good!
That has been my experience so far. However, if you beat it long enough it also produces good code, but in that same time you would probably have written it yourself.
I can certainly relate to the reading-10x-more-code thing, at least when I do Don't Starve Together modding, which as a hobby coder is probably the closest thing to working on a big project I've done. Certainly tons of legacy code, with the bonus that you can't even modify the code directly, and also no debugger, just a dev console to type print(whatever) into.
Nice, I love to play DST with mods and my friends
I am doing some CS studies rn, because I needed a line on my resume to actually get a dev job in France, and I swear I am worried about my fellow students.
I haven't spoken a word about copilot due to this worry, and I still see some of them using chat gpt, and I wonder if they'll ever even get half the knowledge and competency they're supposed to acquire.
To top it off, they're about 4 years younger than me, and they're the generation that got not only the new highschool reform (thus went through a fresh, unproved system) but also covid during their highschool time.
Some of them are smart, capable of things, but I don't know if they'll reach half of their capacities.
16:40 One of the first things I told them was to train to type faster. I don't think they did even a minute of it, and it's been around 6 months.
I feel very similarly; every day in CS classes I'm overwhelmed by the number of people who so openly talk about how their code was all AI generated. I had a group partner on a lab where we were doing WiFi socket programming, and he generated code for the server in a few seconds during class, while it took me a dozen minutes to write the code for the client. Then we spent an hour debugging his code to make it work, and it became clear he had no idea how sockets work. All he tried to do was paste code errors into ChatGPT, while I went looking for documentation. It makes me seriously worried about the IT field.
This, and ChatGPT-4 only makes me write my code 100% slower. The amount of time I have spent explaining the problem to it, with examples, and then cross-checking it against the documentation is actually more than what I could have looked up myself. It sometimes adds unexplained complexity and doesn't even consider nulls in data unless I tell it to while making code snippets.
Copilot is pretty bad outside of boilerplate code, but i was pleasantly surprised by GPT 4 last night. I still had to make some suggestions which significantly improved the code quality, but it was definitely faster than writing it myself.
I started to learn Go three days ago, and Copilot in Go is amazing; in TypeScript it gives you shit code all the time, but in Go it's like the opposite.
Perhaps it's because Go code is usually very repetitive and simple.
The Pearson correlation coefficient is between -1 and 1, so 0.98 is an insane correlation if there's enough (reliable) data. It basically means increasing the % of Copilot use increases the amount of mistake code in almost a straight line.
It doesn't say how steep that line is, though, just that e.g. the median Copilot-usage data point is also (very close to) the median mistake-code point. The actual increase could be tiny, and again it's correlation, not causation, or whatever.
I describe Copilot as a senior dev with an eidetic memory who is an outstanding teammate as long as he's sober, but you gotta watch it when he's on the sauce.
As a general rule, statistics on any company's sales page shouldn't be taken seriously, especially when one of them is an abstract thing like "fulfilled"
I think a recent paper has shown that LLMs suffer from fixed-point problems: i.e. an LLM is less accurate when its input data was generated by another LLM. So as the internet fills up with non-human text, this text is used to feed LLMs whose performance is worse than before... I am trying to find the reference.
It's like the story about how many bears the Soviets launched into space: LLMs produce disinformation faster than humans can correct it.
I often find myself agreeing more with blue Theo than with yellow Theo. This video is no exception.
I have found it great for writing C#/.Net code. Sometimes needs tweaks but saves a huge amount of typing and boilerplate.
the read write ratio depends on the code base. tools, greenfield, etc is mostly writing. legacy applications that has turned into a big ball of mud and the devs are new is mostly reading.
I don't know how I stumbled upon this channel, but I am fascinated about this view into world of computers and math which I am both very bad at, but envy the skilled.
I am curious as to how software translates to what the hardware is actually doing. Like, I am guessing all code translates to zeros and ones, where zero and one mean an open or closed circuit?
Code gets translated into CPU instructions, which the CPU implements with 0s and 1s and logic gates.
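You can see one step of that translation yourself. As an illustrative sketch: CPython compiles source into bytecode, a flat list of simple instructions the interpreter steps through — one level above the CPU's own machine code, but the same idea (exact opcode names vary by Python version).

```python
import dis

def add(a, b):
    return a + b

# Print the interpreter-level instructions this function compiles to,
# e.g. LOAD_FAST (push an argument), an add opcode, RETURN_VALUE.
for instr in dis.get_instructions(add):
    print(instr.opname)
```

For native languages like C, the same role is played by the compiler emitting actual CPU instructions, which the silicon then executes as voltages through logic gates.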
I agree that the study doesn't seem to explore what other pressures exist around the same time. It just points to it being AI as the issue.
17:00 - "code is being 10x more read than written" is not about you, it's about code. If you write code that lives for 10 years, then people read this code 10 years, but it is written only once. If your code lives for 30 years oh boy people will read it 1million more times than it is written.
I get so used to prime reacts I can't react to anything by myself anymore.
I am in College right now and they heavily advertise copilot and how it is free to use.
These brain dead kids in class don't know how to code, and they think they know something when they make a half baked website with copilot.
On the software engineering project, my team was the only one to actually deliver on what we promised. The reason why is because I have been coding since 2017. These kids probably started coding last year. 😂
As far as percentage of time reading vs writing code... It's just a difference of style! When I code, I will think very deeply about what I'm doing and not really start writing until I know pretty much exactly what I am going to do - then it takes like, 2 seconds to actually write out (assuming all my assumptions are correct, which they often aren't 😉). While others (like yourself, no doubt) are more about getting in and writing code as soon as possible to experiment and tinker and find the solution - I have a colleague who is the exact same way. It's a whole spectrum, I imagine
This is a good thing, though! Different styles of coding are better suited to different kinds of tasks and having a diversity of coding styles available to a team will better ensure they are always best suited to deal with whatever issues might come up :)
About the code being read 10x more than being written, imagine what percentage of code you change when you open a file. Sometimes it's going to be 100%, but usually it's going to be very little. You open a file to see what that function does. You open a file to add a line to it. I write a lot of code too, sometimes, but usually I'll be editing a file, not creating a file, and in those existing files I'll be scanning a lot of names and structure.
Students where I live, at least, aren't allowed to use Copilot while learning, whatever the subject. But on bigger projects they're allowed to turn Copilot on; it's also not allowed during exams. So all it does here is help them create bigger projects, which I think is kind of nice. Could be different at other schools, of course.
A professor of mine who also teaches in the CS faculty told me that projects now are better than ever but people are eating shit in the exams at a rate never before seen. He believed it was because of AI assistance when doing projects.
@@andresmartinezramos7513 Interesting, I would have thought it would help the students, since you have to learn the basics and understand everything before you are allowed to use AI. Well, I guess you run into a lot of problems during a project that AI solves for you. I honestly thought it helped the learning process, not hurt it.
@@zivkobabz The thing is that he suspected that his students were using the AI without knowing the basics. There is nothing to stop them from using AI at home, but there is in an exam setting.
@@andresmartinezramos7513 Ah that is correct they could be cheating themselves on purpose, even if the teacher says it's going to hurt them if they use it for while learning basics.
just subscribed after watching a few of your videos. very entertaining and informative. long time tech geek here
Why does Prime always select text from the second-last character to the second character? Is there some issue with selecting it all?
As a student, Copilot is the worst. I see so much code where I'm just like "why would you do this?" and they have no response, because they don't know; they just used AI. Lmao, at least I'll know how to code properly compared to them when I graduate.
Was getting déjà vu watching this video, then I remembered Theo did a video on the same topic😅
The theo prime loop
The future is looking sad for people learning to program, imo. I'm not that old, but I remember learning to program 10 years ago by reading books and written tutorials online, and it was amazing.
Personally, I really dislike having to read learning material. Most of the time I'm not actually interested in the learning part itself, I'm interested in building stuff and having the skills I need for that.
So if people are able to do that, I don't think it'll be all that bad.
16:42 I think the quote is about the code that’s written, not the person writing it. I can easily imagine if you write a new feature, you spent 90%+ of your time writing rather than reading. But over the lifetime of the code itself, I would totally believe 10x more hours were spent *by others* reading over it as needed.
From someone with 4 yrs of exp: Copilot is very good at repetitive tasks, like writing something similar to what you've already written (i.e. the DB access layer of a program); it's also good at writing base documentation to build upon.
For actual code it's: 5% amazing code, 10% serviceable, 20% a little bug in there that you won't notice until you've read it 10 times, 50% not really what I want, and 15% "go home, you're drunk".
My colleagues are pushing broken code, the pipelines fail, and they ask us for help. What the F
As a web developer I really only use Copilot for basic frontend shit, to quickly implement design ideas and build rough layouts. But 90% of the generated code never makes production.
Btw: when using feature branches, committing and pushing in small steps means modifying already-pushed code. This could lead to false-positive "churn", I guess.
That was/is one of my hopes for AI: that we could perhaps be less DRY (refactor everything that has more than 3 words in common into functions; "these lines in 5 different places almost do the same thing with slight variation, let's just make it a function with a 'couple' of arguments"...), and then maybe we'd be able to actually read through code without having to jump 20 function calls deep at some point, holding 27 template parameters and 8 actual arguments in your head while trying to figure out what an abstract/interface method call will actually call...
And they said "We only need you to add 1 new ATA command to this FireWire-attached SATA black box", without ever having seen the code... how hard could it be? :)
(Good luck debugging a 40 MHz microcontroller with 16 kB of RAM that has to translate SCSI commands to ATA at a 300 MB/s transfer rate, in case that was your idea of how to figure out what a virtual method call actually does. ;) And by the way, it took around 30 minutes to compile the code... each time, not just the first time, partially thanks to nearly every class and function being templated.)
Maybe a skill issue :), or a tooling issue, but I don't want to touch projects like that ever again.
My fav thing about copilot is not the code. I'm writing a game mod and sometimes it will just know IDs for mobs and NPCs, saving me writing out a long list where I have to look up each and every one. If someone made a copilot which only autocompleted constants from various games I would be the happiest man alive.
So, some time ago, I read a blog or something that said that when you download VS Code from the installer, you implicitly agree to share some data, while if you build it from the open-source repo, there is no such data gathering. Don't know if it was really true, but it seems to kinda make sense?
Yes, telemetry is the sole reason we have VSCodium as a "clean" alternative build. Even the name is a hint (an homage to Chrome vs. Chromium, which strips out Google's telemetry as much as possible).
You can disable extended telemetry in settings. But I can't remember whether it stops all telemetry or still sends context-free data.
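For what it's worth, I believe the relevant entry in settings.json is `telemetry.telemetryLevel` (accepting "all", "error", "crash", or "off"; older builds used the boolean `telemetry.enableTelemetry`), though extensions can still ship their own telemetry on top of this:

```json
{
  "telemetry.telemetryLevel": "off"
}
```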
Classic "if something is free (and not FOSS), then you're the product".
Honestly, I use it for “advanced rubber ducking” at work. Maybe it generates some worth while code or boilerplate but ultimately I fall back to reading vendor docs or peeking the class. Sometimes it’s good for quickly summarizing your code in a comment…sometimes!
Keep in mind, we are in the honeymoon period where the people using & overseeing the code that Copilot is generating, are usually competent & experienced devs themselves. They can catch the bugs & security problems because they recognize them.
What happens in 1 generation when we have vastly fewer senior devs who understand code in a deep way?
And what happens when Copilot trains on older copilot generated code?
Quality is going to degrade over time & the ability to debug & catch mess in review will be a fading skill. Yikes.
Then we should create an open-source AI that is only trained on high-quality code 😅😅😅
I didn't know Ninja made a guest appearance. I appreciate this. Thank you Ninja
I was trying to make a small thing to automate a config format conversion and used the Bing AI assistant, which is basically Copilot (it's even what they cite when you ask it programming questions). I asked a question about a Yaml class in a Java lib, and it was clear that it completely did not understand how that class worked.
When I first saw it in my second year of college, I thought "that's good, but only once I have enough experience to know what it's typing". On the other hand, a friend of mine was just installing any cool extension and using Copilot, and when I asked him what it was doing, he had no answer.
To me, AI assistance is like a drunk senior friend with a bit of a bad memory, but one who is very good at reading and finding references in documentation and at reading large sets of output for debugging.
I use GitHub Copilot, but not for code; only to write my Git commit messages. I hate that part.
The treesitter thing that Prime mentioned is basically how plagiarism-checker algorithms work, right?
Possible reasons you read 10x more code than writing:
- Bad modularity - no matter what you change, even the most distant and seemingly unrelated parts of a big project could break because of it
- You are using a game engine or some other huge proprietary piece of code. Some game companies even have teams to write the tools everyone else uses and sometimes they never pick back any bugfixes, so every team around the world keeps fixing the same bugs again and again.
- You've joined a very old product and you need to learn a lot about it before you can extend it. The older it is, the higher the chances the other devs have already left the company, or maybe even already died of natural causes. Also, the older it is, the more bugs you inherit.
- Even a newer project can be mismanaged to such an extent, that it's a mess and nothing can be added without needing to fix half the project first
No, the 55% faster claim can easily be inferred from typing speed and how much code you're letting Copilot write for you instead. That one's not hard at all. If they wanted to account for how much less R&D or how many fewer rewrites, that number would probably be much higher, actually.
a boilerplate only setting would be nice
Remember just a few months ago when devs without any understanding of AI thought AI would take their dev jobs and I was like the only person in the world that said the opposite because AI mostly produces trash because there's no ACTUAL intelligence in there?
I pay for Copilot; programmer of 29 years. I get fulfillment/satisfaction when it writes my boilerplate and I don't have to. That's all I use it for.
same
GitHub's CEO basically valued each line of code written at more than $13,500 per line to reach a valuation of $1.5 trillion being added to the economy by the use of Copilot. I'm willing to bet that the total value of the entire volume of code written by Copilot is worth far less than $110 million, a rough valuation of $1 for every line of code added, updated, deleted, copy/pasted, find/replaced, moved, and no-op. I further assume that most of this code had absolutely no commercial value, so the valuation is so far-fetched that no rational person could ever take it seriously.
Copilot is like Monsanto insisting you destroy your heirloom seeds and farm with their GMO. Duh.
0:19 yesn't: these sorts of papers should really have a DOI that would outlive whatever medium they were distributed through and its corresponding URL, whether that's a blog post or a PDF.
I'll spend days writing 1.5k net new lines and get a rubber-stamp approval in like 30 seconds... how many shops out there actually budget for devs to do quality reviews? Because the push for new features is constant, from what I've seen in 7 years as a dev.
As far as fulfillment goes, the thing that makes me want to stop and work on something else is consistently the most boring 20% of a project: either boilerplate or uninteresting glue that should've been a library function. Copilot does great with that (because I'm probably the 10,000th person to ask...), which is huge. It also does great if you need a minimal implementation of something unrelated to what you're working on just to make things compile and run; Copilot makes it cheap to slap some throwaway code there while you work, code you fully know you're going to blow away later.
Scrubs are using Copilot to produce bad code? I'm shocked, Prime! Shocked speechless! How could anyone possibly have anticipated such a thing?!
DRY should be a promotion process:
* start with DRY in the same file that uses the logic
* keep copy/pasting until there are at least 3+ use cases
* promote to DRY in a common file after enough files are using it
* keep copy/pasting between packages/modules/projects until at least 3+ distinct packages/modules/projects are using it
* promote to DRY in importable packages/modules/projects after enough packages/modules/projects are using it
* keep promoting up the hierarchy (e.g. organization repo)
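The first promotion step above could look like this hypothetical TypeScript sketch (the `slugify` name and logic are made up for illustration): the same string-mangling shows up repeatedly in one file, so it earns a local helper, and it would only graduate to a shared module once 3+ files actually need it.

```typescript
// Before promotion: copy/paste is fine at 1-2 call sites.
const postSlug = "My First Post".toLowerCase().trim().replace(/\s+/g, "-");
const tagSlug = "Hot Takes".toLowerCase().trim().replace(/\s+/g, "-");

// Third use case: promote the duplicated logic to a helper in the SAME file.
function slugify(text: string): string {
  return text.toLowerCase().trim().replace(/\s+/g, "-");
}

// Only move slugify() to a common/importable module once 3+ files need it.
const userSlug = slugify("Jane Doe");

console.log(postSlug, tagSlug, userSlug); // my-first-post hot-takes jane-doe
```

The point of waiting for 3+ uses is that two call sites often diverge later, and un-merging a premature abstraction costs more than the duplication did.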
I would like to add to the discussion that you might think Copilot code is good code when you're learning, and once you've learned wrong patterns, it might be hard to get rid of them. Therefore, for beginners, I would suggest that reading and seeing well-written code is really important.
It's like the paper where they tested how much code from bad or outdated tutorials/Stack Overflow articles ended up in projects. Those multiply with each new developer seeking information.
Is there any influence from employee churn at tech companies? I see news about swaths of people getting fired; couldn't that have a similar impact? Perhaps someone can explain whether this is a factor.
None of these tools write usable code. It’s making a huge gulf between entry level guys who can’t code much at all and working professionals.
I'm a Jr web dev. I use it for writing annoying things like CSS utility styles. But I generally turn it off, because in the time it takes to do just enough, then wait, then check if it's correct, then tab, I could've written it myself and not felt that weird limbo and the forced decision of checking its accuracy.
I wonder how many BSL or GPL repos were scanned to train this stuff...
19:50 best transition to ad with green background
“What do you do dog? Look at your cursor to blink until the GitHub Copilot results come in?” Lmfaooo
The CS graduates coming out of colleges in the coming years are going to be a disaster.
No, I don't use Copilot. It irritates the fuck out of me.
If ChatGPT's primary use is producing boilerplate code, then there is a problem with the programming paradigm. We should be striving for ZERO boilerplate code. Boilerplate code is a symptom of a flawed development platform. We are severely due for a massive revamp of how we write code. We keep banging the same two sticks together expecting better results. We need a game changing disruptor on the level of General Relativity disrupting Newtonian physics. I'm talking Star Trek Next Generation futuristic shit.
Well, when I was contracting for two years on one huge project, my whole task was to go through hundreds of bug reports and fix the ones that were actually bugs. I spent orders of magnitude more time searching around and reading the code to figure out the best way to fix a bug than actually writing the code for the fix. And that's after having run the code and observed the symptoms. Only after all that time was I trusted to actually build new features for that thing.
Tried using it for Unity, it's somehow worse with Quaternions than I am
is that gradient in ↘️ corner just to waste ink?
In year 9 I had my first introduction to programming, with my year 9 IT class building a site in Weebly and then a game in GameMaker... I hated it. My next real introduction was year 11 IT, which I completed in year 10: HTML, CSS, JavaScript (no advanced text editor allowed), and Visual Basic... Suffice it to say, at this point I thought programming was shit and went back to my calculus. But my third and final "intro" to programming was The Coding Train, and I can say with confidence that it is the reason I have a programming job today.
I hope the future generations also find passion from a youtuber or person they look up to and find joy in the creation, rather than being bogged down by the countless libraries and deployment pipelines that are required to even qualify for a job nowadays.
As a student, I only used notepad and paper, no IDEs.
It can be really helpful, but it screws anything up with short-sighted implementations once your project exceeds a certain size. It will start adding data structures and controls that aren't needed and will increasingly hallucinate based on the code you've already written.