Love this conversation! I've been learning CS for about the last year+, and honestly, this single episode has such solid content, realistic and viable. Thank you Laurie!!
Doesn't look off to me, he just intentionally started walking backwards with her after he passed her. No idea why, but it makes me think this whole video is somehow CGI 😂😂😂
Bro tries to autism-shame a girl who is far above his league in all categories that matter, and this fact triggers both his sensitive little heads...
Great video and nice walk. I'd like to thank you for making videos like this, without background music, despite what you say in the generative AI section of the video. ;]
People really underestimate how inferior of a language Rust is compared with C++, too. New code will continue being written in C++ for the foreseeable future.
Definitely think you're spot-on with Zig. Also, while I'm actually not a fan, I think seeing a AAA studio implement AI dialogue will likely happen this year as well, if not next. Not all topics here are exactly in my wheelhouse, being just a game dev, but it's really nice to hear predictions regarding parts of the industry I'm not entirely familiar with.
It's crazy while at work I remembered that Laurie hadn't posted a video in a while and was wondering when the next video would come out and here it is!
21:19 - a lot of information, a lot of topics, and a lot of memories you prompted by wandering around in places I used to wander around in... thank you for the trip down memory lane. :) And the predictions! I look forward to a video in a year where you reflect on how you did! (You'll be doing such a thing, right? That's my hopeful prediction, anyway. :) )
Raspberry Pi already released a hybrid ARM / RISC-V last year; the Pico 2 featuring an updated RP2040, the RP2350 with more everything, including ARM TrustZone powered secure boot, and, supposedly unless locked down in secure mode, a software selectable choice of booting RISC-V cores instead. There was a very fun talk at 38C3 about how well that particular choice turned out, and Raspberry Pi have a long blog about it too. So I guess my prediction is that any plans they might have for a main-line hybrid ARM / RISC-V board will be delayed while taking what they learned back to the drawing board.
I participate in ctfs often, and I pretty much always put my ghidra output into chatgpt to get an idea of what the code does. It does usually get specifics wrong (for example, it never understands string initializations), but it gives a much better idea than parsing through the local_XXX mess
So, I had no idea what risk 5 was :) Turns out I was also "spelling" it wrong so, if anyone else was wondering: RISC-V (pronounced "risk-five") is an open-source instruction set architecture (ISA) that is gaining traction in the Linux world. Unlike proprietary ISAs like x86 (Intel/AMD) and ARM, RISC-V is free and open, allowing anyone to develop processors based on it without licensing fees.
It totally made my day to discover that you're secretly a train nerd. If you do a guest show with Adam Something or the Well There's Your Problem team I think I would lose my mind.
8:51 I don't think your compression prediction (AV1 will be used by 2 major streaming platforms) will hold. The reason is hardware decoding. While there might be some AV1 hardware decoders on newer hardware, the vast majority of devices would need to use CPU decoding. I could maybe see that for low-quality video, but anything 720p and up is probably going to stay on h264 until all the old phones, laptops, office computers, etc. have been replaced in sufficient numbers. Wasn't there another licence-free video standard made for the web that compresses better than h264? What happened to that one?
@@leito1996 My GPU (GTX 1070) doesn't support either AV1 or VP9 AFAIK. Neither do most AMD GPUs. And as a Linux user, let me tell you software decoding is not the same. AFAIK YouTube still offers h264 for all videos for compatibility reasons. While this video seems to have actually been shown to me in VP9, others still show avc1 (h264).
@@Maxjoker98 Sure, on desktop with older cards it might not be supported, but that is niche. Take any mobile device or smart TV (even 5+ yrs old) and it will stream VP9 on YouTube, Netflix, and most other services. And it will stay that way until AV1 takes over. H264 will be dead in a few years, hopefully. So asking what happened to VP9 - it succeeded on the market :)
Counter prediction: C++ is not going away in any foreseeable future. Memory safety is not free and when you want to squeeze out the absolute most out of your hardware, you don't pay for what you absolutely don't need. C++ is evolving, new standards offering increasingly "modern" syntax and features. It will be fine.
I hate C++ but there are people that seem pretty confident using it, if it works for you... What we both share in common is the fact that there's always a need to squeeze every cpu cycle out of code so moving towards modern languages is not the rule.
@ There is no "unnecessary stuff". Just stuff you haven't learned / had to use yet. This is a low level language. In many cases higher level abstractions are enough, but if you are doing something that C++ is actually required for, you may even go as low as asm with C++.
My dream is a rusty syntax language with a GC. People are doing UIs with Rust, but it seems like you really need to be able to create pointer/ref cycles to make ergonomic APIs.
I think C and C++ will also continue to be relevant for programming new codebases for microcontrollers and embedded systems for a long time. Anything where the developer has total control over what processes will be running on the target system. Especially when there's just one process, and it's the one they're writing. So, developing dedicated devices like synthesizers and samplers, battery controllers, controllers in peripherals, etc.
Not going to front. I clicked for the pretty face. I’m human. But I did not expect to hear half an hour of highly intelligent talk about a variety of topics I’m interested in and familiar with (as a layman). This video ended up being one of the biggest surprises I’ve had in like 2 years. Fun to watch all the way through.
Thanks for the walk in thought and city. A good mind opener :) Need some time to reflect though. Will be interesting to look back on 😊. Have a lovely day
Nice work! I like your thoughts on LLMs and method naming. They'll need to get a lot better than where they are currently for sure, but that's why it's a prediction! Spot on with asymmetric crypto. The factorization problem is going to be much easier to solve.
That was really great! I think more forward engineering tools should be made. I saw a demonstration of Binary Ninja that was really impressive. That's getting to the kind of level we should have with ordinary IDEs: where you can do stuff like changing the name of a function and all its references. But to do that you need a tool that really knows the syntax of the language perfectly, and which also knows the build environment, so you can track which modules a function is defined in, all the dependencies, etc. Current IDEs like VSCode don't really know all this; they try to give the impression they do, but they don't really. The only bit of cognitive dissonance I had was when you talked about Broadcom. Raspberry Pi 5 and the new Pi Pico don't use Broadcom IP as far as I know. RPI have that expertise in-house now, and they're a correspondingly more scary company. I wouldn't want to work for them: they're a baby British version of Intel now.
Generative AI for NPCs has been toyed with by a few devs in smaller games. One of the major hurdles about this strategy is that LLMs are *really really bad* about encouraging players to do things the game can't do, or go on quests that are outside the scope of the game. As conversation partners they're great, but once you need it to interact with a mechanic, they all become the overzealous game company CEO promising features the devs never planned on. So while it's not out of the question, I think a lot of AAA devs are gonna be wary of inserting this technology into their games just yet. Not without some hefty guardrails.
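The guardrail idea above can be sketched in a few lines: whatever the model hallucinates, the game only ever accepts actions from a hardcoded whitelist, so an NPC can never promise a quest that doesn't exist. This is a minimal illustration with hypothetical names (`VALID_ACTIONS`, `constrain`), not any real engine's API.

```python
# Hypothetical guardrail layer between an LLM and the game logic: the model
# may propose anything, but only whitelisted actions ever reach the engine.
VALID_ACTIONS = {"give_rumor", "offer_trade", "point_to_map", "say_goodbye"}

def constrain(model_output: dict) -> dict:
    """Clamp a model-proposed NPC turn to actions the game can actually do."""
    if model_output.get("action") not in VALID_ACTIONS:
        # Downgrade an impossible promise to harmless flavor text.
        return {"action": "give_rumor", "text": model_output.get("text", "")}
    return model_output

# An out-of-scope quest offer gets downgraded:
risky = {"action": "start_dragon_quest", "text": "Slay the dragon for me!"}
safe = constrain(risky)
print(safe["action"])
```

A real implementation would likely constrain the model's output at decode time too (e.g. structured output), but the post-hoc filter is the simplest version of the "hefty guardrails" the comment describes.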
yeah computers are too scary for me, i just wanna crawl up under a rock and hibernate for like 20 years. or i could look for my next developer job, choices amiright
So glad I left CS and went into the welding trade back in 2020. I failed initially, worked a dead-end cashier job, then spent 3 years hopelessly applying to jobs before going back to school; 4 years later I graduated from a welding technical program on a full ride.
WOW! New here. What an informative walk you've shared here. Thank you! Something to consider: @ 12:04 - There will definitely be an avalanche of AI music for many types of applications, including for background music. However, it's a risky proposition to use AI-generated content for anything you wish to monetize, as AI music is derived from copyrighted music - any resemblance of AI-generated content to copyrighted, human-generated work will be problematic, and best avoided if you want peace of mind; otherwise, one day out of the blue, much of the treasured video catalog you worked so hard on could get copyright strikes. EDIT: Ok, @ 12:42, you implied that the AI used to generate the background music would itself be trained on copyright-free music. In that case, 100% agreed.
I think server-hosted LLMs for NPC dialogue generation in video games would only work for subscription-based titles, given the running costs of continuously generating new text. This potentially means more game-as-a-service type of titles and maybe an increase in subscription costs. Alternatively, running something like Phi-4 locally under the hood would not be too taxing on modern GPUs and avoids the issue of added costs for the developer/publisher that are then passed onto players.
I'd like some content about thoughts on programming languages and their evolution, like, Rust may add contracts for preconditions and some stuff, C++ has a proposal for adding lifetimes that we'll see how much dust it picks up, etc.
The game prediction is worrying. If you buy a single-player game and a few years/months/even weeks later the server for the AI shuts down, you can't play anymore. Then what did you pay for?
The SBOM discussion reminds me of the preparations for Y2K. Some companies were extremely thorough with their auditing of software and hardware. Not sure they would have been of their own volition, but I think it was part of their management certification standards or something. It felt needlessly bureaucratic and wasteful sometimes, but it worked. Which makes me think that at least in the Unix world (and by extension probably Unix-heritage languages, e.g. the C family) we might be wise to start documenting which software packages have been tested with times past 2038. I wouldn't be surprised if that starts to kick off in the next year or two, though I expect it to be much more low-key than Y2K. It would be nice to have a flag on every package to say the maintainer has either tested or verified with upstream that there isn't a 32-bit timestamp issue. Also, the walk-and-talk format really works for you.
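The 2038 boundary mentioned above is concrete: a signed 32-bit timestamp cannot represent any second after 03:14:07 UTC on 2038-01-19. A minimal sketch of where it breaks:

```python
import struct
from datetime import datetime, timezone

# The largest value a signed 32-bit time_t can hold: 2**31 - 1 seconds
# after the Unix epoch.
Y2038_BOUNDARY = 2**31 - 1

# The last representable moment: 2038-01-19 03:14:07 UTC.
last_moment = datetime.fromtimestamp(Y2038_BOUNDARY, timezone.utc)
print("32-bit time_t is fine until", last_moment)

# One second later no longer fits in a signed 32-bit field.
overflowed = False
try:
    struct.pack("<i", Y2038_BOUNDARY + 1)
except struct.error:
    overflowed = True
print("boundary + 1 overflows:", overflowed)
```

This is why the "has the maintainer verified there isn't a 32-bit timestamp issue" flag matters: the failure is silent in C code that stores `time_t` in an `int32_t`, unlike the loud exception Python raises here.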
12:18 I would not use AI-generated music or art for the purpose of avoiding copyright issues. It's not clear to me if that's how it'll work or if we're just in a gray area. This is without even mentioning the ethical aspects of AI-generated content.
SBOMs only get you part of the picture, but they're the first step. You also need to apply configurations, such as a Linux kernel .config, or take into consideration build flags.
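The point about configurations can be made concrete: a component listed in an SBOM only matters if the build actually compiled it in. A hedged sketch, where the mapping from component names to kernel config gates (`SBOM_GATES`) is entirely hypothetical:

```python
def parse_kconfig(text: str) -> dict:
    """Parse CONFIG_FOO=y / CONFIG_FOO=m lines from a kernel .config.
    Comment lines like '# CONFIG_BAR is not set' are skipped."""
    cfg = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("CONFIG_") and "=" in line:
            key, _, value = line.partition("=")
            cfg[key] = value
    return cfg

# Hypothetical link from SBOM components to the config gate that actually
# compiles them in; a component is only in the attack surface if its gate
# is built-in (=y) or a module (=m).
SBOM_GATES = {"netfilter": "CONFIG_NETFILTER", "kvm": "CONFIG_KVM"}

config = parse_kconfig("CONFIG_NETFILTER=y\n# CONFIG_KVM is not set\n")
active = [c for c, gate in SBOM_GATES.items() if config.get(gate) in ("y", "m")]
print(active)
```

With this .config, only `netfilter` would count as reachable; a vulnerability report against KVM could be triaged down, which is exactly the context a raw SBOM lacks.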
It’s cool that she has the knowledge she has, including her knowledge of Assembly Language. I feel it’s easier to trust predictions from developers who use harder skill sets that are in higher demand. They see something that we’re missing because they had to work under a different scope.
Very cool format. I think the prediction on generating dialog for NPCs by a major publisher is too early (I'll interpret this as "on a major project" as well, as many large publishers do fund some small experimental games occasionally). Development timelines for big games are insanely long, and platform support has to be broad to be profitable. That includes things like mobile hardware such as the Switch 2. Local models will likely still require too much compute. Online models might actually be too expensive, and latency will likely be too high to make sense. I do think this will actually be a great use of generative AI in the future, but my prediction is that it will not happen, in big projects, by a major publisher, for at least three more years. I also think there's about an 80% chance that it will be a local model. Using it for DRM is interesting, but I doubt the economics and latency work out. Adding 200ms RTT will make having sensible total latency hard.
Binary Ninja already has the function name suggestion as part of their add-on Sidekick plugin, along with summarization. (All the limitations you talk about for current LLM capabilities still apply.)
SBOM - yes, exactly what a lot of us in Incident Management need. Often we get ex-marketing staff giving us something crazy and unusable. Impossible to manage or solve anything. I will ask about SBOMs in the next open session meeting. Background music - Oh man, that's a huge topic. As a passive flautist... 1 day out of 10, I get it right and it's amazing and I find the rhythm and magic flies. 9 days out of 10 it sounds dull and repeating and lifeless. Other full-time musicians play existing music to overcome this, but doing something completely new is so hard. I mainly hang around the pentatonic space as my vibe. But I know of people locking themselves in the basement for a month; everyone thinks they're on holidays overseas, instead they're just trying to replicate magic. If they get 1 day out of 20, and complete an album... then happy days. The process is intense for humans, and I am uncertain if AI can do this better, probably not. There's a lot that goes into music: having an emotional attachment based on something your mind finds a valuable bias, then adding emotion to add more bias, then from that you add music that resonates with that bias. How do you do this crazy resonation of bias in AI? Maybe this is why it hasn't happened yet.
This is great, thanks Laurie. I like that individuals do this as I've seen predictions done on podcasts like @JupiterBroadcasting Coder Radio, Linux Unplugged, @LateNightLinux, and even @BadvoltageOrg, but yours is the vlog version of the podcast ones. Let's see how many you hit on the mark!
I think most of the predictions are spot on. But I think the last one might be wrong: your video might make some of the watchers build LLM API calls for code approximations. I'd also think there will be better binary analysis techniques with LLMs in the future. Thank you for the video Laurie, loved this format.
Laurie, have you seen the Polish trains controversy? Security researchers had to reverse engineer train firmware to find and bypass malicious DRM that was sabotaging the trains in Poland. It is a fascinating story.
It would be nice if your IDE had a function that would say, "Hey, you're using this library/framework. It has serious vulnerabilities. You should look into fixing that situation." That would only work when you open the project, though. A harder-to-build, and a little more manual, alternative would be something I could feed a list of libraries and versions into, which would keep an eye out and email/notify me if it found a vulnerability in one of those libraries. Maybe rather than feeding it the list, you could tell it to monitor a list of folders; that would be nicer.
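The "feed it a list of libraries and versions" idea maps fairly directly onto public vulnerability databases. A sketch that only builds the request body, using the payload shape of the OSV.dev batch query API (treat the endpoint and field names as an assumption to verify against their docs); actually POSTing it on a schedule and diffing results is left to a cron job:

```python
import json

# The watchlist the comment describes: name, ecosystem, pinned version.
watchlist = [
    {"name": "log4j-core", "ecosystem": "Maven", "version": "2.14.1"},
    {"name": "requests",   "ecosystem": "PyPI",  "version": "2.25.0"},
]

def osv_batch_query(entries: list[dict]) -> dict:
    """Build the request body for a POST to https://api.osv.dev/v1/querybatch."""
    return {"queries": [
        {"package": {"name": e["name"], "ecosystem": e["ecosystem"]},
         "version": e["version"]}
        for e in entries
    ]}

payload = json.dumps(osv_batch_query(watchlist), indent=2)
print(payload)
```

The "monitor a list of folders" version is the same loop with the watchlist derived from lockfiles (requirements.txt, pom.xml, etc.) found under those folders.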
walking with Laurie needs to be a weekly thing
Bi-weekly. Too much fresh air can mess with your mind! *puts on aluminum hat*
Most definitely.
taking Laurie for a walk*
I completely agree!
I agree
the realisation that people could be harvesting our encrypted data so it can be decrypted when quantum computers become available is terrifying
who cares. got nothing to hide anyway
This just broke my brain
So basically, if someone was sniffing my wifi while I was talking with my mom on WhatsApp, they might actually brute force the raw data in the future? Without having any information about the 4-way handshake or anything?
Yeah it's called harvest now, decrypt later
@@a21123 In theory yes
100% Odds that 2025 is the Year of the Linux Desktop. Trust me guys. It's going to happen. This year.
edit: what have I done
So young people in tech who don't have an established career yet would be pretty safe learning Linux? Assuming they want to, that is.
@@dovahking6514 😅 my sweet summer child...
If Gaben drops SteamOS that plays nicely with Nvidia out-of-the-box then you'll see a pretty big adoption I reckon.
@@Carhill If all you ever do is game, then sure. Haven't ever met a person like this.
@@upsilondiesbackwards7360 So you've never had roommates huh
05:18 the guy that walks by moves like an AI model 😀
ikr, why that happen
laurie isn't real she's ai generated
definitely a glitch in the matrix 😄
@@zapz because he saw her recording lol, bro saw an opportunity and took it
He was trolling.
The prediction I have for 2025 is Laurie will continue to bring us the best content
I like this format
A common technocrat viewer, if that's what you can call us, finds this future-prospect content very appealing. Sure, catch that train!!!
I have a completely different prediction on c++ memory safety. I think the chances of the compiler getting smarter to match go and rust is way more likely than c++ developers migrating to go and rust.
Unfortunately the ISO C++26 committee seems divided on memory safety and the proposals to address it.
It's not gonna happen. More likely C++ devs will migrate to Rust. Because C++ is so clumsy and messy, it's better to ditch it altogether and start from scratch.
C++ is insecure by design and no one has the desire and capacity to make it as secure as Rust. You have to discard everything, start from scratch, and build the very foundations of the language with security in mind. It can't be done with C++; impossible.
@@VapuR8 Is C/C++ clumsy and messy in all applications? If you're doing linear algebra, there's not really that much to go wrong in either language, and when the same subroutine is called one billion times, it does make a difference how efficiently you implement it. In those situations, while Rust is probably fine, I actually find C a lot more ergonomic, especially when factoring in AVX kernels (which are still in their early days in Rust in some ways). Shaders also get a bit messy in Rust, last I checked.
@@novantha1 Don't put C and C++ together like this: C/C++. C is not clumsy and messy; it is an elite foundation language. C++ is indeed clumsy and messy. But I hope for C to be replaced by Rust, because C is legacy and doesn't meet the modern, higher demand for memory security-by-design.
Do you think we might see a fully Rust-written Linux kernel in the near future?
Not true at all, C++ will get memory safety soon...
We went from LaurieWired to Laurie in the Wild
Bro missed the opportunity to say LaurieWireless 😔🫥
@@xboneyt485 Wireless networks are not as safe as wired networks.
Laurie Wireless man 😂
I'm SOOO happy this wasn't a video about the job market. Good video as usual.
I came for the CS, and swooned for the awkward train segue. You're a riot.
TRAIN! plus 10 for the video!
operating systems developer here, just discovered this channel with this video and I'm loving what you do! :D I loved the predictions.
ah, i miss Seattle so much. Thanks for taking a walk around Laurie.
Assembly was key to the DeepSeek R1 optimisations, thank you for continuing to spread your love of code.
It's super pleasant to listen to you talk. You form nice, understandable, and clear sentences about complicated topics - in real time, no script. That's pretty rare.
Someone needs to make a pure RL training pipeline for an 8B LLM that takes C code, compiles & decompiles it, then has the LLM predict the original code from the decompilation pseudocode, with descriptive symbols. The LLM-generated code could then be compiled, and the intermediate representation compared to the original for symbolic equivalence.
Yes please
Use thinking-time-based translation with tool calls, and it's not 8B but 800B and costs 10 million.
sounds good and straightforward to do. i wonder how good it can get
why decompile it before predicting? Just use the machine code for the prediction.
@@ELYESSS We'd blow up the context window too quickly with pure ASM for it to be useful. Large contexts such as 1M tokens require too much VRAM at the moment to be practical.
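The compile/decompile round trip this thread proposes can be sketched end to end. A minimal, hedged version: it assumes `gcc` and `objdump` are on PATH, and "equivalence" here is just a normalized instruction-stream comparison at -O0, which is far weaker than the true symbolic equivalence an RL reward would need.

```python
import re
import shutil
import subprocess
import tempfile
from pathlib import Path

def normalize_disasm(disasm: str) -> list[str]:
    """Strip addresses and raw bytes from objdump output so two builds of
    semantically identical code can be compared line by line."""
    lines = []
    for line in disasm.splitlines():
        # Instruction lines look like "  401000: 55  push %rbp".
        m = re.match(r"\s*[0-9a-f]+:\s+(?:[0-9a-f]{2}\s)+\s*(.*)", line)
        if m and m.group(1):
            lines.append(m.group(1).strip())
    return lines

def compile_and_disasm(c_source: str) -> list[str]:
    """Compile C source with gcc -O0 and return its normalized disassembly."""
    with tempfile.TemporaryDirectory() as d:
        src, obj = Path(d, "f.c"), Path(d, "f.o")
        src.write_text(c_source)
        subprocess.run(["gcc", "-c", "-O0", "-o", obj, src], check=True)
        out = subprocess.run(["objdump", "-d", obj],
                             capture_output=True, text=True, check=True)
        return normalize_disasm(out.stdout)

# Reward signal for the hypothetical RL loop: the model's predicted source
# counts as a match if it compiles down to the same instruction stream.
if shutil.which("gcc") and shutil.which("objdump"):
    original  = "int add(int a, int b) { return a + b; }"
    predicted = "int add(int x, int y) { return x + y; }"  # symbols renamed only
    reward = compile_and_disasm(original) == compile_and_disasm(predicted)
    print("equivalent:", reward)
```

A real pipeline would compare compiler IR (or use a solver) instead of text-diffing disassembly, and would decompile with a tool like Ghidra headless to produce the pseudocode the model trains on; this sketch only shows the round-trip skeleton.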
20:34 Definitely agree with you here. In my free time I use Ghidra to reverse-engineer games in order to create mods. While there are a lot of times where accuracy is key to understanding what is going on, most of the time spent in Ghidra is wasted figuring out what the hundreds of functions I am sifting through, in search of certain functionality, even do, roughly. A more legible approximation in these situations would be highly desirable.
Cant fathom the level of autism required
What Mods
It's a double-edged sword. LLMs got me tricked a few times. The excuse could be that they lacked context (I can't give one the whole project, so some right solutions just didn't work). But sometimes it was clearly wrong, making errors in quite fundamental stuff (overall and language-specific).
In conclusion, some might not be ready to have a hallucinating AI even remotely close to their product, because of its non-deterministic nature and all possible drawbacks.
Same, or prodding data to see what format it's actually stored in. It can help spot patterns sometimes.
Since you literally asked, here's my unqualified and poorly thought out thoughts:
- I'm too far removed from the embedded space to say anything useful about RISC-V itself. My impression from a distance is that most RISC-V fabs are Chinese. The fact that China has an offensive cybersecurity strategy aimed at 'the West' could make key industries in the USA and EU hesitant to adopt RISC-V in the near future, until fabs in more 'Western'-aligned countries pick it up.
- My expectation is that, at least until 2030, neither Shor's algorithm nor Grover's algorithm will be successfully applied for practical attacks on cryptographic primitives that, in early 2025, are commonly considered "good" for use in any facet of (D)TLS. Meanwhile, since NIST has already selected some PQC primitives and work is underway to build them into TLS libraries, they will be widely in use by 2030, especially in the industries that pay so much lip service to its importance but don't contribute in any way to the open source projects that do the actual work of implementing them.
- I don't expect large open source projects, such as the Linux kernel, or Chromium or Firefox, to make concerted efforts to un-C or un-C++ their codebase. I do expect memory safe programming languages to outpace C and C++ in popularity, to the point that it becomes unsustainably difficult for companies that have large proprietary codebases in memory unsafe languages to find workers to maintain, let alone develop, their product. Those companies will not get rid of their legacy code, as it won't necessarily kill those companies, but it just ends up making things more expensive, which will ultimately be paid for by end users and taxpayers.
- I'd be a bit sad if this would be the fifteen minutes of fame for Rust :') But maybe that gives it some focus time to mature.
- I don't know enough about the state of compression to say anything useful about it.
- I'm entirely too salty about LLMs to say anything useful about it.
- I expect that SBOMs will prove to be unhelpful in the next three Log4j-sized vulnerability crises, due to a combination of low-quality identification of dependencies and the difficulty of including the use of SBOMs in operational processes. If it turns out this way, the infosec community will ridicule SBOMs to death rather than trying to fix them.
- I have seen papers, like a decade ago, where the researchers used machine learning to do various interesting things with reconstructing code, but more in the Copilot-ish autocomplete sense. The machine learning was a lot simpler than LLMs (it has been a while since I've gotten into the details, but it used conditional random fields, I believe?). My hope (not prediction) is that the LLM hype will implode and people will rediscover simpler models that are easier to explain and build guardrails around, which would then also make it easier to fit them into a working process where precision is key (such as reverse engineering).
She asked for your engagement to boost the algo. You could've shared anything. A copypasta, your favorite cheesecake recipe, etc. It doesn't matter
Supply chain attacks are also a topic in LLMs, e.g. when someone tries to manipulate the training data that is going to be used to train your AI models. I did my master's thesis on reverse engineering binaries using transformers. It's still going to be challenging for obfuscated programs. I see great potential here too. If there's interest, I might upload my presentation soon; it's like 2 hours.
will you notify here once you upload please?
Ok we are interested to see where data security is going
Please notify us on upload
Omg i just found my favorite side of youtube, please, please keep this work going!!
Every topic, im surprised other people talk like this
6:08 - It was driving me nuts trying to figure out what city you were walking through because I could've sworn it looked like Arlington County, VA, but the reflection in that window cleared it up pretty well.
Is that the Seattle space needle? Because the area was really giving me Vancouver vibes, but nothing was recognizable
@@mr.hi_vevo414 Yeah it's Seattle. I think she started out the video in the Belltown area if I was to guess.
Come to Europe, we have trains. Lots of trains. Fast trains!
They are always late
@@Mastercar77 That's OK, if you miss one, the next one is in 4 minutes. 😛
(Yeah, OK, depends on the trains we're talking about, but... I distinctly remember a moment of realizing that I'd _just_ missed (like, saw it leave) an U-Bahn in Berlin, but the next train was due (and arrived) in 4 minutes. Granted, only at certain times of day, but...)
Fast trains that are always late or old trains that are always even more late. I appreciate TGV but a majority of European trains are overrated.
Trains and robbers are mostly all we got.
@Mastercar77 Maybe, but that's less of an issue when you have one every 10 minutes rather than one every 10 days like in the US!
The name Sam Altman turns me off wanting to read the post you linked
You don't have to like him to learn from his expertise.
@@deusexaethera he doesn’t really have any.
The term "legacy PHP code" hurt my soul. I was writing code before PHP existed.
Love this conversation! Learning CS over about the last year+, and honestly, this single episode has such solid content, realistic and viable. Thank you Laurie!!
I love this walking talking format, I love walking through city scape and thinking, nice to take a walk with Laurie
I like seeing other cities.
thanks for sharing your predictions!
5:17 btw why does this man's movement look like an AI-generated vid haha
it looks intentional lol
doesn't look off to me, he just intentionally started walking backwards with her after he passed her, no idea why but it makes me think this whole video is somehow CGI 😂😂😂
yeah, but the person he was with looked at him like a flesh-and-blood example of "what the fuck are you doing?", so I think it's real. :D
He heard an attractive lady say "C++" and it short circuited his brain for a sec
Pretty sure he saw someone filming and thought "wouldn't it be funny to do some Ministry of silly walks and mess with them when they go through it"
Strong Luke Smith vibes today. Also, very unsurprising that ARM assembly girl is also a train girl.
Luke Smith... I have not heard that name in a loooooong time
Bro tries to autism-shame a girl who is far above his league in all categories that matter, and this fact triggers both his sensitive little heads...
My favorite part was the train as well.
It was very pleasant to take a walk with you, though it was a little disconcerting walking backwards. :) Good predictions.
Great video and nice walk. I'd like to thank you for making videos like this, without background music, despite what you say in the generative AI section of the video. ;]
People really underestimate how inferior of a language Rust is compared with C++, too. New code will continue being written in C++ for the foreseeable future.
Definitely think you're spot-on with Zig. Also, while I'm actually not a fan, I think seeing a AAA studio implement AI dialogue will likely happen this year as well, if not next. Not all topics here are exactly in my wheelhouse, being just a game dev, but it's really nice to hear predictions regarding parts of the industry I'm not entirely familiar with.
Thanks for the advice
Have a great year
Better AI in video games has some crazy potential. Smarter hostiles, never playing the same scenario twice, changing dialog results...
how'd i not know about this channel...this looks like Seattle, my old town, so now i'm already hooked. 😭
ok it totally is. man.
ah and then we get the space needle! along with what looked like 4 different hairstyles, haha love it
It's crazy while at work I remembered that Laurie hadn't posted a video in a while and was wondering when the next video would come out and here it is!
The guy at 5:18 threw me way off. Laurie is in the matrix.
kind of solid predictions or predictive analysis, great video too for a change outta the studio. cool
Framework is releasing a RISC-V dev board for their laptop. Would love to get one in my laptop in the next iteration.
21:19 - a lot of information, a lot of topics, and a lot of memories you prompted by wandering around in places I used to wander around in... thank you for the trip down memory lane. :) And the predictions! I look forward to a video in a year where you reflect on how you did! (You'll be doing such a thing, right? That's my hopeful prediction, anyway. :) )
I have no idea what she is talking about, but I really appreciate her personality
Beautiful Seattle background while listening to awesome CS predictions
Very fun video to watch. I also really loved the tour of the city, I haven't been there in years and I'm debating heading back there
such a good, compact and succinct articulation. love it
Excellent and thought provoking video. You just earned a new sub.
Raspberry Pi already released a hybrid ARM / RISC-V chip last year: the Pico 2, featuring the RP2350, an updated RP2040 with more of everything, including ARM TrustZone-powered secure boot and, supposedly unless locked down in secure mode, a software-selectable choice of booting RISC-V cores instead. There was a very fun talk at 38C3 about how well that particular choice turned out, and Raspberry Pi has a long blog post about it too.
So I guess my prediction is that any plans they might have for a main-line hybrid ARM / RISC-V board will be delayed while taking what they learned back to the drawing board.
I participate in CTFs often, and I pretty much always put my Ghidra output into ChatGPT to get an idea of what the code does. It usually gets specifics wrong (for example, it never understands string initializations), but it gives a much better idea than parsing through the local_XXX mess
So, I had no idea what risk 5 was :) Turns out I was also "spelling" it wrong so, if anyone else was wondering:
RISC-V (pronounced "risk-five") is an open-source instruction set architecture (ISA) that is gaining traction in the Linux world. Unlike proprietary ISAs like x86 (Intel/AMD) and ARM, RISC-V is free and open, allowing anyone to develop processors based on it without licensing fees.
Love this video! Nice to see Seattle too! Love all the topics discussed.
It totally made my day to discover that you're secretly a train nerd. If you do a guest show with Adam Something or the Well There's Your Problem team I think I would lose my mind.
Really enjoyed the video, please bring more of it
What a nice voice 🥰
But ye, those are actually pretty well thought out points.
8:51 I don't think your compression prediction (AV1 will be used by 2 major streaming platforms) will hold. The reason is hardware decoding. While there might be some AV1 hardware decoders on newer hardware, the vast majority of devices would need to use CPU decoding. I could maybe see that for low-quality video, but anything 720p and up is probably going to stay on h264 until all the old phones, laptops, office computers, etc. have been replaced in sufficient numbers. Wasn't there another licence-free video standard made for the web that compresses better than h264? What happened to that one?
VP9 and you are using it right now to watch this video LOL
@@leito1996 My GPU (GTX 1070) doesn't support either AV1 or VP9 decoding, AFAIK. Neither do most AMD GPUs. And as a Linux user, let me tell you software decoding is not the same. AFAIK YouTube still offers h264 for all videos for compatibility reasons. While this video does seem to have been shown to me in VP9, others still show avc1 (h264).
@@Maxjoker98 Sure, on desktop with older cards it might not be supported, but that is niche. Take any mobile device or smart TV (even 5+ years old) and it will stream VP9 on YouTube, Netflix, and most other services. And it will stay that way until AV1 takes over. H264 will be dead in a few years, hopefully. So asking what happened to VP9 - it succeeded on the market :)
@@Maxjoker98 YouTube only uses AVC if VP9 downloading fails
Counter prediction: C++ is not going away in any foreseeable future. Memory safety is not free and when you want to squeeze out the absolute most out of your hardware, you don't pay for what you absolutely don't need. C++ is evolving, new standards offering increasingly "modern" syntax and features. It will be fine.
I hate C++, but there are people who seem pretty confident using it, so if it works for you... What we both have in common is that there's always a need to squeeze every CPU cycle out of code, so moving towards modern languages is not the rule.
Yeah, matter of taste
It's too bloated. They need to cut the language down by half. Remove all the old unnecessary stuff. Make it simpler and more straightforward.
@@vectoralphaSec what about C?
@ There is no "unnecessary stuff". Just stuff you haven't learned / had to use yet. This is a low level language. In many cases higher level abstractions are enough, but if you are doing something that C++ is actually required for, you may even go as low as asm with C++.
My dream is a language with Rust-like syntax and a GC. People are doing UIs with Rust, but it seems like you really need to be able to create pointer/ref cycles to make ergonomic APIs.
I think C and C++ will also continue to be relevant for programming new codebases for microcontrollers and embedded systems for a long time. Anything where the developer has total control over what processes will be running on the target system. Especially when there's just one process, and it's the one they're writing.
So, developing dedicated devices like synthesizers and samplers, battery controllers, controllers in peripherals, etc.
This is a great video. I appreciate the honesty and insight.
Not going to front. I clicked for the pretty face. I’m human. But I did not expect to hear half an hour of highly intelligent talk about a variety of topics I’m interested in and familiar with (as a layman).
This video ended up being one of the biggest surprises I’ve had in like 2 years. Fun to watch all the way through.
07:20 train nerd, I love it. Walking with Laurie is my new favorite format
If you count Alpine as a major Linux distribution, then it already officially supports RISC-V as of the latest release. Only 2 more distros to go :)
Thanks for the walk in thought and city. A good mind-opener :) Need some time to reflect, though. This will be interesting to look back on 😊. Have a lovely day
Was that guy eavesdropping a little at 5:18?
lol yeah that was weird 🤣
He's a giga chad
he heard 'C++' and was satisfied :D
Nice work! I like your thoughts on LLMs and method naming. They'll need to get a lot better than they currently are, for sure, but that's why it's a prediction! Spot on with asymmetric crypto. The factorization problem is going to be much easier to solve.
thanks for your thoughts on the future - here's hoping things keep getting better! 🍪☕
You're a real inspiration Laurie, I hope you know that! 😊
Cool as usual.
That was really great! I think more forward engineering tools should be made. I saw a demonstration of Binary Ninja that was really impressive. That's getting to the kind of level we should have with ordinary IDEs: where you can do things like change the name of a function and all its references. But to do that you need a tool that really knows the syntax of the language perfectly and also knows the build environment, so you can track which modules a function is defined in, all the dependencies, etc. Current IDEs like VSCode don't really know all this; they try to give the impression they do, but they don't. The only bit of cognitive dissonance I had was when you talked about Broadcom. The Raspberry Pi 5 and the new Pi Pico don't use Broadcom IP as far as I know. RPi have that expertise in-house now, and they're a correspondingly scarier company. I wouldn't want to work for them: they're a baby British version of Intel now.
Laurie has an amazing grasp of the details of computer science today and into the future.
Can't wait for end of the year to see the actual outcomes and compare with the predictions.
Generative AI for NPCs has been toyed with by a few devs in smaller games. One of the major hurdles about this strategy is that LLMs are *really really bad* about encouraging players to do things the game can't do, or go on quests that are outside the scope of the game. As conversation partners they're great, but once you need it to interact with a mechanic, they all become the overzealous game company CEO promising features the devs never planned on. So while it's not out of the question, I think a lot of AAA devs are gonna be wary of inserting this technology into their games just yet. Not without some hefty guardrails.
Except maybe Ubisoft. I could see Ubisoft just cramming some terrible version of GPT into one of their games and it going over like a lead balloon.
I don’t know, I don’t care, I hate computers I wanna run away naked in the forest
yeah computers are too scary for me, i just wanna crawl up under a rock and hibernate for like 20 years. or i could look for my next developer job, choices amiright
Good news, you can go do that
@orterves Where at without getting arrested?
@@VoiceDisasterNz oh I didn't say you wouldn't be arrested
Where has your channel been all my life?! Slay, girly.
this is cool, you should do this more often. 👍😁
Strolling in Seattle? Great content!
Can this be a more regular "thing"? Loving the relaxed dialogue while seeing the sights and sounds (or wind) of Seattle 👍🏻
So glad I left CS and went into the welding trade back in 2020. I failed initially, worked a dead-end cashier job, spent three years hopelessly applying to jobs, then went back to school and graduated four years later from a welding technical program on a full ride.
WOW! New here. What an informative walk you've shared here. Thank you! Something to consider: @ 12:04 - There will definitely be an avalanche of AI music for many types of applications, including background music. However, it's a risky proposition to use AI-generated content for anything you wish to monetize, as AI music is derived from copyrighted music - any resemblance of AI-generated content to copyrighted human-made work will be problematic, and best avoided if you want peace of mind that the treasured video catalog you worked so hard on won't get copyright strikes one day out of the blue. EDIT: Ok, @ 12:42, you implied that the AI used to generate the background music would itself be trained on copyright-free music. In that case, 100% agreed.
Very nice of you to take us for a walk
👍 train
I think server-hosted LLMs for NPC dialogue generation in video games would only work for subscription-based titles, given the running costs of continuously generating new text. This potentially means more game-as-a-service type of titles and maybe an increase in subscription costs. Alternatively, running something like Phi-4 locally under the hood would not be too taxing on modern GPUs and avoids the issue of added costs for the developer/publisher that are then passed onto players.
I'd like some content about thoughts on programming languages and their evolution, like, Rust may add contracts for preconditions and some stuff, C++ has a proposal for adding lifetimes that we'll see how much dust it picks up, etc.
0:30 oh he's VERY good at having that bias, he learned it from the best it seems. makes total sense !
The game prediction is worrying. If you buy a single-player game and a few years/months/even weeks later the server for the AI shuts down, you can't play anymore. Then what did you pay for?
Love your informative video. Thanks a lot.
07:15 we all react like this when seeing a train lol, happy to see it, the child inside isn't dead
The SBOM discussion reminds me of the preparations for Y2K. Some companies were extremely thorough with their auditing of software and hardware. Not sure they would have been of their own volition, but I think it was part of their management certification standards or something. It felt needlessly bureaucratic and wasteful sometimes, but it worked. Which makes me think that, at least in the Unix world (and by extension probably Unix-heritage languages, e.g. the C family), we might be wise to start documenting which software packages have been tested with times past 2038. I wouldn't be surprised if that starts to kick off in the next year or two, though I expect it to be much more low-key than Y2K. It would be nice to have a flag on every package saying the maintainer has either tested it or verified with upstream that there isn't a 32-bit timestamp issue.
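For anyone wondering where the 2038 cutoff mentioned above actually lands, it's easy to compute from the epoch arithmetic alone (this snippet isn't tied to any particular package or platform, it just shows where a signed 32-bit time_t overflows):

```python
import datetime

# A signed 32-bit time_t counts seconds from the Unix epoch
# (1970-01-01 00:00:00 UTC) and overflows after 2**31 - 1 seconds.
epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
rollover = epoch + datetime.timedelta(seconds=2**31 - 1)
print(rollover)  # 2038-01-19 03:14:07+00:00
```

One second past that instant, a 32-bit timestamp wraps to a large negative value, which is exactly the failure mode a "tested past 2038" flag would be certifying against.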
Also the walk and talk format really works for you.
12:18 I would not use AI-generated music or art for the purpose of avoiding copyright issues. It's not clear to me whether that's how it'll work or whether we're just in a gray area. And that's without even mentioning the ethical aspects of AI-generated content.
SBOMs only get you part of the picture, but they're the first step. You also need to apply configurations, such as a Linux kernel .config, or take build flags into consideration.
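One hedged sketch of what carrying that build-time context along might look like: CycloneDX components allow free-form `properties` name/value pairs, so configuration and flags could ride along with each entry. The property names below are illustrative, not a standardized taxonomy:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {
      "type": "library",
      "name": "openssl",
      "version": "3.0.13",
      "properties": [
        { "name": "build:cflags", "value": "-O2 -DOPENSSL_NO_SSL3" },
        { "name": "build:profile", "value": "no-legacy-protocols" }
      ]
    }
  ]
}
```

Whether a scanner would actually use those properties to rule vulnerabilities in or out is exactly the operational-process gap the comment points at.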
It’s cool that she has the knowledge she has including her fore knowledge of Assembly Language. I feel it’s easier to trust predictions from developers who use harder skill sets that are in higher demand.
They see something that we’re missing because they had to work under a different scope.
Love your videos. Wish I could walk with you through Seattle ☺️
Inb4 Laurie is the next Ray Kurzweil
Very cool format. I think the prediction on generating dialog for NPCs by a major publisher is too early (I'll interpret this as "on a major project" as well, as many large publishers do fund some small experimental games occasionally).
Development timelines for big games are insanely long, and platform support has to be broad to be profitable. That includes things like mobile hardware such as the Switch 2.
Local models will likely still require too much compute. Online models might actually be too expensive, and latency will likely be too high to make sense.
I do think this will actually be a great use of generative AI in the future, but my prediction is that it will not happen, in big projects, by a major publisher, for at least three more years.
I also think there's about an 80% chance that it will be a local model. Using it for DRM is interesting, but I doubt the economics and latency work out. Adding 200 ms of RTT will make it hard to keep total latency sensible.
To me, most of these feel more like Software Engineering Predictions than CS Predictions, since most of this isn't a very foundational shift.
Colloquial use of "CS". Also, does her channel focus on CS or SE more?
Eh. Software engineer is just applied computer science.
This was amazing, thank you.
Binary Ninja already has the function name suggestion as part of their add-on Sidekick plugin, along with summarization. (All the limitations you talk about for current LLM capabilities still apply.)
SBOM - yes, exactly what a lot of us in Incident Management need. Often we get ex-marketing staff giving us something crazy and unusable - impossible to manage or solve anything with. I will ask about SBOMs in the next open session meeting.
Background music - Oh man, that's a huge topic. As a passive flautist... 1 day out of 10 I get it right, it's amazing, I find the rhythm, and the magic flies. 9 days out of 10 it sounds dull, repetitive, and lifeless.
Other full-time musicians play existing music to overcome this, but doing something completely new is so hard. I mainly hang around the pentatonic space as my vibe. But I know of people locking themselves in the basement for a month - everyone thinks they're on holiday overseas, when instead they're just trying to replicate magic. If they get 1 day out of 20 and complete an album... then happy days. The process is intense for humans; I'm uncertain whether AI can do this better, probably not.
There's a lot that goes into music: an emotional attachment forms around something your mind finds valuable, emotion adds more bias, and then you write music that resonates with that bias.
How do you do this crazy resonation of bias in AI?
Maybe this is why it hasn't happened yet.
This is great, thanks Laurie. I like that individuals do this. I've seen predictions done on podcasts like @JupiterBroadcasting Coder Radio, Linux Unplugged, @LateNightLinux, and even @BadvoltageOrg, but yours is the vlog version of the podcast ones. Let's see how many you hit on the mark!
I think most of the predictions are spot on. But I think the last one might be wrong: your video might inspire some viewers to build LLM API calls for code approximation. I also think there will be better binary analysis techniques with LLMs in the future. Thank you for the video Laurie, loved this format.
Laurie, have you seen the Polish trains controversy? Security researchers had to reverse engineer train firmware to find and bypass malicious DRM that was sabotaging the trains in Poland. It is a fascinating story.
Love this walking video. No idea where or what time this is, but the streets seem to be almost completely empty of people.
It would be nice if your IDE had a function that would say, "Hey, you're using this library/framework. It has serious vulnerabilities. You should look into fixing that." That would only work when you open the project, though. A harder to build, and somewhat more manual, option would be something I could feed a list of libraries and versions into, which would keep an eye out and email/notify me if it found a vulnerability in one of those libraries. Even better, rather than feeding it the list, you could tell it to monitor a set of folders.
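The "feed it a list of libraries and versions" idea above is roughly what the OSV.dev vulnerability database exposes. A minimal sketch, assuming you already have name/version pairs; the helper names here are made up, but the `api.osv.dev/v1/query` endpoint, its payload shape, and the `vulns` response field are OSV's public API:

```python
import json
import urllib.request

def build_osv_query(name: str, version: str, ecosystem: str = "PyPI") -> dict:
    # Payload shape expected by OSV's POST /v1/query endpoint.
    return {"package": {"name": name, "ecosystem": ecosystem}, "version": version}

def check_dependency(name: str, version: str, ecosystem: str = "PyPI") -> list:
    # Returns the list of known vulnerabilities (empty if none are recorded).
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(build_osv_query(name, version, ecosystem)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])
```

The "monitor a set of folders" version is then just a matter of parsing each project's lockfile into those name/version pairs and running the same query on a schedule.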