Nah. The NSA prefers its math-nerd probabilistic cracks. Nobody, but NOBODY, at the NSA is interested in or capable of doing field work. If this fault is only locally accessible, you can count the NSA out of the running to take advantage of it.
@@no_name4796 Nah, Apple would try to spin it like it's a good thing: "In accordance with the fact that we truly care about our customers, we've generously decided to offer separate replacement CPUs for the price of a whole new PC, just for this occasion. (official replacement procedure costs 7 grand extra)"
Computer architecture is so much more complicated than most people realize, even many programmers. I remember in college learning about concepts like superscalar pipelining and microcode for the first time. It still feels like there was always something new and complex to learn about computers.
I was so proud of building my first CPU and ISA in an EE class I was taking in the 90s. Then I learned about superscalar pipelining and microcode, and realized I knew nothing :P. My little ISA and CPU are an Arduino at best :P Still though, I love this job. It helps to understand how things work under the hood when writing code.
@@delarosomccay Yes, it helps to understand how things work under the hood to some extent, but modern CPUs are mindblowingly complex. For programming it helps to grasp the bigger picture of it, but the details are honestly way above my skill and paygrade to understand.
@@jbird4478 Back in the '80s, I was able to understand in detail how every part of my computer worked. In the '90s the details started to become challenging. Anything made this century I have no chance with; it's hard enough just at a conceptual level.
@@phillippereira6468 apple patched it pretty quickly with the m2, right? Seems reasonable that it was discovered and addressed in private in the interest of national security
This reminds me of that one time on the oldest anarchy server in Minecraft when some nerds found out you could punch a block anywhere in the world to 1) see if that chunk is loaded, and 2) see what type of block it is. Well turns out by comparing what chunks are loaded and when against when players log in and out, you're able to figure out which group of chunks is from what player, and track everybody on the server in real time. Then through a long series of punches in those areas, you're able to reconstruct an entire base block for block. Getting all the memory of a process by listening closely to see how long each operation takes reminded me a lot of that.
"...constant time programming..." Decades ago, there was a game (something with ghosts in a graveyard?) for the Z-80 based TRS-80. The game was designed so that an AM radio, placed next to the computer, would pick up RFI in the form of the game's musical soundtrack. Yes, the programmer(s) embedded music into the RFI through intentionally non-constant time programming.
I used the TRS-80 when I was 13. My middle school bought a few of them and would let students sign up to take them home over the weekend. The ROM had a BASIC interpreter and supported a cassette tape player for mass storage. I never tried to do assembly code for the thing, but I understood that the games I played were written in assembly/machine code.
Yep. I think he might have confused 'wreak' with the figurative use of 'reek' (ie. ”This reeks of [a bad thing that might not smell in the literal sense]”).
Most CPU stuff isn't really that hard. Really, the only hard thing about CPUs is the fact that they're all made to use x86 these days, and x86 specifically is RIDICULOUSLY overcomplicated
@@zackbuildit88 Maybe for something like RISC-V, but for any mainstream ISA, including ARM, it's just absurd. In Jim Keller's own words, the ARM instruction set is just unfathomably complex.
Knowing stuff is cool, but being able to explain it to others is the real talent. You managed that on a very complicated topic. Very impressive. Very well done.
But if he didn't understand what was talked about either, it's just nonsense. Perhaps it's more "I know, but I'll let you read it yourself, because if I explain it, it will go over everyone's head"; ...or it could just be that he has no idea :)
Someone at my university was working on a side channel attack that would measure the fluctuations on a power rail of the processor and use that to eliminate possible candidates for cryptographic keys. Wild stuff.
Power based side channel attacks are really "common", it's part of why secure programs sometimes use branchless programming so you can't correlate power draw and process state as easily.
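For concreteness, here's a minimal sketch in C of the branchless style mentioned above (the function name is hypothetical): instead of a data-dependent branch, both candidate values are computed and combined through a mask, so the instruction stream, and roughly the power profile, is the same either way.

```c
#include <stdint.h>

/* Hypothetical sketch of branchless, constant-time selection.
   `cond ? a : b` may compile to a branch whose direction can leak
   through timing or power draw; here we build an all-ones or
   all-zeros mask from the condition and merge both inputs. */
uint32_t ct_select(uint32_t cond, uint32_t a, uint32_t b) {
    uint32_t mask = (uint32_t)(-(int32_t)(cond != 0)); /* 0xFFFFFFFF or 0 */
    return (a & mask) | (b & ~mask);
}
```

Note that a compiler may still transform this, so real constant-time libraries audit the generated assembly rather than trusting the C source alone.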
I feel like the description given of constant time programming is missing something. If memory accesses are forced to take exactly the same amount of time no matter what, then surely the cache would be removed entirely, since even if something was in the cache, the CPU would have to wait as long as it WOULD have taken to get it from anyway to ensure the operation is constant time, no?
Description was incorrect. I asked my son the same question. It is _much_ more complicated... It involves the malicious code predicting cache locations used, filling those with its own values, then seeing if they were overwritten by using the access timing, and probably much more that he left out or I could not absorb...
@ericelfner That describes the problem that constant time programming would solve: if cache hits and misses take the same amount of time, then you can't get any information out of dumping part of the cache and then timing other processes to see if they access it. My question is, if constant time programming makes them take the same amount of time, why have a cache at all? My guess is that there's probably some specialized set of constant time operations that are very slow but can be used with extremely sensitive data, in addition to the normal variable time operations, but that wasn't explained in the video, which I think it should have been
@@ericelfner yes!! It also bleeds into any iPhone nearby. It's a big problem, as it destroys your logic board by writing over the SSD so much. I have a 2021 iMac M1 that has had 3 logic board replacements (they paid $700/board). Also, it's not a bug, it's a feature for some; a very unsafe feature!! My first Mac and iPhone, because I thought they would be a little safer than others... wrong again! Good day sir + smart son!
@@Howtheheckarehandleswit I don't fully understand it, but the video does mention that not every operation uses constant time programming, so those probably still use the cache
I'm thinking that this is three or more orders of magnitude below where my main concerns lie. I'm still trying to figure out how to bypass the ads on YouTube.
User: I tried holding the rabbit ears different ways, then I shook it and tapped on it, but what really worked was when I crushed it with a hammer. No more bugs. Apple dweeb: Oh yea, we've heard that from some other loyal users too.
@@inverlock Correction... the NSA has not mentioned it, so the *assumption* is "they don't". There is no proof of that, but there is also no proof they don't, either.
If all that's required to access the side channel is for the listener to be running on the same computer, how does that require physical access? All someone would have to do in the hardest case is convince the user to install the listening program.
It doesn't. This is a catastrophic vulnerability and it is wildly irresponsible to say "rest assured". You should not rest remotely assured. Any executable running on your machine can steal cryptographic keys from any other process. Download a game from Steam? Screwed. Download a tool? Screwed. Does anything on your PC autoupdate? Screwed.
the attack itself is "easy", the harder part is getting that info back to the attacker, which in that case would require the same sort of malware as usual anyway
*if the entire cache adheres. That's why some parts do and some don't. The fact that they didn't do it at the point where they should have is the reason for this vulnerability
So, can we have a tool allowing the backing up of the keys from the M1, M2 and M3's before something goes wrong so the flash data can be decrypted in case of recovery? Nice...
This is literally Apple's version of the Death Star's exhaust port. To me, this seems far worse than is casually suggested - your cryptographic keys could be leaked by a simple process executed locally.
@@Skullet it's slightly different, though very similar. Slightly more exploitable on Apples due to the easier spoofing. Still pretty impractical in use, though.
This hugely underplays the impact of this bug, in my opinion. In plain language, it means we're back to any kind of installed program that contains suitable malware (however it chooses to fetch and execute it) being able to compromise everything on your machine and, by virtue of that, providing a gateway to the network you are on. How hard is it to get people to install malware? Not very. How uncommon is it for organised criminals/state actors to release useful free software whose real purpose is to gain machine access? Not very.
It's 'funny' to me when companies (e.g., Apple) shrug and say there's nothing to worry about because you have to have physical possession of the machine in order to do the hack. Except all you need is to be able to run software on the machine, which can be done remotely from anywhere in the world. This reminds me of a time (surely patched by now, though it had been unpatched for years before I learned about it; I've been on Linux for decades) that Windows had a process running on the desktop as local admin that you could, nevertheless, simply send key commands to as if you were an admin operating the UI. They (Microsoft) also said you had to have local access in order to exploit it, and, once again, they ignored the fact that anyone with a remote desktop session on the machine would have exactly that access. Yes, there are plenty of hacks that require actual physical access to the hardware (if someone nefarious can physically touch your machine, it's not yours any longer!), but to claim anything hardware-based is immune from remote exploit shows either colossal ignorance of security or a willingness to bald-faced lie to their customer base. Knowing how many security experts are at Apple, I'm going with the latter.
The constant time programming mentioned at 4:55 needs to be emphasized that it only applies to cryptography functions which the paper itself does. A move operation by itself as stated in the video would not run in constant time mode. Thus constant time execution would apply to the example given at 1:50. The paper also presents a solution for Apple to resolve this issue as this exploit only occurs on one of the two processor types. It is not clear if some op codes can be altered to fix this on the main processor cores, though ironically it may very well end up slower on the bigger cores with the patch than the smaller ones. It should also be noted that this exploit also exists on the Intel side of things in at least the 13th generation (and 14th as it is the same as the 13th from a design perspective) of chips per the paper.
The prefetch optimisation is not available on M1 or M2 efficiency cores, and the M3 has the ability to disable the optimisation. So, while the research is worthy of great respect and a PhD grant or two, this is not the end of the world. Crypto code can be bound to efficiency cores on M1 and M2, and the optimisation can be disabled for anything that may leak a key on M3. When YouTube offered this video to me, my instant reaction was click-bait, reminiscent of the tech press reaction to the researchers' press release. But your description of the flaw was pretty clear.
This video was awesome besides the fact that the "if" on your shirt is the only non color-coded word on the shirt despite being the most commonly used.
Microarchitectural attacks are a very fascinating area of attacks. Unfortunately, the way they are presented in media is often very inaccurate and, frankly, contains a lot of straight-up factual errors. I get that it is hard to understand such vulnerabilities, especially without a background in computer science. But sometimes I wish the reporters would try to understand what they are reporting on before writing an article. In the past, people trying to actually understand how such vulnerabilities worked had to read through the paper and try to understand it. A very high barrier to entry for people interested in such topics. Luckily, researchers tend to put out more high-level (but factually accurate) descriptions for many vulnerabilities in recent years. And quite a few YouTube channels covering such attacks on a deeper level than mainstream media, whilst still being "noob friendly", have gained popularity. As a researcher (who is quite new to this field and to research in general), this leaves me very excited for the future, as more and more people interested in this field can find actual information and educate themselves.
Remember a few years ago the Bit-Banger attack on Androids exploiting RAM? Chip design in the future will have to include protection against these attacks.
A couple of things. If a bad guy has physical access to your machine, it doesn't matter what kind of machine it is, or whether it has some newly discovered vulnerability; you've already lost. Second, it sounds to me like there are vulnerabilities in any digital password system, just as there are vulnerabilities in any lock. A skilled picker is going to get in; it's just a matter of time, and the amount of time needed is directly proportional to the skill times the determination. If it's a determined attacker and the information is highly valuable, then computer hacking skills may very well fall by the wayside to actual physical hacking skills, as in hacking off fingers one at a time till the owner coughs up the password. When deciding how to invest in security, it's wise to assess ALL the factors.
Which is wrong. Just because you can read the source doesn't mean they'll let you use it. Seriously, I hated the shirt so much that I stopped the video after reading it. (I'm watching it again for the first time now.)
@@eksortso Way to deliberately misunderstand a sentence. It means 'as opposed to secret, commercial, closed-source projects like Windows'. It obviously doesn't mean you get to use reverse-engineered code as if it's FOSS. Gah!
Welcome to the wonderful world of Arm's weak memory model and insanely aggressive runtime optimizations. It's a blessing and a curse. For those who write highly optimized low-level multicore code, getting burned by an obnoxiously hard-to-reproduce bug at runtime because a memory fence wasn't done exactly correctly is a whole new level of software engineering shenanigans. The one thing the x86 architecture has going for it, in this regard, is its strong memory model: memory operations mostly become visible in program order, so code tends to execute the way it's written.
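As a minimal illustration of the fence problem the comment describes, here's a C11 sketch (names are hypothetical, and this is portable C11 atomics rather than Arm-specific code): on a weakly ordered core, the plain store to `data` may become visible after the flag unless release/acquire ordering ties them together.

```c
#include <stdatomic.h>

/* Hypothetical producer/consumer sketch. Without the release/acquire
   pair, a weakly ordered CPU (like Arm) may let the consumer observe
   ready == 1 before the store to data is visible. */
int data;
atomic_int ready;

void producer(void) {
    data = 42;
    /* release: all prior writes are visible before ready flips */
    atomic_store_explicit(&ready, 1, memory_order_release);
}

int consumer(void) {
    /* acquire: pairs with the release store above */
    if (atomic_load_explicit(&ready, memory_order_acquire))
        return data; /* guaranteed to see 42 */
    return -1;       /* flag not set yet */
}
```

On x86 the plain version usually happens to work because of the stronger hardware ordering, which is exactly why these bugs surface only when code is ported to Arm.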
This is retro computing at its peak. Disasters I had long forgotten about, brought back to be enjoyed by a younger audience. So nostalgic, so nice of them to shelter bugs that were about to go extinct, those poor bugsies.
So you forgot about Spectre and Meltdown? Ever heard of Stuxnet? If not, check out how America might have (they never claimed guilt) stifled Iran's nuclear program for years.
Another example (if possibly an apocryphal one) is that supposedly US government buildings will have spikes in nighttime pizza orders in the leadup to military operations since the people overseeing it will stay up waiting for updates.
Anyone else have to read this a couple of times to understand that there weren’t physical spikes being put in pizzas, just an increase in number of deliveries?
Don’t get me wrong this is technically a nice hack. But the fact that you already need to run on the target machine kind of defeats the purpose. By then a smart hacker can do whatever he wants anyways. It’s the same with the CBC cipher attack. Yes, you can read keys. But you can do that anyways when you’re on the same machine. The OS can’t really protect you from getting hacked locally.
The sentiment that people exist on this earth that know this much resonates. I am still blown away that people figured out how to program IRC into a SNES by just pressing the buttons on a controller programmatically to manipulate the runtime of the SNES itself. I feel dumb.
I remember first hearing it in 2010 for the _TRON: Legacy_ ARG. It was kind of implied that you trying to find Flynn by participating in one let CLU find out about the outside world.
I'm going to hazard a guess that if its in the M-series chips its also in the A-series chips that were developed right at the start of the Apple Silicon push. There's a lot of shared infrastructure there.
What's funny is that the fix for all of these side channel attacks is extremely simple; it's just that nobody thought about it before: a simple CPU flag to enable/disable constant time operations on a per-operation basis. That's it. If the CPU had this, then in code you could target the very specific code where it isn't safe to optimize the cache, like cryptographic functions, but everything else could run fully optimized. If the CPU can mode-switch fast enough, it also enables less secure but still potentially effective mitigations, like randomly slowing down 5% of cache hits in an operation where you need 100% accuracy to work properly, like encryption/decryption.
That is exactly what I was about to comment. If you add in enough randomness that statistical tests can't tell you anything about the likelihood that thisTime corresponds to a multiply and thatTime corresponds to an add, then you've effectively solved this time-to-compute vulnerability. Also, I've heard that that bit exists. It's called a chicken bit, because it's a bit you purposely turn on in order to avoid something else (in this case, avoiding faster execution and therefore avoiding the vulnerability). I read about it on Sophos's website, I think.
It doesn't need physical access to your computer. "Like other attacks of this kind, the setup requires that the victim and attacker have two different processes co-located on the same machine and on the same CPU cluster. Specifically, the threat actor could lure a target into downloading a malicious app that exploits GoFetch." - The Hacker News
If your cache doesn't speed up your processor, you shouldn't have cache. It's such a difficult and frustrating thing to navigate, because the simple solution is just crap.
Cache isn't about speeding up the processor... It's about speeding up memory access, which speeds up code execution. The code itself is what determines how much of an effect cache has, not the processor.
I think the point of confusion here stems from when it was said that "most processes adhere to constant-time programming." This perhaps makes it sound like they removed most of the cache hit-vs.-miss variability, but I think that in actual fact they just documented which opcodes exhibit it and put in the processor's instruction manual "use these only if you're willing to admit timing variability into your process." In terms of actual use, almost every program *will* use these almost as much as it did before, but e.g. encryption programs won't. Anyway, that's my takeaway after puzzling about this same issue. Someone who actually knows should weigh in.
Yup, I am aware! Constant time programming is a software technique, not a hardware technique. I found it to be slightly misleading how it was mentioned in-context in the video. While secure software can be written using constant time programming techniques, that can't be used to mitigate this issue on the hardware side, since it would involve also mitigating the effectiveness of cache and the CPU would have to wait around for memory all the time. (Or do something like speculative execution, which can also run into this issue.) The multiple levels at which security needs to be analyzed is why FIPS certification is so stringent even about the operating system and hardware a software package runs on.
@@danielbriggs991 I don't know if you are correct. But I know what you said is far more reasonable than the other statements made here. And you even qualified your answer. You are a refreshing change from normal commenting behavior.
I was about to comment the same thing about how the explanation of constant time programming was very misleading. You can easily Google this to see that @cinderwolf32 is right. The paper even explicitly mentions that the programs it exploits use constant time programming techniques
"Like" and "similar" kinda mean the same thing, so when you started by saying your work was not doing this type of thing, I was confused. But the show is great... Love Low Level Learning... at my level.
@@xE92vD It can be prevented by simply disabling DMP. This will cost some performance. Fun Fact: You can't "adjust the CPU's internal hardware" after the hardware has been delivered. So Apple will have to rely on software to fix this.
I would be a little surprised if any particular key exists long enough and is used to process enough data fast enough (i.e. being resident in cache, not just an SSL network stream stuck in memory) for this attack to be practically executable in a real world situation. The information is statistically recovered so throwing more processing power at the attacking end doesn't help at all. I'd also be surprised if this attack isn't equally effective against almost every other processor made in the past 15(ish) years. Just for people who missed it, this reminds me of simple but still effective attacks where you can recover all typed information from an ssh session just from the timing of the packets being sent.
I love side channel attacks, they are always so interesting and ingenious. Sometimes they can literally look like science fiction like the acoustic or electromagnetic ones.
That’s right, but what he meant was, you must have access to run code on the machine first. This particular vulnerability doesn’t give you remote access.
@@cattleco131 but why is everyone playing that down? That's literally just a bad email attachment away. And the malicious code run locally will likely give remote access
Doing some research for implementing a login process for a web application, I read that you should compare the entire string and then return if true or false, so the amount of time to check is always the same
That is a good point, but make sure the functions you use are ALSO constant-time functions. I.e., the code that YOU write may be constant-time, but the implementations of the libraries you use are likely optimized for speed and therefore are likely NOT optimized for security.
Just one correction: the things we actually want are the "maybe" addresses themselves. On a cache miss, the information about which address was accessed becomes observable, and that is the core design flaw. Since the addresses are now visible, we can try to "train" the speculative execution engine to prefetch for many signatures of data that look like addresses, and when it does, we can look at the fetch information to see what the address was (which is actually data in the cache); we don't really care about what is being fetched from RAM.
I've been wondering how they made such a fast and efficient CPU. They just skipped safeguards that other vendors have had to deal with since Meltdown and Spectre were discovered.
This was a great explanation which helped clear my doubts around how this was different from Spectre and Meltdown attacks on Intel chips from the past. Thank you. Just subscribed, hoping to learn and appreciate the world of IT a lot more with you.
Every CPU has some sort of this vulnerability, it's not just Apple. And the frustrating part of this is it is very hard to fix without affecting performance!
as someone who owns apple hardware and works in IT....shrug. we live in a time where there are vastly more trying to find the exploits than work on the design teams, so there will only be more and more of this. I've patched millions of CVEs in my job, but John and Lisa in marketing are the ones that have gotten us hit with something.
It's amazing how far you can get with a phishing email. Re: the Apple silicon bug... meh. So you've gotta have physical access to the machine, and want to peer into another process' secrets. I'm not sure if that's another Spectre/Meltdown, exactly, but if it is, this is far less of a big deal, because the point of those vulns was that they existed on processors that power the vast majority of the world's cloud computing servers, meaning there's a real chance you're sharing the machine with someone else. Very, very, very few Macs host cloud servers.
@@lenerdenator it's slightly worse than Spectre, or easier, I guess. I don't like people using the word 'physical' for access. It requires local installation, like most malware. And once you have that malware, you can bet that information is no longer confined to your local device (unless your firewall settings are nice and intrusive). This could be a huge problem... I doubt it will be, though.
My first thought when clicking this video is flashbacks of sorting through unsolicited bug bounty emails at work. So this was nice to not hate watching.
The way this side channel vulnerability takes advantage of the difference between operation speed in branch prediction, reminds me of a bug mentioned in EVE Online lore. There is a way to use a ship equipment module called a data analyzer to gain information regarding when a player owned space station becomes vulnerable to being attacked and destroyed by other players. The description of this module mentions branch prediction vulnerabilities in something called a recursive computing module, which basically is the Eve Online version of a CPU for a space station.
In this specific case, the idea of constant time programming applies to the implementation of the encryption algorithms, not to the CPU runtime of the instructions. The underlying issue is that the data the implementation uses can be interpreted by the CPU as memory addresses, so via the side-channel attack, another process can know that the implementation (at some point) produced some data that has that specific shape. The proposed solution in the paper, and in other CPUs that have the same optimisations, is adding an instruction that prevents this behaviour, though in my (highly subjective) opinion something like CHERI would be a better solution.
Quite sure you need to be logged in already to run the script in a terminal to do any exploitation. If someone has my computer and is logged in, then I'd be less worried about this vulnerability and more about how they got my computer and managed to log in.
I mean, the golden rule of cybersecurity is to not leave your device in public places or plug any strange IO into your computer. Even if this vulnerability didn't exist, there are a plethora of other vulnerabilities in every computer that can be exploited to log in; nothing is unhackable. Another tenet of cybersecurity: security is constrained by practicality and circumstance. This vulnerability probably won't mean that much to the general public, just like the Spectre or Meltdown bugs, because they're so complex that the average person isn't going to go through that effort when it's much easier to use social engineering (e.g. recording you typing in your password). The only entities that might use this exploit would likely be government agencies like the FBI, but if they're after you then you have bigger things to worry about.
I have zero technical knowledge and happened to stumble upon this video out of sheer curiosity. There seems to be some questions of ethics regarding unified processes regarding the nature of efficiencies and privacy. The attack also seems novel due to the ability to derive information from the CPU through natural leakages and use this information to build the identity of the CPU such that privacy is violated. It seems like a 'low level' existential attack! Interesting af video. Thanks for uploading!
Why does this need physical access to a machine, surely this is the same class as Spectre and Meltdown in that you just need to be able to execute code on the target machine, right?
Doesn't it invalidate the utility of the cache for an operation to take the same amount of time whether or not it misses? What is the performance cost of requiring operations to run in constant time?
Yes, but I imagine he means that cryptographic operations are implemented in such a way that a cache hit or miss does not affect the running time of the algorithm. I do not think the cache does or should know anything about what it's being used for. I wish the video had been clearer on that; I think it was so confusing because he was talking about the chip performing encryption, which is something that is commonly done not in software but in hardware (for the most popular algorithms).
I'm torn. Research like this is interesting and totally valid, but in the end, just causes harm. Attackers will never use this bug for the same reason other sidechannel attacks like Meltdown and Spectre will never be used in the wild: there are 1000 easier ways for an attacker to achieve their goal. They just don't need bugs like these to achieve their goals. So the outcome: performance is reduced in exchange for no reduction in risk.
I'm such a noob at programming, but I love watching your videos. Almost everything you say goes over my head and you legit seem like a wizard to me. So the fact that this flies even over your head, I can't even fathom it. I am usually inspired to get better when I see better programmers than me, but sometimes when I see people who are such badasses on a whole other level, it kind of demotivates me because it doesn't seem like it's possible for me to ever get even half as good. Anyways, this bug sounds insane!
Know that it is possible! You've just got to keep going. I mean he even says, he doesn't understand all of it! No one finds it easy and everyone has been exactly where you have been with programming. Just keep going and focus on the fundamentals
I have a master's in CmpE from a top-5 university. My senior seminar prof's parting wisdom was to always be suspicious of cache, regardless of whether you're validating it or producing something to be sold on the market. Sure, this is a novel attack, but there are plenty that are barely hidden and definitely not accidents. There's a reason why most MS CmpE grads get an S or TS/SCI. A lot of modern spycraft happens when you design computer hardware and understand tricks you can do with the already clever designs.
So for password hash checks, have a pass/fail flag and a dummy flag, and set them both to true. Loop through the entire hash, checking it against the hash of whatever was entered. For each match, clear the dummy, and for any mismatch clear the pass/fail flag. Code these two paths so that they take the exact same amount of time. Always check the entire hash, even if the first character of the hash fails the check. Return the pass/fail flag. The result is that password checks always take the same amount of time, no matter how closely or badly the password matches.
I've noticed in the past (Windows 7) that if I type my password in *nearly* correct, it's more likely to take a while to process before it tells me it's wrong. If I enter it completely wrong, it almost always tells me right away.
That prefetch sounds really clever, but damn, how do you not notice that it would be a treasure trove for cache-timing side-channel attacks?! Unless you're using some kind of fancy provenance-based pointer validation setup like ARM has, it would be way too easy to forge pointers and wreak havoc in exactly this way.
That's still not enough, you can find out how long the expected password is by trying out different lengths first, and then pad out your attempts every time, which will still be linear complexity
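One common way around the length leak raised above is to compare fixed-length digests rather than the raw strings: hash the entered password first, then scan the entire stored digest branchlessly. A C sketch of the flag scheme described earlier in the thread (HASH_LEN and the names are illustrative, and the hashing step itself is assumed to happen before the call):

```c
#include <stddef.h>
#include <stdint.h>

#define HASH_LEN 32 /* e.g. the size of a SHA-256 digest */

/* Hypothetical sketch: both arguments are fixed-length digests, so
   nothing about the password's length is exposed, and the whole
   digest is always scanned regardless of where a mismatch occurs. */
int check_hash(const uint8_t stored[HASH_LEN],
               const uint8_t entered[HASH_LEN]) {
    uint8_t fail = 0; /* stays 0 only on a full match */
    for (size_t i = 0; i < HASH_LEN; i++)
        fail |= (uint8_t)(stored[i] ^ entered[i]);
    return fail == 0;
}
```

Because the digest length is constant no matter what the user typed, padding out attempts by length tells the attacker nothing.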
Come learn C so this doesn't happen again at lowlevel.academy (there's a SALE)
should I buy a new mac?
C, the notorious bug killer
Fix your website first
@@negativeseven It's the safest language out there
@@oniimaxxxx6479 🤣🤣
always terrible when researchers accidentally stumble upon your NSA backdoor :(
As they say, when one door closes, another one opens...
@@isbestlizard ... And hit your balls ...
NSA be like "Damn, we've been outed."
ISIS and others be like "Time to switch to Lenovo."
China be like "Sounds good. More data for us."
It doesn't close though, since it's unfixable.
Nah. The NSA prefers its math-nerd probabilistic cracks. Nobody, but NOBODY, at the NSA is interested in or capable of doing field work. If this fault is only locally accessible, you can count the NSA out of the running to take advantage of it.
There are only two hard things in computer science: cache invalidation and naming things.
I like this variant: The two hardest things in computer science: naming things, cache invalidation, and off by one errors
@@Jeremy-rg9ug And cache invalidation
That's just a 0 indexed array
conditions and race
@@cinderwolf32 I think we forgot cache invalidation.
0:15 "It is unpatchable unless you literally go to the store and get different CPU..."
If only it can be a thing with Apple
"Oh, you want a safe CPU, without any (found) vulnerability? That's gonna cost you $2000 for a Mac, thanks!"
Unpathcable?
@@no_name4796 Nah, Apple would try to spin it like it's a good thing: "In accordance with the fact that we truly care about our customers, we've generously decided to offer separate replacement CPUs for the price of a whole new PC, just for this occasion. (Official replacement procedure costs 7 grand extra.)"
@@Eutropios Meant to be unpatchable
@@Eutropios thanks for noticing my typo (:
Computer architecture is so much more complicated than most people realize, even many programmers. I remember in college learning about concepts like superscalar pipelining and microcode for the first time. It still feels like there was always something new and complex to learn about computers.
So true. And most of the time, they are not important until they are.
I was so proud of building my first CPU and ISA in an EE class I was taking in the 90s. Then I learned about superscalar pipelining and microcode, and realized I knew nothing :P. My little ISA and CPU is an Arduino at best :P Still though, I love this job. It helps to understand how things work under the hood when writing code.
@@delarosomccay Yes, it helps to understand how things work under the hood to some extent, but modern CPUs are mindblowingly complex. For programming it helps to grasp the bigger picture of it, but the details are honestly way above my skill and paygrade to understand.
@@jbird4478 Back in the '80s, I was able to understand in detail how every part of my computer worked. In the '90s the details started to become challenging. Anything made this century I have no chance; it's hard enough just at a conceptual level.
Yeah tbf the levels of abstraction are wild
The CIA must be really broken up about this getting discovered.
how do you know what they use
@@phillippereira6468 apple patched it pretty quickly with the m2, right? Seems reasonable that it was discovered and addressed in private in the interest of national security
Apple is trash anyways
@@andrewferguson6901 it’s not patched, I don’t know where you’re getting that information
@@andrewferguson6901 maybe it was placed there to begin with "in the interest of national security"
One of the guys who discovered Meltdown/Spectre is my Prof at University
Niiiiiice
TUG student spotted
@@maximilianstallinger735 😂
you're at CISPA or Graz? xD
Woah!
This reminds me of that one time on the oldest anarchy server in Minecraft when some nerds found out you could punch a block anywhere in the world to 1) see if that chunk is loaded, and 2) see what type of block it is. Well turns out by comparing what chunks are loaded and when against when players log in and out, you're able to figure out which group of chunks is from what player, and track everybody on the server in real time. Then through a long series of punches in those areas, you're able to reconstruct an entire base block for block.
Getting all the memory of a process by listening closely to see how long each operation takes reminded me a lot of that.
is there a video about it?
@@pawek02 yep there's a good documentary about it but forgot the title and channel
@@keent it's called FitMC.
Just say 2b2t, everybody knows it, it's not a secret
@@iwolfman37 The meme is that everyone refers to 2b2t as "the oldest anarchy server in Minecraft"
"...constant time programming..." Decades ago, there was a game (something with ghosts in a graveyard?) for the Z-80 based TRS-80. The game was designed so that an AM radio, placed next to the computer, would pick up RFI in the form of the game's musical soundtrack. Yes, the programmer(s) embedded music into the RFI based on intentionally non-constant-time programming.
I remember those days.
I used the TRS-80 when I was 13. My middle school bought a few of them and would let students sign up to take them home over the weekend. The ROM had a BASIC interpreter and supported a cassette tape player for mass storage. I never tried to do assembly code for the thing, but I understood that the games I played were written in assembly/machine code.
In the description: Reeking implies that it smells, wreaking is the word you were looking for.
it does reek cause this is some bullshit
Yep. I think he might have confused 'wreak' with the figurative use of 'reek' (ie. ”This reeks of [a bad thing that might not smell in the literal sense]”).
maybe he meant its a stinky bug
If enough people keep making this error they'll just change the dictionary. Sad.
It does reek because this code smells
To me the thought that people actually even know how the cpu works is unfathomable, but then there's people who want to abuse it that know even more.
You need people who know how to make them to begin with
Often the people who find CPU vulnerabilities are people who design them
Most CPU stuff isn't really that hard. Really, the only hard thing about CPUs is the fact that they're all made to use x86 these days, and x86 specifically is RIDICULOUSLY overcomplicated
@@zackbuildit88 maybe for something like RISC V, but for any mainstream ISA including ARM it’s just absurd. Jim Keller’s own words - the ARM instruction set is just unfathomably complex.
I mean, ask any random Joe on the street how a CPU works and 99% of them won’t give a sufficient answer.
Knowing stuff is cool, but being able to explain it to others is the real talent. You managed that on a very complicated topic. Very impressive. Very well done.
But if he didn't understand what was talked about either, it's just nonsense. Perhaps it's more "I know, but I'll let you read it yourself, because if I explain it, it will go over everyone's head"...
...or it could just be that he has no idea :)
At 4:16 you said you would link the paper you are referencing, but I cannot see the URL; I guess you forgot it. Could you please say which paper it is?
Fixed sorry
@@LowLevelTV thanks sir
Someone at my university was working on a side channel attack that would measure the fluctuations on a power rail of the processor and use that to eliminate possible attempts at cryptographic keys. Wild stuff.
Power based side channel attacks are really "common", it's part of why secure programs sometimes use branchless programming so you can't correlate power draw and process state as easily.
It's quite common. Over the air power analysis is also one way
Side channel attacks are cray cray
that's interesting, I thought uni was only for people with nose rings, neon-colored hair and 'right opinions' on gender and social justice
@illegalsmirf Those you mention are in the humanities; this is the exact sciences department
I feel like the description given of constant time programming is missing something. If memory accesses are forced to take exactly the same amount of time no matter what, then surely the cache would be removed entirely, since even if something was in the cache, the CPU would have to wait as long as it WOULD have taken to get it from anyway to ensure the operation is constant time, no?
That was my thought too. If the cpu needs to pretend the cache hit took as long as a memory fetch then why bother in the first place
Description was incorrect. I asked my son the same question. It is _much_ more complicated... It involves the malicious code predicting cache locations used, filling those with own values, then seeing if overwritten by using the access timing, and probably much more that he left out or I could not absorb...
@ericelfner That describes the problem that constant time programming would solve: if cache hits and misses take the same amount of time, then you can't get any information out of dumping part of the cache and then timing other processes to see if they access it. My question is, if constant time programming makes them take the same amount of time, why have a cache at all? My guess is that there's probably some specialized set of constant time operations that are very slow but can be used with extremely sensitive data, in addition to the normal variable time operations, but that wasn't explained in the video, which I think it should have been
@@ericelfner Yes!! It also bleeds into any iPhone nearby. It's a big problem, as it destroys your logic board by writing to the SSD so much. I have a 2021 iMac M1 that has had 3 logic board replacements (they paid $700/board). Also, it's not a bug, it's a feature for some, a very unsafe feature!! My first Mac and iPhone, because I thought they would be a little safer than others... wrong again! Good day sir + smart son!
@@Howtheheckarehandleswit I don't fully understand it, but the video does mention that not every operation uses constant time programming, so those probably still use the cache
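On the question above of why bother having a cache at all: in practice, constant-time discipline is applied selectively, in software, to the secret-handling code, while everything else keeps the fast cache. One classic trick is to avoid secret-dependent memory addresses entirely: instead of indexing a table with a secret value, scan the whole table and select the wanted entry with a mask. A toy sketch (not from any particular library):

```c
#include <stddef.h>
#include <stdint.h>

/* Constant-time table lookup: touch every entry regardless of the
 * secret index, so the cache state reveals nothing about which entry
 * was wanted. The mask is all-ones only when i == secret_idx. */
uint8_t ct_table_lookup(const uint8_t *table, size_t n, size_t secret_idx) {
    uint8_t result = 0;
    for (size_t i = 0; i < n; i++) {
        uint8_t mask = (uint8_t)(0 - (uint8_t)(i == secret_idx));
        result |= table[i] & mask;  /* selects exactly one entry */
    }
    return result;
}
```

The irony highlighted by the GoFetch paper is that the M1's data memory-dependent prefetcher can undermine even code like this, because it reacts to the data values themselves looking like pointers, not just to the access pattern.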
As an Apple developer I would like to state that there is much more than one unfixable bug on Apple computers.
As an Apple user I would like to state that is very obvious.
What you mean Apple hardware isn't perfect??? 😂😂😂
ok nerd
Y’all do some weird stuff, I reverse engineer macOS on a regular basis for fun
Some of the stuff you guys do is odd to say the least
@@nathantaylor2026 said the guy who reverse engineers macOS for fun
I'm thinking that this is three or more orders of magnitude below where my main concerns lie. I'm still trying to figure out how to bypass the ads on YouTube.
Brave browser works seamlessly on mobile and on windows.
uBlock
I haven't had them for years 🤷
Adblock lmao, and Revanced for Android
Apple's response: People need to learn to hold their CPU the right way.
Tracks
Exactly‼🤣🤣😂😂
🤣🤣🤣 Classic…
User: I tried holding the rabbit ears different ways, then I shook it and tapped on it, but what really worked was when I crushed it with a hammer. No more bugs.
Apple dweeb: Oh yea, we've heard that from some other loyal users too.
Aww. We miss ol’ Steve and his BS. Lol
Remember, it's not a bug, it's a feature.
An NSA feature
Just about everything is. Or maybe people are in denial..... You choose.
NSA doesn’t need this LOL
@@inverlock Correction... the NSA has not mentioned it, so the *assumption* is "they don't". There is no proof that's true, but there is also no proof they don't either.
Just looked it up, looks like the iPad Pro 12.9-inch and iPad Air use the M1 chip. Now we're one step closer to jailbreaking them!
if it is a requirement that the timing of a cache hit is the same as a cache miss, the cache has no effect and can be skipped
Last time I was this early I didn’t have kids
Congratulations?
Do you have kids now?
You are the fastest man alive sir.
Bro is flexing his one pump chump pull out game. 🎉😂
Are the kids now grown up enough to get their own home?
If all that's required to access the side channel is for the listener to be running on the same computer, how does that require physical access? All someone would have to do in the hardest case is convince the user to install the listening program.
This, it seems this attack doesn't need physical access? can someone confirm?
This sounds scary, allowing existing apps to be upgraded automatically that would listen in via the macOS caching API + timers/custom cleverness.
It doesn't. This is a catastrophic vulnerability and it is wildly irresponsible to say "rest assured".
You should not rest remotely assured. Any executable running on your machine can steal cryptographic keys from any other process. Download a game from Steam? Screwed. Download a tool? Screwed. Does anything on your PC autoupdate? Screwed.
the attack itself is "easy", the harder part is getting that info back to the attacker, which in that case would require the same sort of malware as usual anyway
@@DDracee all corporations are saints with honor
tbh if cache adheres to the constant time programming rule, then it's better not to have cache
Yeah what I was thinking 😂 but shh their marketing overlords would have a Meltdown (hehe) over this...
But what about the backdoor then?
*if the entire cache adheres. That's why some parts do and some don't. The fact they didn't do it at the point where they should have is the reason for this vulnerability
No cache would be the single biggest performance drop in computing ever.
So, can we have a tool allowing the backing up of the keys from the M1, M2 and M3's before something goes wrong so the flash data can be decrypted in case of recovery?
Nice...
This is literally Apple's version of the Death Star's exhaust port.
To me, this seems far worse than is casually suggested - your cryptographic keys could be leaked by a simple process executed locally.
Not just Apple, this also affects 13th gen Intel CPUs.
@@Skullet It's slightly different, though very similar. Slightly more exploitable on Apple's chips due to the easier spoofing.
Still pretty impractical in use though
This hugely underplays the impact of this bug, in my opinion. In plain language, it means that we're back to any kind of installed program that contains suitable malware (however it chooses to fetch and execute it) being able to compromise everything on your machine, and by virtue of that, providing a gateway to the network you are on. How hard is it to get people to install malware? Not very. How difficult or uncommon is it for organised criminals/state actors to release useful free software whose real purpose is to gain machine access? Not very.
typo in description. *wreaking havoc. "reeking" means smelling like something.
smells like a bunch of bad apples.
@@JasonKaler Smells like BoeingMAX.
Well, I guess soldering everything to the board finally backfired.
It's 'funny' to me when companies (e.g., Apple) shrug and say there's nothing to worry about, because you supposedly have to have physical possession of the machine in order to do the hack. Except all you need is to be able to run software on the machine, which can be done remotely from anywhere in the world. This reminds me of a time (surely patched by now, though it had been unpatched for years before I learned about it; I've been on Linux for decades) when Windows had a process running on the desktop as local admin, and you could, nevertheless, simply send it key commands as if you were the admin operating the UI. They (Microsoft) also said you had to have local access in order to exploit it, and, once again, they ignored that anyone with a remote desktop session on the machine would have access to exactly that.
Yes, there are plenty of hacks that require actual physical access to the hardware (if someone nefarious can physically touch your machine, it's not yours any longer!), but to claim anything hardware based is immune from remote exploit shows either colossal ignorance of security, or a willingness to bald-faced lie to their customer base. Knowing how many security experts are at Apple, I'm going with the latter.
I really dislike the wording they've used, and I can see people thinking the hackers need physical access to your device.
The constant time programming mentioned at 4:55 needs to be emphasized as applying only to cryptography functions, which the paper itself does. A move operation by itself, as stated in the video, would not run in constant-time mode. Thus constant-time execution would not apply to the example given at 1:50.
The paper also presents a solution for Apple to resolve this issue, as this exploit only occurs on one of the two processor types. It is not clear if some opcodes can be altered to fix this on the main processor cores, though ironically it may very well end up slower on the bigger cores with the patch than on the smaller ones.
It should also be noted that this exploit also exists on the Intel side of things in at least the 13th generation (and 14th as it is the same as the 13th from a design perspective) of chips per the paper.
that's an old school hack, you younglings crack me up
The prefetch optimisation is not available on M1 or M2 efficiency cores, and the M3 has the ability to disable the optimisation. So, while the research is worthy of great respect and a PhD grant or two, this is not the end of the world. Crypto code can be bound to efficiency cores on M1 and M2, and the optimisation can be disabled for anything that may leak keys on M3.
When YouTube offered this video to me, my instant reaction was clickbait, reminiscent of the tech press reaction to the researchers' press release. But your description of the flaw was pretty clear.
This video was awesome besides the fact that the "if" on your shirt is the only non color-coded word on the shirt despite being the most commonly used.
Somewhat analogous to being able to figure out what is being typed just by hearing the clacking of the typewriter keys…
Microarchitectural attacks are a very fascinating area of attacks. Unfortunately, the way they are presented in media is often very inaccurate and frankly, contains a lot of straight up factual errors.
I get that it is hard to understand such vulnerabilities, especially without a background in computer science. But sometimes I wish the reporters would try to understand what they are reporting on before writing an article.
In the past, people trying to actually understand how such vulnerabilities worked had to read through the paper and try to understand it.
A very high barrier of entry for people interested in such topics.
Luckily, researchers tend to put out more high-level (but factually accurate) descriptions for many vulnerabilities in recent years.
And quite a few YouTube channels covering such attacks on a deeper level than mainstream media, whilst still being "noob friendly", have gained popularity.
As a researcher (who is quite new to this field and to research in general), this leaves me very excited for the future, as more and more people interested in this field can find actual information and educate themselves.
Remember the Rowhammer bit-flipping attack on Androids a few years ago, exploiting RAM? Chip design in the future will have to include protection against these attacks.
A couple of things. First, if a bad guy has physical access to your machine, it doesn't matter what kind of machine it is, or whether it has some newly discovered vulnerability: you've already lost. Second, it sounds to me like there are vulnerabilities in any digital password system, just like there are vulnerabilities in any lock. A skilled picker is going to get in; it's just a matter of time, and the amount of time needed is directly proportional to skill times determination. If it's a determined attacker and the information is highly valuable, then computer hacking skills may very well fall by the wayside to actual physical hacking skills, as in hacking off fingers one at a time till the owner coughs up the password. When deciding how to invest in security, it's wise to assess ALL the factors.
As far as I've seen, this vulnerability doesn't require physical access.
I like how his shirt says "everything is open source if you can read assembly" 😂
well.. technically .. yes
Assembly is rarely the source code though
Which is wrong. Just because you can read the source doesn't mean they'll let you use it. Seriously, I hated the shirt so much that I stopped the video after reading it. (I'm watching it again for the first time now.)
@@eksortso Open source doesn't mean it's free to use either, it depends on the license.
@@eksortso Way to deliberately misunderstand a sentence. It means, 'as opposed to secret, commercial, closed-source projects like Windows'. It obviously doesn't mean you get to use reverse-engineered code as if it's FOSS. Gah!
Welcome to the wonderful world of Arm’s Weak Memory Model and insanely aggressive realtime optimizations. It’s a blessing and a curse.
For those that write highly optimized low level multicore code getting burned by an obnoxiously hard to reproduce bug at runtime because a memory fence wasn’t done exactly correct is a whole new level software engineering shenanigans.
The only thing the x86 architecture has going for it, in this regard, is the use of a Strong Memory Model. The order the code is written in is the order it gets executed.
This is retro computing at its peak.
Disasters I had long forgotten about, brought back to be enjoyed by a younger audience.
So nostalgic. So nice of them to shelter bugs that were about to go extinct, those poor bugsies.
So you forgot about Spectre and Meltdown?
Ever heard of Stuxnet? If not, check out how America might have (never claimed guilt) stifled Iran's nuclear program for years.
Another example (if possibly an apocryphal one) is that supposedly US government buildings will have spikes in nighttime pizza orders in the leadup to military operations since the people overseeing it will stay up waiting for updates.
Anyone else have to read this a couple of times to understand that there weren’t physical spikes being put in pizzas, just an increase in number of deliveries?
The Manhattan Project could have been compromised because all of a sudden, all these scientists were changing their mailing addresses...
I’ve burned my mouth on pizza before, but I’ve never encountered spikes in my pizza. It must be painful. 😂
this channel is by far one of my favorite channels on youtube! Good work, mate! you are awesome!
adding delays is also how they stopped crashes & ddos attacks online in the early days
Sadly, fixing this sounds like it will slow down cpus.
Linux has boot options to disable those fixes, but Apple doesn't. I recall my iMac actually slowed down after those Meltdown fixes went in.
Only apple cpu's, in this case.
@@brandonhoffman4712 It was an Intel iMac. I am not sure if the current M1/M2 CPUs have those mitigations.
Don’t get me wrong this is technically a nice hack. But the fact that you already need to run on the target machine kind of defeats the purpose. By then a smart hacker can do whatever he wants anyways. It’s the same with the CBC cipher attack. Yes, you can read keys. But you can do that anyways when you’re on the same machine. The OS can’t really protect you from getting hacked locally.
Always amazed at cache/TLB/memory exploits... a very deep rabbit hole to delve into
The sentiment that people exist on this earth that know this much resonates. I am still blown away that people figured out how to program IRC into a SNES by just pressing the buttons on a controller programmatically to manipulate the runtime of the SNES itself. I feel dumb.
heard the phrase "side-channel-attack" some two years ago during my undergrad. never googled it to find what it is, thanks for explaining :3
I remember first hearing it in 2010 for the _TRON: Legacy_ ARG. It was kind of implied that you trying to find Flynn by participating in one let CLU find out about the outside world.
Heard you in ThePrime yt clip. I love that he credited your Twitter but not your yt channel.
I'm going to hazard a guess that if it's in the M-series chips, it's also in the A-series chips that were developed right at the start of the Apple Silicon push. There's a lot of shared infrastructure there.
What's funny is the fix for all of these side channel attacks is extremely simple, it's just that nobody thought about it before: a simple CPU flag to enable/disable constant time operations on a per-operation basis.
That's it. If the CPU had this, then in code you could target very specific code where it wasn't safe to optimize the cache, like cryptographic functions, but everything else could run fully optimized.
If the CPU can mode-switch fast enough, it also enables less secure but still potentially effective solutions, like randomly slowing down 5% of cache hits on an operation where you need 100% accuracy to work properly, like encryption/decryption.
That is exactly what I was about to comment. If you add in enough randomness to where statistical tests can't tell you anything about the likelihood of thisTime corresponds to a multiply and thatTime corresponds to an add, then you've effectively solved this time-to-compute vulnerability.
Also, I've heard that that bit exists. It's called a Chicken Bit because it's a bit that you purposely turn on in order to avoid something else (in this case, avoiding faster execution and therefore avoiding the vulnerability). I read about it on Sophos's website I think.
Check out the other comments. These exact instructions do already exist on the latest processors.
Damn, this channel is amazing. It has so much cool stuff I didn't know.
Extremely interesting! And detailed, with a lot of pedagogy (lowering stuff to the level of the audience). Thanks.
…and by using a word no one has heard of simultaneously showing your audience they are lower.
Not low enough for me. Still working on
"2 + 2 = 4."
@@Tailspin80 Yes we are, at least me; watching this to learn, big time!
@@jimgardner5129 ASM...
It doesn't need physical access to your computer. "Like other attacks of this kind, the setup requires that the victim and attacker have two different processes co-located on the same machine and on the same CPU cluster. Specifically, the threat actor could lure a target into downloading a malicious app that exploits GoFetch." - The Hacker News
If your cache doesn't speed up your processor, you shouldn't have cache. It's such a difficult and frustrating thing to navigate cause the simple solution is just crap.
Cache isn't about speeding up the processor... It's about speeding up memory access, which speeds up code execution. The code itself is what determines how much of an effect cache has, not the processor.
I think the point of confusion here stems from when it was said that "most processes adhere to constant-time programming." This perhaps makes it sound like they removed most of the cache fetching-vs.-failing variability, but I think that in actual fact they just underscored which opcodes use it and put in the processor's instruction manual "use these only if you'd like to admit time variability in your process." In terms of actual use, almost every program *will* use these almost as much as it did before, but e.g. encryption programs won't.
Anyway, that's my takeaway after puzzling about this same issue. Someone who actually knows should weigh in.
Yup, I am aware! Constant time programming is a software technique, not a hardware technique. I found it to be slightly misleading how it was mentioned in-context in the video. While secure software can be written using constant time programming techniques, that can't be used to mitigate this issue on the hardware side, since it would involve also mitigating the effectiveness of cache and the CPU would have to wait around for memory all the time. (Or do something like speculative execution, which can also run into this issue.) The multiple levels at which security needs to be analyzed is why FIPS certification is so stringent even about the operating system and hardware a software package runs on.
@@danielbriggs991 I don't know if you are correct. But I know what you said is far more reasonable than the other statements made here. And you even qualified your answer. You are a refreshing change from normal commenting behavior.
I was about to comment the same thing about how the explanation of constant time programming was very misleading. You can easily Google this to see that @cinderwolf32 is right. The paper even explicitly mentions that the programs it exploits use constant time programming techniques
“Like” and “Similar” kinda mean the same thing, so when you started you said your work was not doing this type of thing. I was confused. But the show is great… Love low level learning… at my level.
It can probably be mitigated by a software update, just like Meltdown or Spectre. There is no way that Apple replaces all these devices.
From what I know, a process can just stop the DMP from reading its memory by setting a flag.
@@xE92vD It can be prevented by simply disabling DMP. This will cost some performance.
Fun Fact: You can't "adjust the CPU's internal hardware" after the hardware has been delivered. So Apple will have to rely on software to fix this.
@@xE92vD I wrote it probably can be mitigated.
Mitigated != patched, but it will prevent the vulnerability from being exploited.
@@oberpenneraffe with fuses you can remove functionality
@@oberpenneraffe No, this is only possible on M3 (the bit to disable the feature)
I would be a little surprised if any particular key exists long enough and is used to process enough data fast enough (i.e. being resident in cache, not just an SSL network stream stuck in memory) for this attack to be practically executable in a real world situation. The information is statistically recovered so throwing more processing power at the attacking end doesn't help at all. I'd also be surprised if this attack isn't equally effective against almost every other processor made in the past 15(ish) years.
Just for people who missed it, this reminds me of simple but still effective attacks where you can recover all typed information from an ssh session just from the timing of the packets being sent.
Your channel is really interesting!
I love side channel attacks, they are always so interesting and ingenious. Sometimes they can literally look like science fiction like the acoustic or electromagnetic ones.
From what I have seen, this is not a physical access attack.
That’s right, but what he meant was, you must have access to run code on the machine first. This particular vulnerability doesn’t give you remote access.
@@cattleco131 But why is everyone playing that down? That's literally just a bad email attachment. And the malicious code run locally will likely give remote access.
Doing some research for implementing a login process for a web application, I read that you should compare the entire string and then return if true or false, so the amount of time to check is always the same
That is a good point, but make sure the functions you use are ALSO constant-time functions. I.e., the code that YOU write may be constant-time, but the implementations of the libraries you use are likely optimized for speed and therefore are likely NOT optimized for security.
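That advice can be sketched like this (toy code assuming equal-length inputs, such as fixed-size hashes; the constant-time version has the same shape as OpenSSL's CRYPTO_memcmp or libsodium's sodium_memcmp):

```c
#include <stddef.h>

/* Early-exit comparison: returns at the first mismatch, so the
 * run time reveals how many leading bytes matched. */
int leaky_equal(const unsigned char *a, const unsigned char *b, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i]) return 0;
    return 1;
}

/* Constant-time comparison: always reads all n bytes and folds every
 * difference into one accumulator, so the run time does not depend on
 * where (or whether) the inputs differ. */
int ct_equal(const unsigned char *a, const unsigned char *b, size_t n) {
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}
```

Comparing fixed-length hashes rather than raw passwords also sidesteps the length-probing concern raised earlier in the comments, since every comparison covers the same number of bytes.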
I love episodes like this
Just one correction: the things we actually want are the "maybe" addresses themselves. Because on a cache miss, the information about which address was accessed becomes accessible; that is the core design flaw. Since the addresses are now visible, we can try to "train" the speculative execution engine to prefetch for many signatures of data that look like addresses, and when it does, we can look at the fetch information to see what the address was (which is actually data in the cache); we don't really care about what is being fetched from RAM.
Thanks for the quality information. Remember to stay hydrated ❤
Hey thank you for watching! Always hydrated lol. (there's water in coffee)
I've been wondering how they made such a fast and efficient CPU. They just skipped safeguards that other vendors have had to deal with since Meltdown and Spectre were discovered.
Explained a complex problem super super well.
Well done.
And now we need a super super Mario!
This was a great explanation which helped clear my doubts around how this was different from Spectre and Meltdown attacks on Intel chips from the past. Thank you. Just subscribed, hoping to learn and appreciate the world of IT a lot more with you.
😂😂😂Apple developers solving 5 hard Leetcode problems to ship a patch
You can either have fast or secure, pick one
Boy, Apple is just having more and more problems this week
Every CPU has some variant of this vulnerability; it's not just Apple. And the frustrating part is that it's very hard to fix without affecting performance!
@@tylerdurden9083 except in the case of other manufacturers, it's easier to apply fixes etc.
@@tylerdurden9083 Well, no. Many had this issue years ago. Right now it's Apple who put old, known bugs into their new architectures.
What you are explaining is exactly what I am going through, the RSA algorithm
as someone who owns apple hardware and works in IT... shrug. We live in a time where there are vastly more people trying to find exploits than there are working on the design teams, so there will only be more and more of this. I've patched millions of CVEs in my job, but John and Lisa in marketing are the ones that have gotten us hit with something.
It's amazing how far you can get with a phishing email.
Re: the Apple silicon bug... meh. So you've gotta have physical access to the machine, and want to peer into another process' secrets. I'm not sure if that's another Spectre/Meltdown, exactly, but if it is, this is far less of a big deal, because the point of those vulns was that they existed on processors that power the vast majority of the world's cloud computing servers, meaning there's a real chance you're sharing the machine with someone else. Very, very, very few Macs host cloud servers.
@@lenerdenator it's slightly worse than Spectre, or easier I guess. I don't like people using the word "physical" for access. It requires local installation, like most malware. And once you have that malware, you can bet that information is no longer confined to your local device (unless your firewall settings are nice and intrusive).
This could be a huge problem... I doubt it will be, though.
My first thought when clicking this video is flashbacks of sorting through unsolicited bug bounty emails at work. So this was nice to not hate watching.
The way this side channel vulnerability takes advantage of the difference between operation speed in branch prediction, reminds me of a bug mentioned in EVE Online lore.
There is a way to use a ship equipment module called a data analyzer to gain information regarding when a player owned space station becomes vulnerable to being attacked and destroyed by other players. The description of this module mentions branch prediction vulnerabilities in something called a recursive computing module, which basically is the Eve Online version of a CPU for a space station.
In this specific case, the idea of constant time programming applies to the implementation of the encryption algorithms, not to the CPU runtime of the instructions. The underlying issue is that the data the implementation uses can be interpreted by the CPU as memory addresses, so based on the side-channel attack, another process can learn that the implementation (at some point) produced data that has that specific shape.
The solution proposed in the paper, and already used by other CPUs with the same optimisations, is adding an instruction that disables this behaviour, though in my (highly subjective) opinion something like CHERI would be a better solution.
So, if someone steals your Apple Mn (1
I mean, if they can login to your account for whatever reasons, then you’ll have bigger problems.
Quite sure you need to be logged in already to run the script in a terminal to do any exploitation. If someone has my computer and is logged in, then I'd be less worried about this vulnerability, and more about how they got my computer and managed to log in.
I mean the golden rule of cybersecurity is to not leave your device in public places or to plug any strange IO into your computer. Even if this vulnerability didn't exist, there are a plethora of other vulnerabilities in every computer that can be exploited to log in; nothing is unhackable.
Another tenet of cybersecurity: security is constrained by practicality and circumstance. This vulnerability probably won't mean that much to the general public, much like the Spectre or Meltdown bugs, because it's so complex that the average person isn't going to go through that effort when it's much easier to use social engineering (e.g. recording you typing in your password). The only entities that might use this exploit are government agencies like the FBI, but if they're after you then you have bigger things to worry about.
No you have to have your computer unlocked
@TooSlowTube Bruh did you listen to the video at all? That's not at all what this vulnerability is.
I have zero technical knowledge and happened to stumble upon this video out of sheer curiosity. There seem to be some ethical questions around these unified processes and the trade-off between efficiency and privacy. The attack also seems novel in its ability to derive information from the CPU through natural leakage and use that information in a way that violates privacy. It seems like a "low level" existential attack! Interesting af video. Thanks for uploading!
Why does this need physical access to a machine, surely this is the same class as Spectre and Meltdown in that you just need to be able to execute code on the target machine, right?
Any code running locally can exploit this bug, that's my understanding.
LLL said access, not _physical_ access. I assume he meant access to the machine as in access to the processor, as in ability to execute code on it.
Silly question: If move operations are in constant time, regardless of cache hit or not, then what's the point of having a cache?
This was eye opening. Thanks
Doesn't it invalidate the utility of the cache for an operation to take the same amount of time whether or not it misses? What is the performance cost of requiring operations to run in constant time?
Yes, but I imagine he means that cryptographic operations are implemented in such a way that the cache hit or miss does not affect the running time of the algorithm. I do not think the cache does or should know anything about what its being used for.
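As a sketch of what "constant-time" means at the implementation level (a hypothetical helper, not Apple's or any library's actual code): instead of indexing a table directly with a secret value, a hardened implementation reads every entry and selects the wanted one with a mask, so the memory access pattern, and therefore which cache lines get touched, never depends on the secret:

```c
#include <stdint.h>

/* Constant-time table lookup: scans all 256 entries and keeps only
 * the one at secret_index, selected via a branch-free mask, so cache
 * timing reveals nothing about the index. */
uint8_t ct_lookup(const uint8_t table[256], uint8_t secret_index) {
    uint8_t result = 0;
    for (int i = 0; i < 256; i++) {
        uint32_t diff = (uint32_t)(i ^ secret_index);
        /* (diff - 1) >> 24 is 0xFF only when diff == 0, i.e. i == secret_index */
        uint8_t mask = (uint8_t)(((diff - 1u) >> 24) & 0xFFu);
        result |= (uint8_t)(table[i] & mask);
    }
    return result;
}
```

The cost is obvious: 256 reads instead of 1, which is exactly the performance trade-off the comments above are circling around, and why this style is reserved for secret-dependent data rather than applied everywhere.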
I wish the video was more clear on that, I think it was so confusing because he was talking about the chip performing encryption, which is something that is commonly done not in software, but in hardware (for the most popular algorithms)
M1 chip was released two years after Spectre and Meltdown...
I can imagine the design was probably finished already when these came out, the timelines for new chips are loooong.
wowww
so like, it isn't a bunch of tiny guys in there making the pictures show up on my screen?
The year of Asahi Linux??!? /silly
Based
I'm torn. Research like this is interesting and totally valid, but in the end it just causes harm. Attackers will never use this bug, for the same reason other side-channel attacks like Meltdown and Spectre are never used in the wild: there are 1000 easier ways for an attacker to achieve their goal.
They just don't need bugs like these to achieve their goals. So the outcome: performance is reduced in exchange for no reduction in risk.
Well, a local bug like this one is handy when you need to recover data from your own machine and have to fight the laptop's security to do it.
“Where there is a will, there is a way”
I'm such a noob at programming, but I love watching your videos. Almost everything you say goes over my head and you legit seem like a wizard to me. So the fact that this flies even over your head, I can't even fathom it. I am usually inspired to get better when I see better programmers than me, but sometimes when I see people who are such badasses on a whole other level, it kind of demotivates me because it doesn't seem like it's possible for me to ever get even half as good. Anyways, this bug sounds insane!
Know that it is possible! You've just got to keep going. I mean he even says, he doesn't understand all of it! No one finds it easy and everyone has been exactly where you have been with programming. Just keep going and focus on the fundamentals
I have a masters in CmpE from a top 5 university. My senior seminar prof's parting wisdom was to always be suspicious of cache, regardless of whether you're validating it or producing something to be sold on the market. Sure, this is a novel attack, but there are plenty that are barely hidden and definitely not accidents. There's a reason why most MS CmpE grads get an S or TS/SCI. A lot of modern spycraft happens when you design computer hardware and understand the tricks you can play with the already clever designs.
Cash is king after all.
Cache is king*
It was right there man
So for password hash checks, have a pass/fail flag and a dummy flag, and set them both to true. Loop through the entire stored hash, checking it against the hash of whatever was entered. For each match, clear the dummy, and for any mismatch clear the pass/fail flag. Code these two paths so that they take the exact same amount of time. Always check the entire hash, even if the first character fails the check. Return the pass/fail flag. The result is that password checks always take the same amount of time, no matter how closely or badly the password matches.
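A rough C sketch of that dummy-flag scheme (the function name, and the assumption that both hashes are the same length, are mine): both flags get written on every iteration, so the match and mismatch paths do the same amount of work.

```c
#include <stddef.h>
#include <stdint.h>

/* Scans the whole stored hash regardless of where mismatches occur.
 * dummy is cleared on a match, pass is cleared on a mismatch; both
 * updates are branch-free so each byte costs the same time. */
int check_hash(const uint8_t *stored, const uint8_t *entered, size_t len) {
    int pass = 1;   /* cleared by any mismatch */
    int dummy = 1;  /* cleared by any match; exists to balance the work */
    for (size_t i = 0; i < len; i++) {
        int match = (stored[i] == entered[i]);
        dummy &= !match;
        pass  &= match;
    }
    (void)dummy;    /* result of the balancing work is discarded */
    return pass;
}
```

In practice you would compare fixed-length hash digests (so lengths always match) and, as noted elsewhere in this thread, reach for an audited constant-time primitive rather than hand-rolling one.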
Really incredible security research. Love these kinds of things!
I've noticed in the past (Windows 7) that if I type my password in *nearly* correct, it's more likely to take a while to process before it tells me it's wrong. If I enter it completely wrong, it almost always tells me right away.
CIA backdoor was found too fast, lmao.
That method of checking the timing of a result or function is called a timing attack, and is not inherently a side channel attack.
When "it's not a bug, it's a feature" becomes real
Plenty of Apple's "features" are buggy enough to be a reason not to buy.
That prefetcher sounds really clever, but damn, how do you not notice that it would be a treasure trove for cache-timing side-channel attacks?! Unless you're using some kind of fancy provenance-based pointer validation setup like ARM has, it would be way too easy to forge pointers and wreak havoc in exactly this way.
First it will check the lengths of the strings, and then compare each char
That's still not enough: you can find out how long the expected password is by trying out different lengths first, and then pad your attempts to that length every time, so the attack is still only linear complexity.
Thanks for explaining something that was incomprehensible previously