Thanks for explaining this. And 4 years? Time surely flies...
Oof, I did catch some of the scuttlebutt over disabling the mitigations and almost fell for it. Thank you DJ Ware for setting it straight.
Always enjoy your videos here and appreciate your desire to help people keep up best security practices. I have zero issue with taking a slight performance hit to mitigate such things, but I'm a fan of Qubes OS and not exactly looking for speed over security anyway. I suppose I'm on the opposite side, gleefully slowing down my PC for security. :)
Chris Titus Tech recently uploaded a glib, unwise video (that probably will not age well) advocating that desktop users can safely disable mitigations for performance gains. Fortunately, I don't accept advice from bobble-heads.
LOL, yeah watch out for the bobbleheads, thanks for the chuckle midplanewanderer
Hey DJ,
I have found that convenience generally trumps security, even for people who should know better. No put-down intended; I think it's just part of human nature.
Thanks DJ👍
One aspect of dealing with security in the post-Spectre/Meltdown world that seems often overlooked in these conversations is proper segmentation of your systems. The performance cost of the mitigations can be severe in certain scenarios, and you may have scenarios where you don't want any impact on performance whatsoever. But does the scenario where you need high performance with zero compromises ALSO require security? If so, you're stuck with the penalty. I would assume that really isn't the case for most home users, though, in which case you can split up your systems by security needs.
Keep your performance-sensitive tasks on your powerful PC with the mitigations disabled, but don't store any security-sensitive information there whatsoever. Get yourself a used laptop, SFF PC, or Raspberry Pi and make that your secure system where you store your sensitive data and take every security precaution possible (encrypted disk, mitigations, software installed only from official repos, firewall, blocked general web browsing, an exclusive subnet, etc.). You can switch between systems easily by just switching your monitor's display source and your KB&M (or get a second KB&M, they're cheap). This is what every business does with its secure systems, and you can easily replicate that type of setup yourself. This way you can enjoy full performance when you need it, for gaming and general web browsing, and full security on a different PC when you need it, for managing your servers and doing your banking. Best of both worlds can be yours with proper handling of your data!
"You cannot prove a negative" - within a limited volume you actually can, sometimes, but if the volume is larger than some 10 rooms, the proof procedure would probably be impractically extensive.
5:00 ...when I am asked "do you have children?" (I am a guy), I always say "as far as I know, no". Well, as a guy you can never be 100% sure unless you still saw the girl for a period of approximately 9 months after you broke up with her.
I'm not going there
@@CyberGizmo Sorry, I do not get it (non-native speaker): how should I understand 'there' in this sentence?
An apt reminder to us all. People's memories can be short - and then it smacks you round the back of the head!
IIRC the famous Linux developer Greg Kroah-Hartman simply said "OpenBSD was right" regarding these things.
Maybe it is naive, but I would expect mitigations to live in the sandbox implementations (browsers, VMs, containers) on mostly single-user systems.
And there should be a way to let some processes skip the additional mitigations.
Just asking a question: how would you determine which ones to enable and which ones to ignore? It would take some rearchitecting at the OS level too, I would think.
@@CyberGizmo I would probably opt out for processes like the window compositor, terminals, a bunch of system services, a Gemini browser. But I would keep mitigations on when switching to a web browser or mail client.
I'm not aware of how mitigation is done at the OS level, but I don't expect it (for Linux) to require any re-architecting that hasn't been done before. At least the possibility to enable/disable mitigations globally at boot suggests it doesn't change how a significant part of the kernel operates.
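For reference, the global boot-time switch mentioned here is the `mitigations=` kernel command-line parameter (`mitigations=off` disables them all), and the kernel reports the current state per vulnerability class through sysfs. A minimal sketch for inspecting that state, assuming a Linux kernel new enough (roughly 4.15+) to expose the directory:

```python
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_status() -> dict[str, str]:
    """Map each vulnerability class (spectre_v1, meltdown, mds, ...) to the
    kernel's one-line description of its current mitigation state."""
    if not VULN_DIR.is_dir():  # pre-4.15 kernel, or not Linux
        return {}
    return {p.name: p.read_text().strip() for p in sorted(VULN_DIR.iterdir())}

if __name__ == "__main__":
    for name, state in mitigation_status().items():
        print(f"{name:<24} {state}")
    # Booting with "mitigations=off" on the kernel command line flips these
    # entries to "Vulnerable"; "mitigations=auto,nosmt" goes the stricter way.
```

There is also a narrower per-process knob, `prctl(PR_SET_SPECULATION_CTRL, ...)`, which is closer to the opt-out this comment asks for, but it only covers a subset of the mitigations (e.g. speculative store bypass), not the whole family.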
I always thought that if someone wants to prove that something exists, the burden of proof is on the person who says that thing exists.
Hitchens's razor and the Sagan standard are great, but real life is not like that. They don't work when a person reaches an "obvious truth" and uses it as reasoning.
If I try to synthesize an example for this topic: Isn't it obvious?! If there were some use of such effective exploits, then surely we'd get news about it.
@@virkony I think this is the basic scientific method, and it works in this case as well. It was proven that speculative attacks are possible; the vulnerability exists. We need to patch when a vulnerability is discovered, not when an exploit is.
@@maciej-36 Many things are vulnerable to side-channel attacks, be it timings, sound, or EM/thermal emission. Through understanding how something can be exploited, and the probability of that, we apply adequate measures. We do not put cages around every computer and its operator, locking them out of interaction with the external world.
Prototype exploits do help to go further than speculation.
This is a vulnerability that undermines the ability of an OS, hypervisor, or VM to isolate processes/VMs, as I understand it.
It can be seen as if we went back to DR/MS-DOS times. If your usage/expectations haven't changed since then, why would you care?
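Since the thread keeps returning to side channels in general, here is a tiny self-contained illustration of the simplest member of the family: a timing channel in an early-exit comparison, plus the standard countermeasure. This is not a speculative-execution attack, just the same class of leak in miniature:

```python
import hmac
import timeit

def naive_compare(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: runtime depends on where the first mismatch is."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

secret = b"A" * 32
wrong_early = b"Z" + b"A" * 31   # mismatch at byte 0  -> rejected quickly
wrong_late  = b"A" * 31 + b"Z"   # mismatch at byte 31 -> rejected slowly

t_early = timeit.timeit(lambda: naive_compare(secret, wrong_early), number=100_000)
t_late  = timeit.timeit(lambda: naive_compare(secret, wrong_late),  number=100_000)
print(f"reject at byte 0: {t_early:.3f}s   reject at byte 31: {t_late:.3f}s")

# The "adequate measure" for this particular channel: a comparison whose
# runtime does not depend on the position of the mismatch.
assert not hmac.compare_digest(secret, wrong_late)
```

An attacker who can time many comparisons can recover the secret byte by byte from the early-exit version; `hmac.compare_digest` closes exactly that channel, which is the kind of targeted, proportionate measure the comment argues for.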
"Absence of evidence is not evidence of absence."
A great aphorism, thanks Marcus
If I understand correctly, it is possible to have a host system with mitigations and a VM inside it without mitigations, to avoid the overhead on process switching within that VM by sacrificing the isolation guarantees between processes in the VM.
E.g. have a database and its surrounding processes running in a single VM without mitigations.
Will there be less overhead running under Host(mitigated)+VM(unmitigated) than under Host(mitigated) alone?
Given that the databases tested by Phoronix are probably not single-process ones.
Yes, that is true; Proxmox for example allows you to do just that. Since the physical CPU is mitigated, you should be taking the performance hit anyway, unless I am completely missing something. I think the reason they have that option is so the VM doesn't become the vehicle for Spectre/Meltdown malware gaining a foothold in the network. I think Phoronix tested Core i7s and Core i9s (for the time period); it wasn't Xeon-based machines, although I think he did those later.
@@CyberGizmo I was referring to OS-level mitigations that try to increase isolation between processes. I suspect something like flushing CPU caches before handing control to a user process, and reducing the amount of kernel memory mapped into a process, leaving only a small "gate" so that syscalls and interrupts can bootstrap the kernel context.
A non-mitigated VM should remove the overhead of those OS-level mitigations, since switches between processes within it skip them. E.g. if the VM gets a continuous time slice within which it switches between its processes and/or kernel, it will still be exposed to the issue, but the exposure is contained within that VM.
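One cheap way to sanity-check questions like this is to measure the thing the page-table mitigation (KPTI) actually taxes: kernel entries. A rough sketch, with the caveat that Python's interpreter overhead dilutes the effect considerably compared to a C loop; the idea would be to run it inside each configuration (host with mitigations, guest without) and compare the rates:

```python
import os
import time

def getpid_calls_per_second(duration: float = 0.5) -> float:
    """Hammer getpid(), about the cheapest syscall there is, and report the
    achieved rate. KPTI-style mitigations add work to every kernel entry and
    exit, so this rate drops when they are enabled. (Modern glibc no longer
    caches getpid(), so each call really does enter the kernel.)"""
    count = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        os.getpid()
        count += 1
    return count / duration

if __name__ == "__main__":
    print(f"{getpid_calls_per_second():,.0f} getpid() calls/sec")
```

Syscall-heavy workloads (databases, file servers) are exactly the ones where Phoronix saw the largest deltas, which is why a microbenchmark of kernel-entry cost is a reasonable proxy before and after changing the mitigation settings.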
I want to ditch my Gen 6 Intel computer, but after listening to this video I'm not sure it is necessary. Perhaps I should ditch speculative processing instead.
I'm calling BS; the performance drop has been well over 50% for my 4690K, making it unusable. I used it as a home media server and DVR, but with the patches installed it can take up to 30 seconds to register mouse clicks, where operation is instant without them... And yes, I've reinstalled to compare.
These exploits and patches destroyed all these perfectly good CPUs.
That CPU was released in 2014, and Intel said that the older CPUs would be hit harder performance-wise. I didn't cover them that far back; the oldest machine I have is an Intel Core i7 Gen 6, so it would be very hard for me to test that one.
If you want to turn off mitigations, be OK with being pwned.
But you shouldn't really be poking around online in Chrome and installing Steam, games, and other unverifiable and untested code from unknown and untrusted sources on a machine that you don't want pwned in the first place.