DEF CON 31 - Private Keys in Public Places - Tom Pohl

  • Published on Jan 2, 2025

Comments • 37

  • @cardude1957 • 1 year ago • +48

    This has not gotten enough views for the amount of great content that was in this presentation. Great job, Tom.

  • @akmadsen • 1 year ago • +7

    I'm not even 3 minutes in and I already love Tom. This is going to be great!

  • @mo3k • 1 year ago • +35

    Great presentation from a super entertaining presenter! I understood all his methods and even yelled out the answers to most of his questions lol. Thank you DEF CON and Tom!

  • @stubstunner • 1 year ago • +16

    Another excellent talk. I’ve been in this field for a little over 10 years and I love talks like these.

    • @Rubeneides246 • 1 year ago

      Any advice for those who are extremely curious and have no formal training but want to learn more?

  • @micahnightwolf • 1 year ago • +22

    These manufacturers will never learn to stop leaving their private keys under the proverbial doormat.

  • @DeetexSeraphine • 1 year ago • +17

    Wait... this is still relevant.
    _sharp inhale_
    Ima thinkin I'm gonna take a peek and maybe give a poke at my company's server cabinet.

  • @drakas110 • 1 year ago • +32

    I do hope he turned around and answered the "only guy at a hacker convention to use their real name" line with "it's a hacker convention, you think that would stop them?"

    • @bobmcbob4399 • 1 year ago • +11

      A pseudonym is going to be Security Through Obscurity though right?

    • @darrelbryan4358 • 1 year ago

      @bobmcbob4399 Yes, if you're doxxed then you can be traced across all socials via OSINT.

    • @makedonija5587 • 7 months ago • +1

      😂

  • @ryanwolfe2219 • 11 months ago • +3

    Wonderful talk, wonderful presenter! Tom seems like a great guy

  • @Dwonis • 7 months ago • +5

    When did "responsible disclosure" come to mean not publicly disclosing things yourself after a period of time?
    There's supposed to be an implied "public disclosure" in these terms: "full disclosure" doesn't mean just fully disclosing to the vendor, it means full *public* disclosure. How did "responsible disclosure" get warped to omit the public disclosure at some point?
    I've been out of the loop from the security industry for some time, but when I first learned the term, it didn't mean vendors could hide problems. You just gave them some reasonable time to patch things before you went public.

  • @ICountFrom0 • 7 months ago • +2

    Sings, "You've got key in wrong places, and your IT crew keeps making faces, it pains the crew, you know it's true"

  • @Mtylgd • 2 months ago • +2

    It probably took them 3 years to come up with a fix for Fortinet because they needed a solution that would still allow the NSA to get the private keys without you knowing.

  • @SALTINBANK • 1 year ago • +11

    Great man, great talk! Cheers

  • @aaronr.9644 • 1 year ago • +8

    Sad state of affairs :( The hardcoded AES key with an IV of zeroes is just crazy!!!
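
A minimal sketch (with a made-up key and plaintext) of why the pattern above is so bad: with a hardcoded key and an all-zero IV, encryption is fully deterministic, and anyone who extracts the key from one firmware image can decrypt the output of every device.

```python
# Sketch only: hypothetical key/data. Requires the 'cryptography' package.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

HARDCODED_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")  # baked into firmware
ZERO_IV = bytes(16)  # an all-zero IV, identical on every device

def encrypt(plaintext: bytes) -> bytes:
    enc = Cipher(algorithms.AES(HARDCODED_KEY), modes.CBC(ZERO_IV)).encryptor()
    return enc.update(plaintext) + enc.finalize()

secret = b"admin:hunter2".ljust(16, b"\x00")  # crudely padded to the block size

# Deterministic: identical plaintexts yield identical ciphertexts,
# so repeated secrets are visible even without the key...
assert encrypt(secret) == encrypt(secret)

# ...and with the key pulled out of the firmware, everything decrypts.
dec = Cipher(algorithms.AES(HARDCODED_KEY), modes.CBC(ZERO_IV)).decryptor()
assert dec.update(encrypt(secret)) + dec.finalize() == secret
```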

  • @lerneninverschiedenenforme7513 • 9 days ago • +1

    awesome!

  • @oskar1504 • 1 year ago • +10

    What a great talk. Thanks

  • @delco2035 • 2 months ago • +2

    grep PRIVATE lmao that's good
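
The "grep PRIVATE" line refers to the talk's trick of simply searching extracted firmware for PEM key markers. A rough Python equivalent (the firmware directory name is hypothetical):

```python
# Sketch: walk an extracted firmware tree and flag files that contain
# PEM private-key markers, like a recursive `grep PRIVATE` would.
from pathlib import Path

MARKER = b"PRIVATE KEY-----"  # matches RSA/EC/PKCS#8 PEM headers

def find_keys(root: str):
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            data = path.read_bytes()
        except OSError:
            continue  # unreadable special files, broken symlinks, etc.
        if MARKER in data:
            yield path

for hit in find_keys("extracted_firmware/"):  # hypothetical directory
    print(hit)
```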

  • @shelvacu • 1 year ago • +2

    The Netgear story seems unfair; routerlogin is what they use for, well, router login. Including the private key in the binary is no less secure than every other router, which must either do that or have no SSL at all. What was the "fix"?

  • @HenryLoenwind • 1 year ago • +2

    And I'm still waiting for the talk that explains how a program can authenticate itself to another service in a way that a human who can log into the same account the program runs under cannot.
    The only halfway-working way I've seen so far is a tamper-protected SoC with the data in its read-only storage that only runs signed code (and isn't controlled by software running outside the SoC). The second-best solution is a hardware encryption module, as this forces the attacker to funnel their attack through that module.
    Also, I blame web browser manufacturers for never having come up with a way for a browser to pair with a device on the local network. That would remove the need to ship the device with an SSL cert just to stop the browser from trying its best to prevent the user from accessing that "insecure" device...

    • @1337GameDev • 1 year ago • +5

      You do the same as any other API.
      You have a public key for the API, AND you have a device ID as the requester.
      Then you have an INTERNAL check for that device ID and handle things with a client-specific process. In the case of push notifications, if that device is registered to an account, you merely add a DB record that the server will then queue to send to THAT account ONLY.
      Sure, they can DDoS or send a ton of pushes for that account, but you can rate limit and temp-ban / increase the delay if activity is suspicious.
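
A rough sketch of the server-side pattern described above: a registered-device check, per-account queuing, and naive rate limiting. Every name and number here is made up for illustration.

```python
# Sketch: device-ID check plus naive rate limiting (all values hypothetical).
import time

REGISTERED = {"device-123": "alice"}   # device ID -> account
LIMIT, WINDOW = 5, 60.0                # at most 5 pushes per minute
_history: dict[str, list[float]] = {}  # device ID -> recent request times

def handle_push(device_id: str, payload: str) -> str:
    account = REGISTERED.get(device_id)  # the INTERNAL check for the device ID
    if account is None:
        return "rejected: unknown device"
    now = time.monotonic()
    times = _history.setdefault(device_id, [])
    times[:] = [t for t in times if now - t < WINDOW]  # drop stale entries
    if len(times) >= LIMIT:  # suspicious activity: rate limit / back off
        return "rejected: rate limited"
    times.append(now)
    # Queue the notification for THAT account only.
    return f"queued push for {account}: {payload}"
```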

    • @1337GameDev • 1 year ago • +1

      For a browser / local-device API, they'd need a standard cert identifier stating that the cert is self-signed and used for THAT device's communication ONLY (and routing to any other device would reject the packet).
      But they actually have this.
      A self-signed cert can be issued by the device for THAT session, and then normal HTTPS takes place.
      BUT because that cert isn't in the certificate authority chain, it is seen as "insecure", since a man-in-the-middle attack can be done if your browser doesn't know a trusted public key ahead of time before it tries to negotiate. But then THAT public key needs to be in the cert trust chain... ehh.
      There's no way to guarantee a request can ONLY go to one device, even locally; you can't be 100% sure unless something is in your trust chain.
      What they need to do is give users an EASY way, for certain devices, to add the self-signed cert, which is issued by that device only, to their trusted list.
      But then how do you prevent man-in-the-middle attacks where users just click "add to trusted list" and continue... when they want to go to Facebook or wherever?
      You really can't.
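
For what it's worth, minting the kind of per-device, per-session self-signed cert described above is the easy part; the hard part is exactly the trust-chain problem the comment raises. A sketch using the 'cryptography' package (all names are hypothetical):

```python
# Sketch: a device minting its own short-lived self-signed cert.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "device.local")])
now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(hours=1))  # session-scoped
    .sign(key, hashes.SHA256())
)
# A browser that hasn't pinned this cert has no way to tell it apart from
# a MITM's cert, which is the trust-chain problem described above.
print(cert.subject)
```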

    • @Pystro • 1 year ago

      For the browser-device linking, can't they do the same as with the short codes that websites email you to verify that you have access to that email?
      Let me try my best at coming up with a protocol (and risk becoming part of the "everyone can come up with an encryption that they themselves can't break" fallacy).
      The fact that I could come up with a protocol that I _THINK_ works means that it's _probably_ possible to do something like this, even if there's a thing or two I've overlooked.
      Here's my protocol:
      1. Browser and device exchange public keys and sync their clocks.
      2a. The browser displays a 3-digit code and the user enters it on a keypad on the device.
      2b. Alternatively, the browser can generate 3 countdowns of 10-20 seconds and have the user push a single button 3 times; the device then concatenates those 3 "digits" (with fractional-second accuracy) to make the code.
      (Note that all further messages are sent encrypted with the exchanged keys, unless explicitly noted otherwise.)
      4. The device (4.1) encrypts the code with a private "session" key and (4.2) sends it to the browser. (Encrypting it first with the key from 1 and _then_ with the temporary key from 4, so that a MITM who has inserted themselves at 1 can't decrypt and re-encrypt it with their own keys.)
      5.1 The browser acknowledges receipt of the encrypted code.
      5.2 The device waits for 5 seconds.
      6. The device sends its public "extra" key to the browser.
      7. The browser decrypts the code from 4.2 with the public key from 6 and verifies that it's the exact same code sent in 2 (or alternatively, that it's not too far off).
      8. The device's "extra" key pair becomes part of the encryption of all following traffic from the browser to the device.
      9. The browser (9.1) can send the device a longer second code (thusly encrypted with the device's public key), and the device (9.2) decrypts it and (9.3) returns it decrypted.
      A MITM ("Eve") has two choices at 4 (as long as message 6 hasn't been sent out by the device yet):
      If Eve sends her own code encrypted with her own "extra" key, that code won't match at 7.
      If Eve forwards the device's encrypted code at 4.2, she won't have the device's private "extra" key, can't decrypt any of the traffic at steps 8/9, and thus fails at 9.3.
      Could the device send _both_ 4.2 and 6 _before_ the browser expects 4.2? Well, step 2b only works if the clocks are synchronized, and step 2a can be declared failed if the user didn't enter the code (as evident from the browser receiving 4.2) within 5 seconds (or whatever the delay of step 5.2 is). Both variants of step 2 can do double duty as verification of the synchronization, so the proper order of 4.2 and 6 can be ensured.
      However, note that this only verifies to the browser that the device is who it claims; the device still has no guarantee that the browser is who it claims. But that can probably be solved by the browser telling the user to push the button on the device a final time once it has verified its identity (or alternatively to do a reset on the device if anything is wrong).
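
The core of this idea (commit to the code first, reveal the key only after an acknowledgement) can be shown with a simpler hash commitment. This is a simplified sketch, not the full protocol above; all values are illustrative:

```python
# Sketch: commit-then-reveal via an HMAC commitment (illustrative only).
import hashlib
import hmac
import secrets

# Step 2: the user relays a short code from the browser to the device.
code = b"427"

# Step 4: the device commits to the code without revealing its session key.
session_key = secrets.token_bytes(32)
commitment = hmac.new(session_key, code, hashlib.sha256).digest()

# Step 5: the browser acknowledges receipt of `commitment`.
# Step 6: only after the ack does the device reveal the session key.
revealed_key = session_key

# Step 7: the browser recomputes the commitment and checks that it opens
# to the code the user actually entered.
expected = hmac.new(revealed_key, code, hashlib.sha256).digest()
assert hmac.compare_digest(commitment, expected)

# A MITM who substituted its own commitment at step 4 cannot later
# produce a key that opens the commitment to the user's code.
```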

    • @HenryLoenwind • 1 year ago

      @Pystro Sounds a bit complicated, especially for the user. I'm thinking more of something like:
      (1) Browser detects that it is connecting to a device that (a) has an address from one of the 3 private ranges and (b) is in the same broadcast domain. (There's a config setting to disable the second check for people who know what they are doing.)
      (2) Browser disables all known CAs other than user-added ones for that connection.
      (3) If the device is known, browser does a cert check and applies cert-pinning rules.
      (4) Browser connects to the device and asks it to do a handshake. If that fails, or the device has a valid cert (as per (2)), it continues as it would now.
      (4.1) Browser asks the user for pairing permission.
      (5) Browser silently generates a self-signed client cert. Device silently generates a self-signed server cert.
      (6) Browser and device store each other's certs.
      For enhanced (or proper) security, as step (5.1), both devices generate a second key pair, but they send the private key to each other. When reconnecting, they exchange simple challenge-response messages using those keys to prove they know each other. This ensures manufacturers cannot dumbly put the same static cert onto all the devices they make.
      Additionally, the browser blocks all requests to paired local devices that originate in any way from another device or the internet.
      This closes most attack vectors I can see: (a) attackers getting access to local devices through cross-site requests in the browser, (b) the browser sending data to a copy of a local device in another network, (c) MITM after the initial connect in the real local network (the initial connect is still vulnerable, but tbh, if your local network already has a MITM you're done for anyway), (d) users being trained to ignore insecure-connection and cert failures, (e) manufacturers putting globally signed static keys onto devices to avoid users not being able to connect and returning devices as defective because of insecure-connection and cert warnings.
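
The cert check and pinning in step (3) is essentially trust-on-first-use, which is easy to sketch. The pin-store file name and format here are made up:

```python
# Sketch: trust-on-first-use certificate pinning (hypothetical pin store).
import hashlib
import json
from pathlib import Path

PIN_FILE = Path("pinned_devices.json")

def check_pin(device_id: str, cert_der: bytes) -> bool:
    """Pin the cert on first contact; on later contacts, require a match."""
    pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    if device_id not in pins:
        pins[device_id] = fingerprint  # first use: trust and pin
        PIN_FILE.write_text(json.dumps(pins))
        return True
    return pins[device_id] == fingerprint  # afterwards: the pin must match
```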

  • @sjoer • 1 year ago • +5

    I use XZ, just about everyone does, right?

    • @sjoer • 1 year ago

      So sad that across all these talks there's usually no more than one question, if any!

    • @jtw-r • 1 year ago

      I've used it too

    • @sjoer • 1 year ago • +2

      @jtw-r Used once, or use it daily?
      Pretty sure XZ is the default compression for the kernel, and I use it to bundle precompiled binaries.

    • @xanderplayz3446 • 1 month ago

      ZSTD (and lz4 over SSH) for me. It's so fast and has really good compression ratios.

  • @zephyr1181 • 1 year ago • +2

    25:44 - oooh someone de-mosaic that plz :P

  • @Linuxfy • 1 year ago • +2

    I love him

  • @HiDeguild • 2 months ago • +2

    Dude, slow down and back up. What is a router?

  • @brashcrab • 10 months ago

    0039 1:10

  • @itsecurity6471 • 1 year ago

    Boring!!!