0.0.0.0 Day: An 18 Year Long Web Browser Exploit
- Published Sep 10, 2024
- The web is an absolute mess, and this exploit is a fantastic example of that: 18 years of port scanning, plus new discoveries showing it can potentially be used to execute code on services running on localhost from an external website
==========Support The Channel==========
► Patreon: brodierobertso...
► Paypal: brodierobertso...
► Liberapay: brodierobertso...
► Amazon USA: brodierobertso...
==========Resources==========
0.0.0.0 Wikipedia: en.wikipedia.o...
Oligo Security Post: www.oligo.secu...
=========Video Platforms==========
🎥 Odysee: brodierobertso...
🎥 Podcast: techovertea.xy...
🎮 Gaming: brodierobertso...
==========Social Media==========
🎤 Discord: brodierobertso...
🐦 Twitter: brodierobertso...
🌐 Mastodon: brodierobertso...
🖥️ GitHub: brodierobertso...
==========Credits==========
🎨 Channel Art:
Profile Picture:
/ supercozman_draws
#Webbrowser #Firefox #Chromium #Safari #OpenSource #FOSS
🎵 Ending music
Track: Debris & Jonth - Game Time [NCS Release]
Music provided by NoCopyrightSounds.
Watch: • Debris & Jonth - Game ...
Free Download / Stream: ncs.io/GameTime
DISCLOSURE: Wherever possible I use referral links, which means if you click one of the links in this video or description and make a purchase I may receive a small commission or other compensation.
I should note that due to this being such a long-running issue, extensions like uBlock Origin will typically block this
And when Google has managed to turn people away from running adblockers... can the Manifest V3 blockers still block this? I guess they will? Will the people who find out that adblockers don't work on the sites they visit even bother using one? I guess most will use the new versions anyway since they'll at least do some blocking, but... we are talking about human sheep here. "Why keep this trash extension, it doesn't work for me!"
Any kind of unnecessary attack surface should be blocked off; how they managed to leave this open for 18 years I'll never understand lol. Were the Wayland devs in charge of this!? :P
thanks for covering this
I also remember firewall rules blocking 0.0.0.0 used to be semi-common back in the late 90s to early 2000s. Now I know why. Friends of mine used to say it was just a good idea to block it.
Also palemoon has patched this.
I have been looking for LLMs that don't use localhost. Even local, privacy-focused LLMs still use localhost.
there is a mistake in the subtitles. At 8:49 it says 'cores' when what you meant is probably CORS
I just tested this myself and apparently uBlock Origin blocks requests made to 0.0.0.0, which means it's an effective protective measure against this attack!
Ublock is the fucking GOAT
@@no_name4796 uBlock Origin, not uBlock
@@no_name4796 uBlock is completely unaffiliated with uBlock Origin.
But does uBlock Lite? How much disservice does Google do you by blocking blockers?
Not by default, you need to enable it. In the filter lists, the respective filter is called "Block Outsider Intrusion into LAN"
TempleOS is once again unaffected. Coincidence?
No browser - no problem!
@@spartv1537 Yes, and any system can have all browsers removed without a lot of trouble (except Windows). It all comes down to the use cases. I don't know how BetaMaster2 has managed to post his comment from TempleOS, for example. (He hasn't.)
The Lord protects it.
Yes, again TempleOS is holy! We bow down!
I'll stay on my Hannah Montana Linux
Hannah will protect me 😎
Print spoolers have a long history of RCE vulnerabilities, and they've all got local web UIs nowadays.
CUPS has a local web ui and my Epson ET-20xx-series printer does, too.
5:22: the issue was " signed in triplicate, sent in, sent back, queried, lost, found, subjected to public inquiry, lost again, and finally buried in soft peat for three months and recycled as firelighters"
And when this is eventually fixed you can bet that there's the one guy who goes full "But my workflow!" who has set up an elaborate way to control a whole company wide selenium cluster bypassing the internal company firewalls and VPN.
9:40 CUPS for example. So if you dare to use a printer on Linux, guess what is running too.
If your CUPS server can accept connections from outside its LAN/VLAN your network has larger issues.
@@nobodyimportant7804 we are talking here about a case where a client-side script initiates a connection to your own system
As far as CUPS is concerned, it doesn't come from outside, it doesn't even touch a network card.
@@nobodyimportant7804 But... in this case the request appears to come from localhost. And since CUPS will happily execute PostScript, which is a Turing complete language, there is a whole lot of fun things you can do.
@@nobodyimportant7804 I'm pretty sure the CUPS server running on my machine needs to accept connections from my machine.
@@nobodyimportant7804 The attack allows you to connect to open ports on localhost; you can then make requests, maybe gain admin privileges (CUPS uses UNIX auth, so a local user with an easy password can be enough to exploit), and then enable login from outside of localhost.
Hint hint, Brave Browser has apparently blocked 0.0.0.0 since two years ago.
uBlock Origin has blocked 0.0.0.0 requests since ages ago too, and it works on other browsers like Firefox. And it hasn't had multiple sketchy monetary issues in the past.
Not really, they started working on it, but nothing reached mainstream browsers (again). They haven't blocked 0.0.0.0 anywhere. The browsers still enable accessing localhost by accessing 0.0.0.0
Yeah
16 years late ...
@@davidioanhedgesbetter than not at all, i guess
>In Linux a program may specify 0.0.0.0 as the remote address to connect to the current host.
wtf?
>Other nonstandard uses include a way to route a request to a nonexistent target, eg for adblocking.
double wtf. So, a service inside your browser like an adblocker can route http requests to 0.0.0.0, /and/ on Linux a program can say that 0.0.0.0 is a valid address for anything running on the localhost. Having both of those behaviours at once is Terrible.
The concept of an application representing "host me on every IP on each interface" with "0.0.0.0/0" makes FAR more sense, as it means "this host on the network" for every network, i.e. pretty close to the purpose of the actual spec. Using it to route to localhost? Well... I mean, that's technically this host on the network, specifically the network 127.0.0.0/8?
But as an address for "nowhere", like blocking something? That's extremely bad.
Interestingly, on Linux and macOS, [::] also works as the localhost destination in addition to [::1]
@@OhhCrapGuy Is it really? 0.0.0.0 is a reserved IP address on every possible valid network and in principle could never belong to a host. So not exactly insane either. More like this is the problem any time people feel the need for magic numbers and special states that standards leave unspecified.
Binding to 0.0.0.0 serves on every IP adapter in the system, and makes sense. But connecting to 0.0.0.0 is news to me, and makes no sense.
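A quick way to check the connect-side behaviour yourself (a minimal sketch; the connect succeeds on Linux and macOS but fails on Windows):

```python
import socket

# Server bound strictly to loopback; it is NOT listening on 0.0.0.0.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0 = pick a free ephemeral port
srv.listen(1)
port = srv.getsockname()[1]

# On Linux/macOS this connect succeeds: the kernel treats a
# destination of 0.0.0.0 as "this host". On Windows it errors out.
cli = socket.create_connection(("0.0.0.0", port), timeout=2)
peer = cli.getpeername()     # ('127.0.0.1', port) on Linux
print("connected to", peer)
cli.close()
srv.close()
```

This is exactly why a browser allowing `fetch("http://0.0.0.0:port/")` reaches loopback-only services on these platforms.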
I had a gut feeling 0.0.0.0 could be exploited, turns out it was discovered already long time ago
This is yet another good reason to use an extension like NoScript so that you can restrict the JavaScript code running in your browser to a minimum. It doesn't solve the fundamental issue however it does mean that you won't automatically run JS attempting to make a connection to one of these addresses on a malicious website just by visiting it. It also makes it possible to allow JS on a trusted website while restricting JS from third party domains.
I've tried doing that and it was way too tedious. Just about every site uses scripts, and disabling them breaks the site.
In chrome browsers, you don't even need an extension, it's built in
@@dmiracle74So just enable on sites you trust?
@@tablettablete186 How do you know if the trusted site doesn't have a malicious script?
@@dmiracle74 You don't know, but this is better than allowing on everything.
But you can inspect the code, it just takes a lot of time
>Private Network Access
I thought: "What? A VPN ad all of a sudden?"
I didn't even consider the name overlap lol
Private Internet Access
AAAND both of them aren't really private in most cases today.
I think that anything that runs an HTTP server, remote or not, should secure it. That said, I know that a lot of software thinks "localhost" means "safe". I hate that attitude, but it's real, so sadly a workaround for this in browsers is needed.
@@giusdb Not sure what you are getting at, but by "securing an HTTP server" I don't mean anything outlandish, just requiring authorization. Basically, I just think that when you make a local service, you should take the same steps you take when you make a public website. Is the request coming from an authorized source? No? Well, don't handle it!
I do think it's a good security goal to treat all networks, or even all users and processes as untrusted, unless specifically authorized. I don't think I'm alone in that, since we have the principle of least privilege and MAC. It's not as impossible as you think - after all, we treat the internet as untrusted, and it's not that hard (we have firewalls).
If you say your network is trustworthy, well, this still gives you the advantage of an additional layer of security if it ever gets compromised.
I don't know what you mean by "safe", but if that includes setting up https for a localhost connection just to test out some browser APIs that require https for no actual reason other than to push its adoption (e.g. the gamepad api)... NO THANKS.
@@gamechannel1271 TLS brings no security benefits to localhost connections. As opposed to any connection that goes over the wire, they can't be hijacked (without root - and if you are root, you are in total control anyways). You still need to have proper authentication and authorization for a local service to be secure, however.
Password managers listen on a local port with an API.
They shouldn't be locally hackable for passwords anyhow, except by brute forcing, but still they might reveal the presence of a user ID, right?
Shouldn't be a problem because due to CORS, data can only be sent, not retrieved
14:08 "Thinking back on it, I don't know how many times I forgot that final zero" after saying it five times ahahah
Thanks for this analysis. I'd read about this but couldn't really put it in context. Your breakdown really clarified the issues and the risks so very well. Cheers.
Nice tautology. "Vulnerable services are vulnerable." I paused it so I could write that down. Good stuff.
uBlock has a list for this, but it's not enabled by default.
But why solve this in the browser? After all, you could simply compile the browser without that block enabled. Shouldn't this be handled a layer below that?
A note for those that aren't network inclined: 0.0.0.0, when it comes to routing, is not an undefined arbitrary identifier; it does have a use case. It is used to say ANY IPv4 address; the equivalent would be ::/0 for IPv6. This is used in networking to define a default route to your next-hop address. In a home use case this is saying: if you want to go to the internet, you need to go to your modem, and your modem would have its own 0.0.0.0/0 route in its routing table pointing to its next-hop address.
Commonly one builds a route 0.0.0.0/0 out interface X, where X is your WAN interface. In an operating system the default route instead points to the gateway of the network that specific host is in. In Linux it is also used within application configurations to say: listen on any interface available to you; this can be localhost, a wifi adapter, or even an ethernet interface. Once defined it will listen on a specific port on all of the available interfaces.
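The "default route" meaning can be illustrated with Python's `ipaddress` module (a minimal sketch): 0.0.0.0/0 is simply the network that contains every IPv4 address, which is why routers use it as the catch-all.

```python
import ipaddress

# 0.0.0.0/0 is the catch-all: a /0 mask keeps zero prefix bits,
# so every IPv4 address falls inside it.
default = ipaddress.ip_network("0.0.0.0/0")
print(default.num_addresses)                           # 4294967296 == 2**32
print(ipaddress.ip_address("203.0.113.7") in default)  # True

# The IPv6 counterpart mentioned above:
default6 = ipaddress.ip_network("::/0")
print(default6.prefixlen)                              # 0
```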
That actually surprised me. I have very low expectations about browser security or things like browser sandboxes.
But I actually never combined these low expectations with the fact that a browser runs on my computer in my network and provides full access to the network from any website on the internet.
This is so bad and so obvious, that I don't even feel bad about my lapse of judgement. Together with me, every corporate IT security department failed. Does this not make all browsers malware? Actual, real malware...
Maybe I missed something but this sounds like a pivot that allows JavaScript to connect to any unfirewalled port on *localhost.* So the only way an attacker could reach your network would be if you had a proxy running on the same machine.
Actually, there are a lot of things in place to prevent this, which includes security zones on Windows/IE since IE4 in 1997.
My suggestion would be to watch: Dan Kaminsky (yes, of DNS fame): DNS Rebinding And More Packet Tricks at 24C3 (CCC conference 2007)
@@loc4725 This specific exploit, but there have been many, and many also still work.
@@loc4725 The browser program runs on your computer and generally has access to your local network. JavaScript code downloaded from a website can access network services available to your computer, bypassing any protection.
When such code wants to access your local filesystem, your microphone or your webcam, the browser asks for permission or pops up a file chooser. Consequently users feel like there is some level of protection of local resources.
On the other hand, the browser just opens connections to private networks without any warning or consent.
This is necessary if you want to access websites on your local network, but then at least you type in the URL with that private address. A public website has no valid reason to access a private network (here I mean non-routed networks and especially 0.0.0.0).
But as it is, any website can access local network services that might not be secured because the local network is often considered safe, because it's behind the firewall.
The only practical way for users would be to only run browsers in well configured and restricted virtual machines. I can do that. Most users don't know what that means.
This is a complete catastrophe. I think this is one of the worst security scandals of all time.
I'm flabbergasted that this is not making the news big time. I'm not even upset at myself that I never thought about this possibility, because I had to assume that if access to the filesystem is restricted, access to private networks must be too.
There is no excuse for this issue.
@@loc4725 Firefox apparently does not restrict JS access to private networks. Google did, but left out 0.0.0.0. Mozilla has been aware of this issue for 18 years.
Even if it was as you said, and only machines with a local proxy would be affected, this would still be a catastrophe for a large number of users fitting that profile.
A proxy is not the only problem though. Any local service (at least those speaking http) would be affected. Some Electron app that provides full file system access via a tcp port would count. If that works, maybe file:///etc/passwd works too?
The problem is this: The base assumption of the web is, that a server delivers data to the client. The client requests data or uploads data with the user's consent. When the server can read my files, connect to network services, use my microphone or camera while I am not publishing these data sources without any protection on the internet, then the server is using malware. Firefox then becomes malware.
Not glowing at all ☢
Another day, another vulnerability
Yay
Except that this vulnerability is older than a quarter of the people watching this video.
@@GSBarlev I doubt the audience here is so young on average.
@@SystemAlchemist biggest audience is 25 to 34
And all these years I thought the internet got better... why is everyone ignoring this specific IP? Just standardize it, officially deem it double-private, and make browsers follow suit. Should be a straightforward process in my mind.
Nice catch. My mind was on CORS all the time, but it's true that some apps may act on non-authenticated requests. The fingerprinting issue is also a concern.
I've been scanning networks this way for years. One can actually measure the time it takes to do asynchronous HTTP operations and infer if something is listening or not.
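The timing trick above can be sketched outside the browser too. This is a hypothetical helper (the `probe` name and thresholds are made up): a plain TCP connect whose failure mode reveals whether something is listening, the same three states an in-browser scan infers from how quickly async fetch() calls settle.

```python
import socket

def probe(host, port, timeout=0.5):
    """Classify a port by how a plain TCP connect behaves."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return "open"       # handshake completed: something is listening
    except ConnectionRefusedError:
        return "closed"     # immediate RST: nothing listening
    except OSError:
        return "filtered"   # silently dropped before the deadline (firewall)
```

A browser-based scanner never reads a response body; the time-to-settle of each request alone leaks the open/closed/filtered state.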
Notice how the list they have for 'Private Network Access' does NOT include enough IPv6 addresses to be complete.
I hope they also patched IPv6 counterpart, or that IPv6 counterpart doesn't exist.
For most standard usages in industry it's used as a catch-all address to mean any address, e.g. when setting routes: if you want it to route any address there, you use 0.0.0.0.
Another example: if you use Docker and want to expose a port on any interface of the host system, you do 0.0.0.0:port, again used essentially as a wildcard, kinda like a * in ls or on the console in general.
When setting a route it's the /0 that matters. You could use 0.0.0.0 or 123.123.123.123 or 123.45.67.89 and the behavior is the same, because /0 means ignore the address and route everything
@@robmckennie4203 /0 can only go with 0.0.0.0 if validation is written correctly, as 0.0.0.0 is the only possible network ID for the subnet of all possible IPs per the standard
@@robmckennie4203 No, myhost/0 means host "myhost" specifically (1 host) on any network (/0), or explicitly network "0".
On the other hand 0.0.0.0/0 means any host (0.0.0.0 as a wildcard) on any network (/0 as a wildcard for any mask).
They may seem similar but any non 0.0.0.0 in the address will limit to that host/ip specifically, while all zeros will mean any host IP in place of the 0s.
14:07 -- You overcompensated a bit there, that was five zeroes!
Got to make up for the ones I missed
@@BrodieRobertson No you don't :) 0.0.0 is the same as 0.0.0.0, just like 0.0 or just 0. IPv4 notation doesn't require all 4 octets to be given.
@@christophsarnowski9849 yeah but 127.888.888.888 is funnier than 127
@@christophsarnowski9849 I wonder if the developers fixing this bug actually know
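The shorthand discussed above is the classic inet_aton(3) behaviour, and it's easy to verify from Python, which wraps the same C routine (shown here as it behaves on Linux/BSD; stricter parsers reject it on purpose):

```python
import socket
import ipaddress

# inet_aton accepts 1-, 2-, and 3-part dotted forms; the final part
# fills all the remaining octets.
print(socket.inet_aton("0.0.0"))    # b'\x00\x00\x00\x00'  == 0.0.0.0
print(socket.inet_aton("0"))        # b'\x00\x00\x00\x00'  == 0.0.0.0
print(socket.inet_aton("127.1"))    # b'\x7f\x00\x00\x01'  == 127.0.0.1

# Python's stricter ipaddress module refuses the shorthand entirely:
try:
    ipaddress.ip_address("0.0.0")
except ValueError as exc:
    print("ipaddress rejects it:", exc)
```

That split between lenient and strict parsers is part of why blocklists and validators keep missing these addresses.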
8:59 just block that with a firewall rule in the INPUT chain on the workstation machine; they're not routers, they don't need to receive traffic to 0.0.0.0 from themselves
Brilliant eye for details, much thanks from USA
I did not even know they considered that behaviour a vulnerability in the browser to begin with...
This had been the state of the game ages ago
Brodie, there is an oversight here. Maybe a bit intentional from Google's perspective; PNA will block proxy services from providing web applications with features that web browsers don't support.
Yeh, maybe there should be a way of fine tuning this, yet without opening the security hole for hostile websites. Perhaps a custom local IP address which is communicated to the website so it knows how to refer to those services.
maybe as a permission that can be granted to the website?
@@Interpause It's not up to us. The great Google decides all.
My point is that PNA is a Trojan horse.
That sounds like a poor implementation. Could you provide a concrete example?
@@Lord_zeel Postman's now-default web version uses a local server to make requests it otherwise can't under browser restrictions
There are some scenarios; sometimes an app with a web interface (ERP-like) must contact a device (fiscal printer, scale) using proxy software on the client (local) machine. But I can agree such scenarios are rare.
security policy: dont run anything critical on the random browsing device. keep browsing separate.
Nonsense! PNA hasn't even rolled out yet. The only PNA protection currently enforced by Chrome is "Only HTTPS/secure contexts can make private network subresource requests".
Preflight requests to check for private network server permissions aren't enforced yet. Navigation fetches (iframes, popups) and web workers aren't yet protected. Potentially trustworthy origins will always be exempt from PNA restrictions, so localhost/app1 could include a script (from a public site) that attacks localhost/app2. PNA is not a silver bullet for insecure local services.
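For reference, the gating PNA proposes looks roughly like this on the wire (a sketch based on the draft spec; the path, port, and origin are made up): the browser sends a preflight carrying an extra request header, and the local server must opt in with the matching response header before the real request is allowed through.

```http
OPTIONS /api/status HTTP/1.1
Host: 127.0.0.1:8080
Origin: https://public.example
Access-Control-Request-Method: POST
Access-Control-Request-Private-Network: true

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://public.example
Access-Control-Allow-Methods: POST
Access-Control-Allow-Private-Network: true
```

A local service that never sends `Access-Control-Allow-Private-Network: true` would simply never receive the cross-origin request under full PNA enforcement.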
brodie you should ban all types of heart,cat,flowers emojis so bots will be confused 💖
And series of those crying laughing emoji. In my entire life I have never seen content containing chains of that emoji that was not a waste of bandwidth.
@@SaHaRaSquad In my social media reading experience, commonly they are taunts, but some may be sympathy expressions.
Windows: they shot at me!
Linux, Mac: IM HIT I'M HIT OH GAWD
I think browsers should send telemetry for sites that do this so as new "angles" are discovered and investigated, known hackery sites can be checked for the next negative behavior(s).
Limited use, as they can morph, but better than nothing perhaps.
The problem with that, is that some web browsers will twist what you said, so that ALL telemetry is sent, just for the sole excuse of "YoUr SaFetY And MuH ProduCt!".
Just look at google. They got a lawsuit filed against them, and lost, because they lied when they said they aren't a monopoly, when they clearly are.
In today's age, where privacy is getting reduced more and more into an atom, some companies will try everything they can to squeeze out that last bit of privacy, and sell ALL of the remaining user private data to some 3rd party entity somewhere located in China, India, etc.
The ONLY telemetry they should send, should be EXTREMELY limited, or NONE at all. None is preferable, but, if not, it should be extremely limited.
The problem is, sometimes there are reasons to access local network devices from a website. For example, NAS web UIs, and a few pieces of software have a config page that connects to a localhost port. Also, apps like Figma have local helper apps
Wouldn't those configuration tools be running locally?
Use a VPN, don't open up remote access, ever
Signed,
The world
@@StephenMcGregor1986 This isn't at all about remote access, it's local access by an unauthorized party (the website). You could even say it's the reverse, since people will focus so much on preventing remote access (using VPN and such), so they forget local access isn't trusted per se either.
@@BrodieRobertson This is about the backend running locally, but the frontend being on a website. It is certainly a valid use case, and ideally PNA would be behind a permission, not blocked outright for it to still be possible.
@@GrzesiekJedenastkaThe only use case for this is development. Is it possible to bypass this? Yes, Chromium has some command-line flags that can do it. But they should not be enabled by default. Secondly, developer should not be an idiot and be implementing things in that fashion -- they should put the FE and BE on the same system (ex. localhost), or on a trusted internal domain and use CORS appropriately. External Internet facing website referencing localhost = bad bad bad.
1:11 Rare Windows W?
Ok, but now how do I auth a local webapp against a webservice? The callback needs to call my local webhost to confirm the auth call, doesn't it?
In Linux there is an HTTP server running for the printer (CUPS). It runs on 127.0.0.1:631.
I don't know if it can be exploited easily this way.
Not really. It's hard to do anything with IPP without allowed CORS, and the only thing you get by exploiting it is the ability to waste someone's ink.
@@GrzesiekJedenastka i disagree, hence, i always `apt autopurge -y cups-browsed`
If a software opening a network port on an arbitrary computer does not consider itself public, then idk what should.
Gopher forever. Screw browsers, I'm going back to 1991.
Just use the elinks browser 🤣
Try Gemini - it's the new and nicer Gopher.
Use gemini
@@happygofishing Both good.
@@OsvaldoGago Nah, I want away from www browsers.
bruh wtf that's such a stupid exploit, there's no way that it still works today right... right?
Only if HTTP servers running on your host system aren't secure. If they are written with the same security measures as public sites, this is not a problem. But some devs incorrectly believe local-only sites inherently have authentication, since you cannot access them without already being connected to the network (ignoring that other websites can make client-side requests to these private services), so they skip the security measures that exist to stop these exact issues.
This would work only IF servers don't have any CORS logic
@@bigpod CORS cannot apply to the first request; that's the point of PNA. Unfortunately they missed 0.0.0.0 in PNA. (It's only a 15min video, is actually watching it really too big of an ask?)
So if I understood well, they can POST, so they could send a payload for an RCE, revshell or anything like that, but they wouldn't be able to get the response for that request because it would be blocked by CORS?
Yes.
What's a "$4 tiery"
It's a teary eyed tier. Very emotional.
Windows not being affected due to it using that address differently is a very big bullet dodged, because running something as an HTTP server with an API is VERY common under Windows. That way you skip at least two levels of software design and jump straight into the JSON data structure. You have your privileged service running as System and the user part is just an HTTP client in disguise. That's something smaller underpaid developers do all the time.
Brave has an option to block localhost access.
Wouldn't using the public IPv6 client address also work for this?
Eighteen years. My head hurts.
Oh god I bet when they fix this it will break something in my dev setup
It only works on POSIX systems, which map 0.0.0.0 to 127.0.0.1
I wouldn't be worried at all, it's been there for 18 years, at this point it's a feature
the problem about RCE is that at times "remote" means "block non localhost from accessing this" - eg: docker web config, jupyter notebook, php webadmin pages, sql server lite, etc ...
Another use for 0.0.0.0 is to say "listen on all network interfaces". In at least two pieces of server software I've used, specifying "0.0.0.0" as the bind address means open the listening port on all interfaces, and just using "*" (where available) or "localhost" doesn't actually bind it to an address that I can connect to when developing webapps, due to some encapsulation or whatever.
You misunderstood the video; there are many uses for the four-zeros address. Some point out to the internet, like the default route, but the one used here actually points to localhost. The service doesn't need to listen on all interfaces; the four-zeros address will just point to localhost directly (!)
@@autohmae I didn't miss it, actually. I was just pointing out the most common usage of it that i've seen that wasn't mentioned in the video.
@@coladict ahh ok
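The bind-side meaning from the thread above can be sketched in a few lines: a socket bound to 0.0.0.0 accepts connections arriving on any of the host's addresses, loopback included.

```python
import socket

# Bind to the wildcard address: listen on every interface at once.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 0))     # port 0 = pick a free ephemeral port
srv.listen(1)
port = srv.getsockname()[1]

# The very same listener is reachable through loopback (and through
# any LAN address the host happens to have).
loop = socket.create_connection(("127.0.0.1", port), timeout=2)
ok = loop.getpeername()[1] == port
print("reachable via 127.0.0.1:", ok)
loop.close()
srv.close()
```

Binding to "localhost"/127.0.0.1 instead restricts the listener to loopback, which is the usual advice for development servers.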
09:31 CUPS (the service that talks to printers) has an admin panel on localhost:631, but it will return HTTP error 400 if it doesn't like the HTTP "Host" header
So for once Windows was safer? 😂
Incredibly rare Windows W
I'd rather shout everything in the open instead of relying on a third party to "encrypt" the shit and have no control over it anymore, certainly in times where free speech is threatened
PNA sounds like they still forgot the local network with public ipv6 prefix.
If they just stopped supporting ads on different domains, that would already be a higher security.
Yes, I was thinking the same for IPv6. I think you underestimate how many things can be loaded from other domains, hosts, etc.
Hm, I don't understand how the mentioned example gets through the CORS filter? Are those (image) GET requests or form POSTs?
CORS relies on the response
@@BrodieRobertson Yes, but why would those servers respond to CORS preflights if they don't want to be reachable?
Did they mention any of the mobile phone browsers?
Most of us run CUPS on port 631
Maybe there is an exploit there; probably you can make your printer very busy.
Or maybe there is a way to make it use a driver that would have access to your PC
@@nobodyimportant7804 Because of 0.0.0.0 Day exploit ?
Just went to 0.0.0.0:631, it gave me a "bad request" instead of a "server not found". In other words... I'm vulnerable. (:
@@nobodyimportant7804 No one owes you the rudimentary knowledge to set up a firewall, alright! Such entitlement! Jeez!!!
@@waltherstolzing9719that was unnecessary
@@swagmuffin9000 I mean I did forget to add the '/s' ... so I suppose *that* was 'necessary'.
"only affects macos and linux"
me, who recently switched from windows to linux:
This is breaking the sandbox and should never happen.
These kinds of bugs and issues come up every couple of years, for decades now.
What about mobile devices including phones?
Wonder if CUPS is affected. It runs a web interface on Linux, don't know if it does so too under macOS.
I am surprised the Linux community did not address this matter. Reminds me of the xz issue.
Any kind of possible attack vector is a bad thing. I don't care if it's currently considered minor, how can we be absolutely sure somebody can't find additional ways to exploit it? There's 0 reason to not fix it lol.
0.0.0.0 reason not to fix it
@@MNbenMN good one lol.
It's almost like these companies and organizations don't really want security but to spy on the user no matter what measures are taken.
I found an exploit where the browser can be used to sniff your local IP address, which has since been used in an exploit chain
Not only is it still in the browser but I didn't get a bug bounty for it
Imagine there is a default Ubuntu etc. distribution package that has a localhost RCE vulnerability. Any server running a web scraper immediately gets infected. Haha, this is bad bad.
I'm answering here because my answers disappear.
Being sure that the local connection will not be rejected is more superficial than being sure that it will be rejected.
This is because there are various types of local connections, the various localhost IPs, the various IPs of the system's network interfaces and the like.
So it is not at all certain that that network connection will not be rejected, you should try.
In my opinion it is a governmental backdoor
Would it be safe to block 0.0.0.0 in all ways using UFW?
I did some testing and 0.0.0.0 seems to be translated to 127.0.0.1 before it even enters the firewall part of the network stack. So no, blocking 0.0.0.0 will not do anything.
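That translation is visible without any packet capture (a minimal sketch; behaviour shown is the Linux one). With UDP no listener is needed, since connect() merely fixes the destination address on the socket:

```python
import socket

# On Linux the kernel maps a connect() to 0.0.0.0 onto loopback at
# connect time, so a netfilter rule matching destination 0.0.0.0
# never sees a matching packet.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("0.0.0.0", 9))      # discard port; nothing needs to listen
peer_addr = s.getpeername()
print(peer_addr)               # ('127.0.0.1', 9) on Linux
s.close()
```

If you want to block this at the firewall, you have to match 127.0.0.1 (or the loopback interface) rather than 0.0.0.0 itself.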
this is gonna break local server based authentication tho...
Well, you win some you lose some. CrowdStrike nuked a ton of Windows machines, but at least we aren't vulnerable to this 😆
It's more of a problem for anyone running a server that isn't CGNAT'd all to hell.
As in a production server with a steady connection and open ports.
I run a lot of weird network stuff on my end but none of it faces the Internet and to my knowledge doesn't have any way to do so.
It would be a miracle if something like this made it possible to get around that problem.
it's not a miracle, because it's been going on for decades, at least 1 exploit/security fix was so old, it got re-introduced (security fix removed).
You might want to check your DNS for 'rebinding' attacks, for example in Unbound, it's the private-address option
No I literally can't configure the gateway, it's locked out of port triggering, wifi radio on/off toggle, and the Xfinity app is a turd that instantly errors out on anything port forward or DMZ related. These options have been chipped AWAY from the user/admin for years and this trash has taken over 100%. No bueno.
9:34 Example of an application that hosts its UI on localhost: Syncthing
I loved the Windows users gloating that Windows blocked this... all browsers that run on Windows don't, so you are relying on Windows blocking it, and Windows Firewall is so ultra reliable... not...
Windows not being vulnerable has nothing to do with a firewall blocking access to the attack vector. This vector _doesn't exist_ on Windows (nor on FreeBSD) so the attack isn't _possible._
That's not to claim that Windows is more secure than Linux - or less secure - only that it's immune to _this specific vulnerability._
People do self-host websites, even meant for public use. I was doing so up until about a year ago.
That's not what's being talked about; the issue isn't where the site is hosted. If someone connects to your website and then you try to connect to the user's localhost, that's where the problem is
@@BrodieRobertson I think they meant that they self-hosted a website on their main box, not a dedicated server box? At least that's how I read it. In that case, wouldn't browsing the web on the same machine make that website vulnerable if there are extra endpoints you can only hit from localhost? I've done similar things, as I used to not be able to afford two separate machines to isolate boxes properly. Or am I completely misunderstanding this?
Game servers are another example of this which you may be running and only intending to open to LAN or VLAN
@@BrodieRobertson I meant exactly what @Firestar-rm8df said. Sorry if that wasn't clear enough. It was in response to your (admittedly rhetorical) question at 9:26 "... under what situation would a random person just be running an HTTP server?" My point was that it was once quite common and I doubt I'm the only one who actually was doing so until fairly recently. I had been self-hosting sites practically since the first domain registrar opened their doors, though, so maybe I'm just a maverick.
Wait, what? No zoom-out as the intro?
I frogot
@@BrodieRobertson understandable. ribbit.
You should be blocking all private/local addresses using uBlock Origin or Lite.
Ain't no way that works
If only that was true
UB, but for Browsers!
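For anyone wondering what such rules look like: uBlock Origin's static filter syntax can block third-party requests to local hosts. This is an illustrative sketch only; the actual "Block Outsider Intrusion into LAN" list shipped with uBlock Origin is more thorough than this:

```
! Block public (third-party) websites from making requests to local hosts
||0.0.0.0^$3p
||localhost^$3p
||127.0.0.1^$3p
||[::1]^$3p
```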
I thought 0.0.0.0 meant your server listens on any IP address.
It means many things
@BrodieRobertson read in the Game of Thrones "It is known" voice, it's hilarious. 🤭
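It does mean that too: as a *bind* address, 0.0.0.0 tells the OS to listen on every interface. The exploit is about the other direction, using 0.0.0.0 as a *destination*, which Linux and macOS (but not Windows) quietly route to localhost. A quick sketch of the bind side, using only the standard library:

```python
import socket

# Bind to 0.0.0.0: the server accepts connections on *every* local
# interface, not just one specific address.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 0))            # port 0 = let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

# A client connecting via the loopback address reaches the 0.0.0.0 listener.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
conn, _addr = srv.accept()
conn.sendall(b"hello")
data = client.recv(5)
print(data)                         # b'hello'
for s in (client, conn, srv):
    s.close()
```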
Websites being able to access local servers is a perfectly fine thing; there are innocent reasons to do it. But because enough devs build insecure local services on the assumption they're unreachable from the web, it isn't safe in practice. It really should be, but it's not. A solution would be to make all requests preflighted, regardless of method or headers: the request cannot go through until a preflight request confirming it is allowed has completed first.
The problem is that so many services nowadays still either run with CORS disabled or not implemented, or just simply allow everything.
@@bigpod CORS is on by default, but even though the client-side code can't see the response, the request is still sent. The solution to this is preflight requests, which make a separate request with the "OPTIONS" HTTP method that essentially asks the server whether the real request should be allowed. This is still only done for some requests, though, and notably some POST requests are still sent without a preflight, which can be a problem.
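To make the preflight idea concrete, here's a hypothetical minimal handler (Python's http.server; the origin and handler names are my own invention) that answers the browser's OPTIONS preflight and only approves a single trusted origin:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

ALLOWED_ORIGIN = "https://app.example"   # hypothetical trusted origin

class PreflightHandler(BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        # The browser sends OPTIONS first for non-"simple" requests;
        # only if we approve does it send the real request.
        origin = self.headers.get("Origin", "")
        if origin == ALLOWED_ORIGIN:
            self.send_response(204)
            self.send_header("Access-Control-Allow-Origin", origin)
            self.send_header("Access-Control-Allow-Methods", "POST")
            self.send_header("Access-Control-Allow-Headers", "Content-Type")
        else:
            self.send_response(403)
        self.end_headers()

    def log_message(self, *args):        # keep the demo quiet
        pass

srv = HTTPServer(("127.0.0.1", 0), PreflightHandler)
threading.Thread(target=srv.serve_forever, daemon=True).start()

# Simulate the browser's preflight coming from the trusted origin.
req = urllib.request.Request(
    f"http://127.0.0.1:{srv.server_port}/", method="OPTIONS",
    headers={"Origin": ALLOWED_ORIGIN})
status = urllib.request.urlopen(req).getcode()
print(status)   # 204: preflight approved
srv.shutdown()
```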
A local server should just have the website on it in the first place. Using an external web server to access something on your *local* network is a recipe for anti-consumer behavior, those products shouldn't even be purchased let alone used.
@@chlorobyte there are lots of legitimate use cases; Dell and Lenovo use something called a 'service bridge'.
Congrats, now it's impossible to offer a local companion app to enhance your website without shipping a web extension which the browser vendor is free to reject.
It's not safer because you can still ask the user to download a native executable
It's less convenient because you need BOTH an approved web extension and a native app to have native code accessible to a website
It's just another set of third party vendors who get to decide whether your project can live for no reason and to no one's benefit whatsoever.
I wonder if this has any implications for consoles... isn't the PS4/PS5 basically just a locked-down FreeBSD machine? Might be fun to break the old PS4 out of the basement 🤔
The PS4 can be jailbroken as long as it has a web browser; however, you will lose access to the PSN servers at that point, because Sony detects that your console was jailbroken. (This was a similar issue with PS2s, which is why they added the DNAS servers, which were shut down around 2009 or 2010, I believe. It was basically a sort of authenticator, checking whether your console had been tampered with in any way, and if so, it'd prevent you from playing online. Since those authentication servers were shut down long ago, there's no official way to play online any more. Unofficially it is possible, since there are ways to bypass the DNAS check, but you get my point.)
It might be possible to jailbreak it via the USB ports, but I'm not sure.
What I do know is that the PS5 doesn't have a web browser, unlike the earlier ones, meaning it can't be jailbroken this way.
So many 0000000000000000000000000...
It is very noughty.
@@SeekingTheLoveThatGodMeans7648 But to quote the Aero Bar advert "it's the bubbles of nothing that make it really something.".
idk, but when I see 0.0.0.0 allowed I delete it lol
Browsers are dying anyways, let's just stop using web browsers.
What are you going to use to make this comment
@csongorszecska I'm not them, but I think they might mean "just use an AI for info (like ChatGPT or Gemini) and apps for the rest".
To which I'm gonna say:
1) Good job, now all info depends on an LLM system that needs to be constantly updated, consumes a lot of power, is ripe for exploitation through the way it gathers that info, and is currently prone to imagining stuff that doesn't exist.
2) Apps can potentially be exploited by this just as much.
So yeah, bloody stupid.
Typical open-source behavior. "Of course the browser should allow websites to make requests to services on localhost, just like it should tell websites what operating system you're running, your local username, your screen resolution, etc, etc, etc. What's the problem?"
Fu*k this exploit, I can't do my new driver's license because of this.
This is another reason not to use Firefox, such a joke product
OK, neckbeard.
that is why CORS exists
Tell me you didn't watch the video without telling me you didn't watch the video.
CORS will block the response but not the request being sent
@@BrodieRobertson And honestly that's not the browser's fault if your service is kinda trash and doesn't check for CORS violations early enough in the pipeline.
@@bigpod this is inherent to CORS
@@bigpod How exactly would a service prevent a request from being sent before it receives the request? If you figure that out, the next application of the technology would be reporting next week's lottery numbers today.
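Exactly this. A quick sketch of why CORS is no protection here: the side effect happens server-side the moment the request arrives, before any browser decides whether the page may read the response. Hypothetical names, standard library only:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

hits = []   # records every request the server actually received

class Recorder(BaseHTTPRequestHandler):
    def do_POST(self):
        # The side effect fires regardless of any CORS policy.
        hits.append(self.path)
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self.send_response(200)
        # No Access-Control-Allow-Origin header: a browser would hide this
        # response from the page, but the request already did its damage.
        self.end_headers()

    def log_message(self, *args):    # keep the demo quiet
        pass

srv = HTTPServer(("127.0.0.1", 0), Recorder)
threading.Thread(target=srv.serve_forever, daemon=True).start()

# The "malicious" cross-origin POST: it still reaches the server.
req = urllib.request.Request(
    f"http://127.0.0.1:{srv.server_port}/do-something",
    data=b"x", method="POST")
urllib.request.urlopen(req)
print(hits)   # ['/do-something'] -- the server acted, CORS or not
srv.shutdown()
```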
Welcome to the present day, where everybody has realised that their fucking code is shit.
but 0.0.0.0 is a network identifier ffs... how can this... what the fuck...
That's fucked up on so many levels...
14:10 Interesting that you say that *there* after adding an extra 0 that doesn't belong. I don't think you ever had only 3 in the video, though.