UDP was always lighter than TCP, so it was easier to compress data and make your own implementation of security, congestion control etc, since you can compress this data too. A lot of videogames use it as basement for their protocols. The only hard thing about this, is that you need both the server and the client sides to be aware of your implementation.
@@rich1051414 The maximum size of a UDP payload is 65507 bytes; the MTU of Ethernet is usually 1400-ish. If we are talking about the minimum reassembly buffer size, it's the same for all of IPv4 (576 bytes) and doesn't depend on what protocol you are using. Given that TCP uses more technical info, AND with a constant buffer size, it actually leaves less space for payload in that case, so the "safe payload" would be less than 508 bytes for TCP. Also, the safe payload by definition cannot be bigger than the MTU for a single packet, which is 1500-20 for Ethernet (if we don't fragment packets). Using "safe payload size" in this sense is almost meaningless, given that almost every channel these days has a 1400+ MTU. Edit: grammar
@@thewhitefalcon8539 Warframe's netcode developers managed to get 82% compression with both payload and congestion control over UDP, bringing it down to 55% of the weight of the original uncompressed UDP packets
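(For reference, the arithmetic behind the numbers in this thread - a minimal sketch assuming plain IPv4 over standard 1500-byte Ethernet, with no header options, tunnels or fragmentation:)

```python
# Per-packet payload budget on a typical Ethernet path. Assumes IPv4
# with no header options and no tunnel overhead; VPNs, PPPoE and IPv6
# all shrink these numbers, which is why ~1400 is a common
# conservative figure in practice.
ETH_MTU  = 1500
IPV4_HDR = 20   # IPv4 header without options
UDP_HDR  = 8
TCP_HDR  = 20   # TCP header without options (timestamps etc. add more)

print("max UDP payload per packet:", ETH_MTU - IPV4_HDR - UDP_HDR)  # 1472
print("max TCP payload per packet:", ETH_MTU - IPV4_HDR - TCP_HDR)  # 1460 (the classic MSS)
```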
QUIC was designed to prevent that, because it already has encryption built in: they encrypt everything so that the middle boxes can't see anything anymore.
@@autohmae The first packet from a client is not encrypted. A smart router can use it to distinguish a QUIC packet from other UDP packets and record the IP addresses of the client and the server, or just block the connection. Anything else would require a MITM attack with access to legit CA certificate keys, and it would take quite a powerful router with hardware SSL acceleration to implement. No other metadata is openly transmitted.
@@Kirillissimus It's true it takes a few initial packets to establish the encryption, just like previous versions of TLS/SSL. Every packet after that in a QUIC connection is recognizable as a QUIC packet and has a connection ID. One detail is important: that connection ID will be different when multipath or a QUIC load balancer is involved, so it can't be correlated with whether it's from the same connection or not. QUIC works despite NAT, and NAT can change IP addresses and port numbers, so it's clear that those can be changed, as long as they are changed and restored for all packets in a connection in the same way (for example: change the IP address of the sender for a client-to-server packet and restore the client IP address of the recipient for the packet from server to client).
@@autohmae I even saw somewhere that QUIC supports connection resuming without a full reestablishment procedure with key exchange and everything. I don't know for sure if it is a part of the latest specification or not but if it is then it will make the job of monitoring such connections even harder since you need to share all the MITM related data with other routers and maybe even other ISPs and you need to store them for who knows how long. P.S. Thanks for the info about connection IDs, I completely forgot about them.
@@Kirillissimus Yes, QUIC supports 0-RTT resume. If you reconnect within a few minutes (not sure of the precise time) you just "connect" with no handshake. You can see it in Wireshark if you try: take a capture of a visit to a website you know uses QUIC, close the webpage, then connect again, and you will get a "no handshake" connection.
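(To make this thread concrete, here is a minimal sketch of the only cleartext fields a passive observer gets from a QUIC long-header packet, following the invariant layout in RFC 8999. It is illustrative only - no full bounds checking, not a real parser - and everything beyond these fields is encrypted:)

```python
# Extract the version and connection IDs from a QUIC long-header
# datagram (RFC 8999 invariants). Short-header packets don't even
# carry a self-describing connection ID length, so a middlebox that
# didn't see the handshake can't parse them.
def parse_quic_long_header(datagram: bytes):
    if len(datagram) < 7 or not (datagram[0] & 0x80):
        return None  # short header, or not QUIC at all
    version = int.from_bytes(datagram[1:5], "big")
    dcid_len = datagram[5]
    dcid = datagram[6:6 + dcid_len]
    scid_len = datagram[6 + dcid_len]
    scid = datagram[7 + dcid_len:7 + dcid_len + scid_len]
    return {"version": hex(version), "dcid": dcid.hex(), "scid": scid.hex()}
```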
The new problem it creates is that IDS/IDP software can't inspect webpage content for malware (or ads to block) at the firewall/gateway because all the different content component streams are inside an encrypted session. Before, all the different components of a webpage were separate streams and the URL of each component could be inspected even if the payload was encrypted. Now, it's all hidden. This forces IDS/IDP inspection to the user endpoint instead of the company gateway. A real challenge when guest networks are provided where IDS/IDP software can't be mandated. We've sacrificed security for speed.
Hey, at least the new protocol on top of UDP could fix the TCP issue where malicious packets (spoofed RSTs) can close any connection, even when TLS traffic is going over TCP.
@@richardclegg8027 It will, unless it is manually blocked by a router's firewall somewhere near the endpoints of your connection. Basically the connection will still not be guaranteed, but the chance of success should get significantly higher.
@@richardclegg8027 You get rid of NAT... (And most firewalls can pass unknown / specific protocol numbers) (NAT needs to understand protocols - it needs to mess with port numbers)
@@GertvandenBerg yes. If you got rid of NAT, loadbalancers, traffic shapers, management boxes and so on the internet would be back closer to its original design spec and you could implement new transport protocols.
Something that has puzzled me: formats like DVD had error correction. They were probabilistic, and as soon as you had enough samples to cover a set of data you could ship it; in return they took a hit on the data rate. IP had no redundancy: with TCP you had to wait for every packet, and with UDP you just didn't care. Couldn't there be something in between? A protocol that could handle occasional packet loss, where the level of redundancy was dynamically tuned based on line quality? Further, if you send an image over TCP it is guaranteed to come in pixel perfect, which is a waste. It would be nice to be able to mark sections of data to be more UDP-like so they can come in faster. If this is exactly what the new protocol is doing, sorry for reinventing the wheel.
The bigger problem with this is that, if you're being probabilistic about what data is being piped to someone, and they can afford some parts of it to not be accurate...what do you do when that inaccurate data is executable code?
@@paulpinecone2464 DVDs aren't for computing code? *Stares at PS2 consoles, Xbox 360 consoles, and PC DVD autorun.exe solutions* Someone should tell them that.
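(The "in-between" idea in the parent comment exists and is called forward error correction. Below is a minimal sketch of the simplest flavour - one XOR parity packet per group, letting the receiver rebuild any single lost packet without a retransmission round trip. Early Google QUIC experimented with exactly this XOR scheme before dropping it, and real systems use stronger codes such as Reed-Solomon; the helper names here are made up for illustration:)

```python
# Toy XOR-parity FEC over a group of datagrams: one extra parity
# packet lets the receiver rebuild any single lost packet in the
# group without a retransmission round trip.
def xor_parity(packets):
    size = max(len(p) for p in packets)
    parity = bytearray(size)
    for p in packets:
        for i, byte in enumerate(p):
            parity[i] ^= byte
    return bytes(parity)

def recover_lost(surviving, parity):
    # XOR of the parity with every surviving packet yields the lost
    # packet (zero-padded up to the longest packet in the group).
    return xor_parity(surviving + [parity])

group = [b"hello", b"world", b"fec!!"]
parity = xor_parity(group)
assert recover_lost([group[0], group[2]], parity) == b"world"
```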
I guess the obvious question is: what happens when the "middle boxes" start to make assumptions about the QUIC packets? If firewalls start to look into this, how will the next changes be implemented? When the problem becomes "We can't change QUIC because of the middle boxes", what will happen?
HTTP/3 looks suspiciously like what came before it (HTTP/2), though; nothing really changed except how the data is transported (a different type of streams, using QUIC with TLS 1.3).
I have a habit of looking at the bookshelves of people in YouTube videos, cherry-picking ones that look interesting (in this case: Origami 3, Big Queues), and adding them to my Amazon wishlist. Yes, I have a problem (admitting it is the first step to recovery)...
A person could implement TCP as a fun little project. Implementing QUIC, since it contains TLS1.3, would be rather time consuming. Presumably one could try to just ignore the TLS part? I don't know enough details to tell if that even makes sense.
You can just bring in a standard open-source TLS 1.3 library if you are implementing QUIC for fun. Actually it is probably easier, as you are now implementing in user space, so you don't need to fiddle with the kernel.
You can't, sorry. One of the fundamental aspects of QUIC is a hard requirement for always-on encryption - i.e., TLS 1.3 is absolutely necessary. That being said, if you wanted to play around with it just for the fun of it, there are plenty of high-quality libraries that help you implement the really nasty details of TLS - namely the encryption primitives. Look into NaCl/libsodium, for example.
@@richardclegg8027 Thank you for answering my comment :) Very good video as well. It's so common to have to do some workaround; very interesting to learn that HTTP/3 is basically that!
What's going to prevent QUIC from becoming another "middle box" problem or ossification problem? It reminds me of the "god class" in OOP. Ultimately not a great idea, just a hacky novelty workaround that comes with its own set of brand new problems.
Brilliant! And a hack. I absolutely love it when smart people think their way out of a dilemma using available resources. I took a course on networking at Uni but haven’t been remotely involved in it since so I understand enough to get it. Brilliant! But a hack. 😂
QUIC may know more about the application. TCP will ensure that you get all the data, so it asks for missing packets and will wait until it gets them. In a video call this may mean a 1s delay between two frames - you will notice this. QUIC may assume that you won't notice a missing frame, and will not wait for the missing IP packet or ask for a new one. This may not sound like a huge thing for a single user, but if your connection is bad and you ask for 5% of packets to be resent, this means at least 5% more traffic. 5% more traffic is not much on the client side, but in a data center under high load this is really expensive: you need more bandwidth, more CPU and RAM for the boxes, more cooling, ... The networks cannot handle that 5% extra traffic any longer; they are at 98% of the actual maximum. They either have to reduce the traffic or sit and wait for new hardware. Brotli compression and HTTP/3 will help them a lot, and maybe this will reduce more traffic than new cloud apps and higher resolutions will add.
Using UDP has the benefit of saving the resources required for maintaining session information for TCP. How does QUIC compare since it is now handling things? Is the load on a busy web server higher, lower or about the same?
CPU load is higher. Remember QUIC is doing everything TCP does -- it is maintaining session state -- it's a reimplementation of TCP at the application layer with some extra bits. However, it's implemented in user not kernel space so it's using a bit more resource.
This man's disrespect for UDP is disgusting. One has to be blinded with prejudice to not realize that it was left there before and in anticipation of ossification specifically to allow you to build your own protocols on top of it once arbitrary traffic ceases to be an option. Man spent half the video explaining the problem solved with UDP, a quarter describing the solution using UDP and another quarter venting his prejudice against UDP.
For almost 15 years, applications that need low latency and high reliability have been using frameworks such as Apache MINA, Netty or Grizzly to build software-configured network stacks. The reason is simple: legacies in the networks. The network guys do not want to evolve the network stack. This ends up with app developers using UDP and building anything on top to work around it. Nothing new, just another application going down the same road with its own protocol. Here it is HTTP, and the apps are web browsers and web servers.
TCP is perfectly fine, but you can't really improve on it. So if you know you are in a realm where some assumptions grant optimizations, QUIC + UDP is the answer.
HTTP/2 added support for multiplexing, meaning you send multiple data streams over a single connection. But TCP simply isn't well suited for that, as it does not allow for reordering. Maybe some packets got lost for one data stream but not the other? You're totally out of luck: you have to wait for the earlier data of the other stream to be re-transmitted.
I think it's because one TCP connection only supports a single stream so doing lots of concurrent operations (e.g. to retrieve the many bits of a website) either requires lots of parallel TCP connections or introduces a lot of latency (if lots of requests are performed sequentially). Lots of TCP connections requires a lot of server resources and doing operations sequentially makes websites take longer to load. QUIC supports multiplexing of streams so a single QUIC connection can support many operations at the same time.
I mention it quite briefly. Lots of small things: getting the connection open quickly by combining TLS with the transport handshake, allowing receipt by the application of out-of-order packets. None of these are in standard TCP.
@@richardclegg8027 So normally, where we were taught that UDP has disadvantages over TCP, it said: 1. Packet delivery is not ensured - it's on you, the developer, to ensure the packet was delivered and decide what to do if it's dropped. 2. Packets may come out of order - it's on you to reorder them and define timeout parameters for (1). Based on these, the solution I would make would be a version of a sliding window protocol where the receiver has a window size > 1. Is that everything that QUIC is? A standardised way of doing just that, so that instead of writing all that into your apps you can let QUIC do the work for you? Or does it do more on top of this, like traffic congestion control etc.? I didn't really understand this, because if it does that kind of control, that would mean it's aware of a connection or packet. In my networks course they didn't touch multicasting and such, so I don't know what protocols are used there or how it's generally transmitted. From the comments above it seems QUIC was targeted mostly at multicasting; some issues arise for TCP, so they used UDP. Is it to standardise performance? To add a layer of abstraction to make it easier for devs to use UDP? All of the above?
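(Since the question above asks whether QUIC is essentially a standardised sliding window over UDP: here is a minimal sketch of just that reliability slice, with a made-up wire format of a 4-byte sequence number per chunk and the receiver echoing back the sequence numbers it has received. QUIC standardises this shape and then adds loss-detection timers, flow control, congestion control, multiple streams and mandatory TLS 1.3 on top:)

```python
import socket
import struct
import time

# Toy selective-repeat sender over UDP. Hypothetical wire format:
# 4-byte big-endian sequence number + payload out, 4-byte ack back.
def send_reliable(sock, dest, chunks, window=8, rto=0.2):
    acked, sent_at, base = set(), {}, 0
    sock.settimeout(0.05)
    while base < len(chunks):
        now = time.monotonic()
        # (Re)send anything in the window that is unacked and stale.
        for seq in range(base, min(base + window, len(chunks))):
            if seq not in acked and now - sent_at.get(seq, 0.0) > rto:
                sock.sendto(struct.pack("!I", seq) + chunks[seq], dest)
                sent_at[seq] = now
        # Collect one ack (or time out and loop again).
        try:
            data, _ = sock.recvfrom(4096)
            if len(data) == 4:
                acked.add(struct.unpack("!I", data)[0])
        except socket.timeout:
            pass
        # Slide the window past the contiguous acked prefix.
        while base in acked:
            base += 1
```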
Faster train on the same tracks. Replacing the tracks is going to be inevitable, but this buys us time. I'll naively call this beautiful while deluding myself into thinking it will get dropped immediately when it's no longer needed and won't cause any issues in the process.
The tracks will definitely stay until they naturally rot away, and if something is not compatible then no one apart from some small batch of nerds will use it. This is how businesses, and human civilization in general, operate. It will take many decades until it is possible to use protocols other than TCP and UDP, and IPv6, everywhere. But we need faster connections right now, not when our slowpoke system is finally ready for it.
It's not even such a hack. TLS is already a transport protocol running on top of a transport protocol. I mean, basically, they've just taken DTLS (TLS-over-UDP) and stuck retransmissions and multiplexing (aka a header with a stream number and a packet number) on top of it.
The idea behind separating the stack into distinct layers was to allow the layers above to change without any effect on the layers below. If you are building at the physical layer then you don't care what the network layer looks like; they are just bits. At the network layer you should be able to ignore the transport layer; it's just payload. So new transport layers should be able to be added without any change to the network layer, since it is just a change in IP payload.

The problem comes when the network-layer equipment isn't content treating all payloads the same and starts reaching into the next layer of the stack to make optimizations or assumptions that make its job easier. Once it does that, the clean separation of layers is gone, and changes to the layer above can break things. For instance, if your router is running QoS, it is snooping the transport layer to see what type of traffic it is and decide whom to prioritize. If it receives a packet with a transport layer it doesn't recognize, it is just going to throw it away (probably assuming it is malformed).

If every IP packet were treated identically by every hop between you and the server, regardless of protocol, you could throw on any sort of new protocol and know it would work. That is not what happened, though, so now we are basically stuck with what we have, or waiting 20+ years for full adoption of a new protocol.
6:25 > _"root-ers or r-out-ers (en-us)"_ ohkay, thanks for clearing up! root-ers it is for me from now on. routers pronunciation - european vs american
I tried implementing a TCP-like protocol over UDP, and what I found is that Windows sucks at scheduling UDP from a user-mode application. My destination device had a limit on how fast UDP packets could be received without losing packets. It would be nice if there was a way to send UDP packets at a scheduled time.
@@richardclegg8027 the limit was more with the device I was having the PC connect to. The problem I ran into was that I had to do a 1ms sleep between each UDP packet that was sent.
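(A minimal sketch of the usual user-space workaround for this: pace sends against a running deadline instead of sleeping a fixed 1 ms per packet, so timer jitter doesn't accumulate; the busy-wait at the end trades CPU for precision. Linux also offers SO_TXTIME for kernel-scheduled transmit times; Windows has no direct equivalent, which is what the commenter ran into:)

```python
import time

# Pace datagrams at a target rate using an absolute deadline plus a
# short busy-wait for the final stretch, since OS sleeps are only
# accurate to roughly a millisecond.
def paced_send(sock, dest, packets, rate_pps=10_000):
    interval = 1.0 / rate_pps
    deadline = time.perf_counter()
    for pkt in packets:
        sock.sendto(pkt, dest)
        deadline += interval
        while True:
            remaining = deadline - time.perf_counter()
            if remaining <= 0:
                break
            if remaining > 0.002:
                time.sleep(remaining - 0.002)  # coarse sleep
            # else: spin for the last couple of milliseconds
```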
All the TCP header mechanisms are in the UDP packet data: SYN, ACK, FIN etc. It does what TCP does, using similar mechanisms, with the header in the UDP data and the endpoints in the application layer rather than the transport layer.
I would say it's a beautiful workaround. To say it's horrible, you have to know your ways in the monolith itself, to show us the other way it can be done. Otherwise, carrying planes on ships, or HTTP on QUIC [on UDP] is just a beautiful workaround to me
Excellent video, but I still think the web needs a massive rethink. Even the way web pages are written. It's all a bodge on a bodge. We could do with a web 2.0.
I think we would be on web 4.0. :-) People call this a "clean sheet" design, I mention it in the video -- delete the internet and start again removing all the bodges.
Played with QUIC implementation of Nginx and it worked just fine (installation wasn't straightforward though). It's still beta so may not be suitable for production environments just yet.
If TCP is still being used as a fallback, why not develop a TCP 2.0 and treat TCP as the fallback? Is it just that TCP 2.0 would hardly ever work with the current network hardware in the wild?
In games we used UDP with reliability layers and security layers at the app level over it for years. With this we did exactly what QUIC does: use channels with different priorities, where for some you don't even care about security or reliability, and all of this over the same socket. This always resulted in greater performance and lower costs. Very happy that QUIC has such a quick adoption rate.
File-sharing networks have also been using UDP for many years. eDonkey, Kad (Kademlia), BitTorrent, and several other P2P networks/protocols use UDP.
What games have reliability or security layers? lol the most I've ever seen is a ping, or even just a timer expecting to be constantly receiving data
@@JohnRunyon literally every game, otherwise the players would desync horribly
I could listen to Richard all day, what a great explainer!
but soooooo slooooooooooow
We started playing with QUIC about 5 years ago when Google started supporting it. It was useful to us in the satellite world, where we were used to having to do all sorts of optimisations to speed up HTTP page loads due to the long round-trip time for all those SYN/ACK and TLS packets. It was also useful in early 4G networks where the round trip could be relatively long. QUIC gave a glimmer of a world where long delay times could be overcome 🎉
Hadn't even thought of that but for satellite it is a real godsend.
At first thought, I was cringing at the thought of making UDP into a new TCP.
But after some thinking while watching this video, it actually kind of makes sense to have one general-purpose protocol like TCP for the general situations: the ones where we need simple data ordering, data integrity and congestion control, but where it wouldn't make sense to develop a whole new transport-layer protocol. And then leave UDP as a sort of blank-slate protocol to build on top of for specialized applications such as the web, just as is done with QUIC. You could build even more optimized, specialized TCP-like protocols on top of UDP for other applications such as e-mail or video streaming.
As is said in the video, this ensures that the transport layer is kept compatible with existing software and hardware.
RTMFP
Yes, the ossification problem is real. But 40+ years of architecture have taught us that it is often useful to offload work from the end hosts to something in the middle, and some of that work requires more specialized knowledge and inspection of the data going back and forth. I'm not even talking about content inspection for malware etc., but policing the basic flow of network packets for things that might be disruptive to that flow (e.g., SYN floods, connection assassinations). QUIC can claim that hiding itself inside UDP solves ossification, but really it just means that either: A) the load of all of that functionality currently being done by middle boxes will have to shift back to the end hosts, or B) the middle boxes will start understanding QUIC and making assumptions about how the applications are using it, which will just cause the network to ossify again, this time around QUIC.
QUIC has connection numbers and IP/port combinations, you can do those things you want if you really want to.
The middle boxes could try to understand QUIC, and that is a good point. But beyond the initial handshake it is encrypted. If the middle boxes want to ship UDP anyway, though, they would have to just put up with arbitrary non-QUIC data.
It will be interesting to see how it develops.
You could though easily block a QUIC synflood (say).
I might be misremembering here (it's been a while since I studied QUIC in detail), but isn't a bunch of those snoop-worthy details hidden behind always-on encryption? As I remember it, QUIC is designed to hide away the details of what's going on from those middle boxes, exactly because its designers have foreseen that the middle boxes would otherwise indeed return to their dastardly meddling ways.
The beauty of the design and concept behind QUIC is that, for one, it realises that modern end devices are powerful enough not to need much offloading assistance, and two, that instead of locking in behind hardware accelerators, which would hinder any attempts at change for years and years to come, defining a lot of the stack's handling in software means we can make changes as necessary much, much more easily.
While those guys were saying it was disgusting, video game developers had been building reliability on top of UDP already.
For literal decades no less. Like since at least Quake 3 IIRC.
Video game developers were using UDP because that's what UDP was designed for: data which doesn't require 100% packet reliability. Sound, video, relative coordinates...
Sorry, they were no magic future readers :)
"Reliably"- please the last thing I want to hear is reliably and UDP at the same time
Yes. When latency becomes an issue in real-time comms you generally need a bespoke solution… which is one reason why UDP exists.
@@jancizuletek670 Dummy, he said reliability BUILT ON TOP OF, which is exactly what QUIC does
We did this at Tivoli over 30 years ago when we did not want to tie up servers for minutes waiting for TCP to time out when clients were not up (very limited in the number of sockets and memory back then). When trying to manage a bunch of computers in an office when communications should be milliseconds, we did not need long TCP waits tying up the server. Essentially this moves TCP up to user space where it can be enhanced with the minor overhead of using UDP/IP instead of just IP underneath.
Oh, you had a complete user-space TCP 30 years ago. I did not know of that.
@@richardclegg8027 TCP is very old, it was developed when computers had kilobytes of memory for processes (think PDP 11). It is very small and simple, not much code. So, we were able to do what TCP did in user space, handle a bunch of connections at once and not worry about timeouts and filling up kernel capacity. Similar to what was described in this video. We had megabytes of memory at the time.
@@mike123abc I know the history of TCP, I teach it -- I was surprised by the "user space" reimplementation you mentioned. On a modern system you could just tweak the timeout from config.
@@richardclegg8027 30 years ago, the network stack for Windows was... well, there wasn't such a distinction back then between user space and kernel space :) but it was certainly a user-supplied implementation, at least. And there were plenty to pick from.
@@JohnRunyon It also depends where you were. In '93 in the UK, at least, we used other protocols that have since died, like the Coloured Book protocols. A bit wild west.
As a former firewall tester, and someone who still needs to think about network security and how things behave in large systems with tens of thousands of clients (potentially maxing out those 65535 ports), this just gives me all sorts of nightmares.
IPv6 will take care of that with its global unicast addresses eliminating the need for a NAT at all
@@ashleigh. IPv6 isn't fully here just yet though.
@@unicodefox Yes it is, just some ISPs are holding back so they can take advantage of their IPv4 monopoly
@@ashleigh. I would still want a NAT though, because I'd rather not have random threat actors be able to directly send packets to my device.
I don't think using UDP in this way is a horrible hack. It might be nice to have new protocols on top of IP though.
To me, it feels like spaghetti programming, using UDP this way.
Make a branch here, do also that, here we squeeze in some of that data, there we split it up, here we (try to) collect,...
It's not simply unordered TCP.
@@coreC.. - I kind of agree. But adding new protocol types on top of IP is a huge undertaking, and I can easily understand using UDP to develop something before then going through the effort to make it an official protocol on top of IP.
10:00 Hyrum's Law: "With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody."
good video as ever with Richard. When I looked at QUIC I came to the conclusion that it was essentially TCP re-engineered in user space at the application layer. Most exciting thing that's happened for network devs since the 90s!
Love the explanation. Also love Dr. Clegg's self-portrait to the left of him.
If you want a better protocol that doesn't rely on UDP, then there is SCTP: a better general-purpose transport protocol that combines advantages of both UDP and TCP. It's also standardized, has existed since the year 2000, and is supported by many operating systems out of the box - but it's near impossible to get it working out in the wild on the internet, because too many crappy routers or network admins don't support it or overblock everything that's not TCP or UDP over IP.
If SCTP deployment was a success QUIC would probably never have existed.
To be fair, UDP isn't _that_ horrible, even if it's an unnecessary complication. It's a very bare-bones specification with a simple packet header.
To those not well informed on networking protocols: UDP isn't outright bad. The fact that UDP doesn't have the SYN/ACK handshake or per-packet acknowledgements means it has significantly less overhead, and it is therefore good for things like music and video streaming, where throughput matters more than perfect data integrity (your eyes/ears won't notice a packet dropped or out of order, whereas you will definitely notice a missing packet in a file transfer for a program).
UDP is great yes. Simple and efficient. For bandwidth - well to fully utilise a link you need to put some form of "find bandwidth" algorithm on top of it and that is going to look like congestion control.
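(For the curious, the classic shape of that "find bandwidth" algorithm is AIMD, additive increase / multiplicative decrease - a minimal sketch of the control law alone, divorced from any real transport:)

```python
# One AIMD step: grow the congestion window by a constant while
# transmissions succeed, halve it on loss. Probing upward until the
# network pushes back is how a sender "finds" the available bandwidth.
def aimd_step(cwnd: float, loss_detected: bool,
              increase: float = 1.0, decrease: float = 0.5) -> float:
    return max(1.0, cwnd * decrease) if loss_detected else cwnd + increase
```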
You forgot the main advantage of UDP over TCP: its application interface is actually packet based. TCP sacrifices message boundaries in order to do its magic, and you need to either be stream oriented yourself or implement it all manually (or use some library for an extra application-level protocol on top of TCP but below your own protocol). If you want serial connection emulation over the network then TCP does exactly what it takes, but it makes simple low-bandwidth custom TCP-based protocols much harder to design and implement than UDP-based ones, while the simple protocols actually tend to all be packet based. They generally look like "client-(datagram)->server, server-(ack)->client, client(sample_idx++)", so you really need your message boundaries and synchronisation.
And that is on top of the fact that TCP itself is much harder to implement than UDP, which becomes a real pain for low-computing-power custom embedded network devices that cannot run a full OS. All the pain sometimes just leads to more pain, and it is all for the sole reason of not correcting your protocol specs while you still could, leaving the TCP curse there.
Also, did I mention that multicast and other high-bandwidth-utilization network-level tricks are only possible with UDP and not with TCP, while for some applications it is critical to have some of them? Sometimes TCP is not just wasteful but completely unsuitable for an application.
@@Kirillissimus message boundaries can be useful for sure and QUIC makes full use of that.
@@Kirillissimus but I would say that QUIC is more heavyweight to implement than TCP. You need to implement all TCP mechanisms plus some new ones.
@@richardclegg8027 and TLS is also mandatory
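(The message-boundary point above, in code - a minimal sketch: on TCP you must invent your own framing because recv() may return any slice of the byte stream, whereas a UDP datagram is its own boundary. The 2-byte length prefix is just one illustrative choice:)

```python
import struct

# TCP: add framing by hand, e.g. a 2-byte big-endian length prefix.
def send_msg(sock, payload: bytes):
    sock.sendall(struct.pack("!H", len(payload)) + payload)

def recv_exact(sock, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock) -> bytes:
    (length,) = struct.unpack("!H", recv_exact(sock, 2))
    return recv_exact(sock, length)

# UDP: the datagram itself is the boundary.
#   sock.sendto(payload, addr)
#   payload, addr = sock.recvfrom(65535)
```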
I think everyone who's dabbled in network programming has thought about adding reliability etc. to UDP, and then usually rejected it as too hard/redundant/a ghastly hack. I think where this is winning is in the adaptation to the application protocol rather than any specific re-architecting of the network.
Tons of systems add reliability and features to UDP: bittorrent, gaming, VPNs, etc. are all built on top of UDP.
I only watch these videos to see people writing on dot matrix printer paper 🤣. I seriously love that aspect of the videos and I don't know why. Great explanation, thanks for imparting your knowledge!
This is great, thank you. Now please do one on how TCP/IP does not fit into the OSI model! It is such a common misconception, people love to bring up OSI layer numbers.
So QUIC is basically implementing the features of TCP that make it better than UDP, but at the application layer and over UDP? Interesting approach...
An old Irish saying, "If you're wanting to get to there, I wouldn't be starting from here". Or something like that...
This video was so well explained and fantastic!
I'm not saying QUIC is a bad thing, but it's evident that it originated from application developers, not network engineers/architects. And it's not just that the existing middleware boxes can't handle QUIC, it's that some of their functions were created to address specific problems which don't go away just because applications can use QUIC. If it remains opportunistic and optional, we may be OK, but I wouldn't be surprised to see something like peering link congestion when networks can't traffic engineer QUIC.
This is perfectly fine in my book. We had trading systems doing this in London in the late '80s.
50FPS 👀 strikes me as particularly legit for content made in Europe. I’m not sure why it garners respect from me, but it does.
I’m going to go with ‘it’s horrible’. My main reasoning is that things like volumetric DDoS is a lot harder to defeat with UDP - address spoofing is easy, generating large volumes of traffic will be trivial and separating good traffic from bad will be a lot harder.
yeah but UDP exists anyway
Which is why UDP-based protocols like QUIC have some kind of authentication, so endpoints know which packets they can drop.
Also, things are improving when it comes to BGP routing, with gradually more and more deployment of RPKI and ROAs for BGP Origin Validation.
More of Richard G Clegg
Well, if we can't build or change the bottom of the stack, it makes sense to work on the top!
How is building application-level reliability on top of UDP a hack? Services like RakNet do precisely that and it's not like UDP prevents what you can do with it. It's just sending packets.
Cuz it's supposed to follow the clean OSI / TCP/IP layered architecture. For example, your OS is responsible for congestion control in TCP, and thus you don't need to modify each application to change it.
Yeah exactly as VA Demon says.
If it was just reliability I would not have a problem; people have done that for decades. QUIC reimplements all TCP mechanisms a layer up. That is what is hackish to me: in order to get a small tweak they implement everything TCP does, now running out of kernel and in user space. It is like you wanted to tweak your car engine to be a bit more efficient, so you built a second engine in the back seat.
@@richardclegg8027 It's lovely to see you in comments, it happens so rarely with authors ❤️
Separation of layers violation. It means that every application that uses the protocol needs to have a full implementation of the protocol within it. Usually you'd get that from a library, but it's the resource footprint - you get all the memory overhead in every application. It also makes patching any bugs a lot harder, because so much of the code is in each application. In an ideal world, the OS would handle the transport protocol - as it does with TCP - and the application would access it with just a few API calls.
@@richardclegg8027 I wouldn't call it a hack, more like a *blasphemy* ;-)
I think this is a natural development of the tech.
We as humans don't have to respond to every single thing in a sentence, we can just sometimes say "alright", "ok...", etc.
I'm curious. Does Richard moonlight as an orchestra conductor?
good video, i just wish there was less reverb in the audio so that it's easier to listen to. he seems to wear a lav mic but it doesn't sound like it's audio was used.
No mic can completely filter bad room reverb... Sadly.
My office is just a bit echoy I am afraid.
I'm amazed he got through this whole explanation without once mentioning VPNs.
I started with HTTP in 1994 and wrote the http-gw for the firewall toolkit. PCs did not come with a network stack. Remember Trumpet Winsock, KA9Q etc.?
I love the way you explained the OSI Layers 😂
I had my Networking 101 lecture from M. Tuexen (author of SCTP), and his introduction was something like "You all might already have heard about the OSI/ISO model.. well.. that's not the way to do it.. most people don't give a sh*t above L4 in the real world.." 😂
Second City Transmission Protocol? 😎
Through this hack, QUIC can do what SCTP tried to do a decade or more before it and more.
The biggest deployment of SCTP is probably Telecom and in the browsers as part of WebRTC.
But I think SCTP in WebRTC will be replaced by QUIC as well; it might become part of WebTransport, though.
@@autohmae AFAIK Netflix does also use it within its CDN backend 👀
@@liminos did not know and I can't find stuff to confirm it, only small hints
So what will happen is that those middle boxes will now also inspect UDP packets, because they could be QUIC packets.
Which in turn makes QUIC a protocol that can't be changed, because middle boxes depend on it.
QUIC has a trick up its sleeve! Everything a middle box would maybe want to look at is encrypted, because QUIC has encryption built in.
@@autohmae oh yeah, ofc forgot that part. Nice.
What will also happen is that all the home routers with crap UDP implementation will start creating new connection issues for end users. We know these routers exist, we just don't know how widespread the issues are.
@@Ownermode It's by design; they could have left some TCP-like parts unencrypted, but they chose to encrypt all of it.
But QUIC is "just" UDP. If the middlebox allows UDP it allows QUIC and any UDP based variant.
03:25 The moment where a video from nine months ago is shown, in which you wear the same clothes as in today's video. And then there's always someone like me who is going to check other videos of you to see if you ever change your wardrobe or just have one set of clothes. Glad I did, because: a) You do change (love the cufflinks btw, a distinguished detail in clothing) and b), it gave me a chance to revisit your previous appearances on this channel, which are surely worth a rewatch.
Thanks. I do like the cufflinks. Lecturer pay does not quite run to a new outfit every video. ;)
It's the TCP+TLS stack that's the ugly one. Combining the security and SYN/ACK handshakes makes much more sense to me. It feels to me like it was UDP's unintentional destiny ;)
unintentional destiny protocol ?
I've heard it said that anyone who seeks to replace TCP will find themselves reimplementing TCP. :)
If I understand this correctly: since QUIC data is encrypted and running over UDP, only a middlebox with application-level inspection of packets AND SSL offloading/acceleration would break?
Exactly. In the company I work for we block QUIC. As far as I know there is no commercially available inline SSL inspection system for HTTP/3.
Middleboxes just usually pass on UDP as "not our business" because it is a minority of traffic. Apart from QUIC I don't know anything that uses TLS (or SSL) and UDP.
@@richardclegg8027 DTLS is a generic UDP version of TLS, which can be used by any UDP application that wants TLS
@@richardclegg8027 WireGuard
@@Knirin @autohmae hah - thought there probably would be some. :)
The challenge here is similar, architecturally, to what gets hit with large OO hierarchies with mutative private data: when you encapsulate too much in one layer, people begin making assumptions about that layer. In the future, refactoring that layer becomes intractable, and after enough time people start clamoring for a rewrite.
Avoiding that is a matter of composing lego pieces. Of course, with QUIC itself being a monolith, it's only a matter of time before new security concerns and new middleware layers make new assumptions, and everyone is back to square one, rebuilding FASTR or something.
This is why almost everything about QUIC is encrypted, so middleboxes can't interpret it.
I don't know how much to read into what you said, but if you're putting the transport layer into application space then get ready for a nightmare of incompatible 'improvements' over the years.
These improvements would be application specific, though, there is no need for compatibility between unrelated applications. Why would your app care how my app handles information exchange if neither clients ever connect to the other's host?
Absolutely get ready for improvements. Once this approach is taken you can tune a transport layer for other applications or roll your own QUIC improvements if the server and client both support it.
@@richardclegg8027 This is, pretty much, the HTML/Browser kerfluffle all over again, now on the Transport Protocol, instead of just the Application.
Even decades after that whole thing died, we still have to be aware of multiple coexisting solutions for the same thing that need to be coded "just in case the user is on browser X": I still remember cellpadding/cellspacing on HTML previous to 4.0, and let's not forget the newer issues on different JS versions/implementations
@@yoshienverde Oh god. I remember it so well. Lots of browsers implementing different standards. Awful. I think this can avoid that problem: it's only real power users who need to concern themselves. Even if you want to switch it on for your webserver (say), you don't really need to know the protocol version and how it works. A lot of your internet browsing changed to use QUIC and, as a regular user, you didn't notice (or maybe you noticed that things ran a little faster). As a sysadmin you notice if and only if you want to turn on QUIC for your webserver.
If it did branch into many, many versions with different APIs (very different protocols can share an API), people writing web browsers or web servers would have a problem.
sounds like a good way to bypass firewalls and security and all the benefits that middle boxes give. How did they get around that?
I feel like there are some areas where it could have been explained better why QUIC is a hack and what the "clean design" would be.
E.g. packet retransmission and flow control: in QUIC it can only happen when it is already too late, whereas with TCP, since it was embedded in the middle boxes, they would have taken care of the problem beforehand.
A new standard would have to be created; until then QUIC (or whatever successor might come) is the option.
Wow, that's a good explanation, thanks!
I would like to add another point. Today, one of the bottlenecks is the kernel. Moving this heavy work to the application layer (through UDP/QUIC) bypasses that bottleneck. Of course there are already solutions for this problem, like Intel's data plane kit (DPDK), but QUIC is surely the easiest one to deploy, since it's a software/library implementation.
Once again, thanks for the talk. The diagrams helped me a lot.
Yes, let's build reliable transmission and multiple streams on top of UDP. We can call it TCP... I mean, HTTP/3.
Oh, I was wondering when the protocol to open the gates of hades got an RFC...
it's over 9000
Would it not make more sense to develop the standards for an entirely new transmission protocol now, and just not implement it until 10-15 years down the line when all the firewalls and «middle boxes» are updated to handle it?
While I also like the idea, I think it would just take too long, just look at the IPv6 adoption.
Also, a new standard would need to support encryption to really save some unnecessary round-trips. Until the new standard is widely adopted the supported encryption methods may already be outdated again. I'm not sure if it would be possible to define the protocol in a generic way such that future TLS versions would definitely be supported.
No-one running those middle boxes would have a reason to support the new protocol, because there's no-one using it to demand that investment. That's why IPv6 adoption took /decades/ to make any progress at all. Why would any ISP waste money and training time on implementing a protocol when everything works fine without it?
I disagree, all the major players are invested in updating networking.
Have about 20% more understanding after this. I enjoyed the trip, but network stuff has always been my weak point.
UDP was always lighter than TCP, so it was easier to compress data and make your own implementation of security, congestion control etc., since you can compress that data too. A lot of video games use it as the basis for their protocols.
The only hard thing about this is that you need both the server and the client sides to be aware of your implementation.
@@rich1051414 The maximum size of a UDP payload is 65507 bytes; the MTU of ethernet is usually 1400-ish.
If we are talking about the minimum reassembly buffer size, it's the same for all of IPv4 (576 bytes) and doesn't depend on what protocol you are using.
Given that TCP carries more header info, AND with a constant buffer size, it actually leaves less space for payload in that case, so the "safe payload" would be less than 508 bytes for TCP. Also, the safe payload by definition cannot be bigger than the MTU for a single packet, which is 1500 minus 20 for ethernet (if we don't fragment packets).
Using "safe payload size" in this sense is almost meaningless, given that almost every channel these days has a 1400+ MTU. (The arithmetic is worked through just below.)
Edit: grammar
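For anyone who wants to check the numbers in this thread, here is a quick back-of-the-envelope sketch in Python (assuming IPv4 with minimal headers unless noted, and no fragmentation):

    ETH_MTU = 1500
    IPV4_HEADER = 20          # minimum IPv4 header; up to 60 bytes with options
    UDP_HEADER = 8
    TCP_HEADER = 20           # minimum TCP header; up to 60 bytes with options
    MIN_REASSEMBLY = 576      # smallest datagram every IPv4 host must accept

    print(ETH_MTU - IPV4_HEADER - UDP_HEADER)   # 1472: max UDP payload per ethernet frame
    print(ETH_MTU - IPV4_HEADER - TCP_HEADER)   # 1460: the typical TCP MSS
    print(MIN_REASSEMBLY - 60 - UDP_HEADER)     # 508: the conservative "safe" UDP payload
                                                # (60 = worst-case IPv4 header with options)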
That is why Google pioneered it. Chrome browser and ownership of prominent websites was the dream "we have the server and the client" scenario
This compression thing is nonsense
@@thewhitefalcon8539 Warframe's netcode developers managed to get 82% compression with both payload and congestion control over UDP, bringing packets down to 55% of the weight of the original uncompressed UDP packets
How long until the middle boxes are going to interpret QUIC and we're back at the same ossification problem?
QUIC was designed to prevent that: it has encryption built in, and they encrypt everything so that the middle boxes can't see anything anymore.
@@autohmae The first packet from a client is not encrypted. A smart router can use it to distinguish a QUIC packet from other UDP packets and record the IP addresses of the client and the server, or just block the connection. Anything else would require a MITM attack with access to a legit CA certificate's keys, and it would take quite a powerful hardware-SSL-accelerated router to implement. No other metadata is openly transmitted.
@@Kirillissimus It's true that it takes the first (few) packets to establish the encryption, just like previous versions of TLS/SSL. Every packet after that in a QUIC connection is recognizable as a QUIC packet and has a connection ID. One important detail: that connection ID will be different when multipath or a QUIC load balancer is involved, so it can't be correlated whether two packets are from the same connection or not. QUIC works despite NAT, and NAT can change IPs and port numbers, so those can clearly be changed, as long as they are changed and restored for all packets in a connection in the same way (for example: change the sender's IP address on a client-to-server packet, and restore the client's IP address as the recipient on the packet from server to client).
@@autohmae I even saw somewhere that QUIC supports connection resumption without a full re-establishment procedure with key exchange and everything. I don't know for sure if it is part of the latest specification, but if it is, it will make the job of monitoring such connections even harder, since you would need to share all the MITM-related data with other routers and maybe even other ISPs, and store it for who knows how long.
P.S. Thanks for the info about connection IDs, I completely forgot about them.
@@Kirillissimus Yes, QUIC supports 0-RTT resumption. If you reconnect within a few minutes (not sure of the precise time) you just "connect" with no handshake. You can see it in Wireshark if you try: take a capture of a connection to a website you know uses QUIC, close the webpage and then connect again, and you will get a "no handshake" connection.
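To make the connection ID discussion above concrete, here is a rough Python sketch of pulling the connection IDs out of a QUIC long-header packet, following the RFC 9000 wire layout (this ignores version negotiation, and note that short-header packets carry the destination connection ID without a length byte, so an observer must already know how long it is):

    def parse_long_header(datagram: bytes):
        first = datagram[0]
        if not first & 0x80:                          # header form bit clear
            raise ValueError("short header: DCID length is not self-describing")
        version = int.from_bytes(datagram[1:5], "big")
        dcid_len = datagram[5]
        dcid = datagram[6:6 + dcid_len]               # destination connection ID
        off = 6 + dcid_len
        scid_len = datagram[off]
        scid = datagram[off + 1:off + 1 + scid_len]   # source connection ID
        return version, dcid, scid

Roughly speaking, this is all a middle box can read; everything past these fields is encrypted.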
Can QUIC solve the problem of ads constantly popping into a webpage while I try to click a button from my phone?
The new problem it creates is that IDS/IDP software can't inspect webpage content for malware (or ads to block) at the firewall/gateway because all the different content component streams are inside an encrypted session. Before, all the different components of a webpage were separate streams and the URL of each component could be inspected even if the payload was encrypted. Now, it's all hidden.
This forces IDS/IDP inspection to the user endpoint instead of the company gateway. A real challenge when guest networks are provided where IDS/IDP software can't be mandated.
We've sacrificed security for speed.
👍👍!
But isn't that a problem with HTTPS and E2E encryption in general?
Also, it gives us some resistance to censorship. I'm not saying everything should be uncensored, but it's nice to have freedom and trusted links.
Hey, at least the new protocol on top of UDP could fix the TCP issue where malicious packets (forged resets) can close any connection, no matter whether there is TLS traffic going over the TCP connection.
Or we've sacrificed IDP for privacy 😉
There is of course SCTP, which is awesome, except for all the @$!#$! NAT boxes breaking it... IPv6 solves that problem, but adoption is slow....
IPv6 adoption rises about 5% per year, and it's at 40% now, so roughly 10 years to get to 90%
I don't think IPv6 will get SCTP to pass all middleboxes though. May be wrong.
@@richardclegg8027 It will unless it is manually blocked by a router's firewall somewhere near the endpoints of your connection. Basically connection will still not be guaranteed but the chance of success should get significantly higher.
@@richardclegg8027 You get rid of NAT... (And most firewalls can pass unknown / specific protocol numbers) (NAT needs to understand protocols - it needs to mess with port numbers)
@@GertvandenBerg yes. If you got rid of NAT, loadbalancers, traffic shapers, management boxes and so on the internet would be back closer to its original design spec and you could implement new transport protocols.
This video treats UDP as much worse than it is. What happened here? Computerphile is above this!
Not to mention that calling HTTP/3 a hack, because UDP is unreliable, is like calling TCP a hack because IP is unreliable.
Something that has puzzled me: formats like DVDs had error correction. They were probabilistic, and as soon as you had enough samples to cover a set of data you could ship it. In return they took a hit on the data rate.
IP had no redundancy. For TCP you had to wait for every packet, and with UDP you just didn't care. Couldn't there be something in between? A protocol that could handle occasional packet loss, with the level of redundancy dynamically tuned based on line quality? Further, if you send an image over TCP, it is guaranteed to come in pixel-perfect, which is a waste. It would be nice to be able to mark sections of data as more UDP-like so they can come in faster.
If this is exactly what the new protocol is doing, sorry for reinventing the wheel.
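What's described here exists and is called forward error correction (FEC). A toy sketch of the simplest version in Python, assuming fixed-size packets and one XOR parity packet per group, which lets the receiver rebuild any single lost packet without a retransmission (real schemes such as Reed-Solomon trade more redundancy for recovering multiple losses):

    GROUP_SIZE = 4
    PACKET_LEN = 1200   # assumed fixed payload size, for simplicity

    def make_parity(group: list[bytes]) -> bytes:
        parity = bytearray(PACKET_LEN)
        for pkt in group:
            for i, b in enumerate(pkt):
                parity[i] ^= b
        return bytes(parity)

    def recover(received: dict[int, bytes], parity: bytes) -> dict[int, bytes]:
        # rebuild at most one missing packet in the group from the parity
        missing = [i for i in range(GROUP_SIZE) if i not in received]
        if len(missing) == 1:
            rebuilt = bytearray(parity)
            for pkt in received.values():
                for i, b in enumerate(pkt):
                    rebuilt[i] ^= b
            received[missing[0]] = bytes(rebuilt)
        return received

    group = [bytes([i]) * PACKET_LEN for i in range(GROUP_SIZE)]
    got = {0: group[0], 1: group[1], 3: group[3]}     # packet 2 was lost in transit
    assert recover(got, make_parity(group))[2] == group[2]

Tuning the group size against measured line quality gives exactly the dynamic redundancy/data-rate trade-off described above.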
The bigger problem with this is that, if you're being probabilistic about what data is being piped to someone, and they can afford some parts of it to not be accurate...what do you do when that inaccurate data is executable code?
@@ZT1ST Then you don't use a probabilistic protocol? DVD equals music? Not computer code? What the heck are you talking about.
@@paulpinecone2464 DVDs aren't for computing code? *Stares at PS2 consoles, Xbox 360 consoles, and PC DVD autorun.exe solutions* Someone should tell them that.
Very good explanation.
I like his style of teaching.
The way he moves his hands reminds me of the great Sir Martyn Poliakoff.
I guess the obvious question is: what happens when the middle boxes start to make assumptions about the QUIC packets?
If firewalls start to look into this, how will the next changes be implemented?
When the problem becomes "we can't change QUIC because of the middle boxes", what will happen?
New http version! Yay!
HTTP/3 looks suspiciously like what came before it (HTTP/2), though; nothing really changed except how the data is transported (a different kind of streams, using QUIC with TLS 1.3).
@@autohmae Upgraded security with lower latency are two things that are always worthwhile.
"Is it beautiful? Is it ugly?" No, it is necessary.
QUIC really is a big deal.
It's a dirty hack but sometimes that's what you need to fix a problem.
Curiously when working with Unreal, they basically implemented their network stack in a very similar way.
What's with the super ornate picture of Strong Bad in the background?
I have a habit of looking at the bookshelves of people in YouTube videos, cherry-picking ones that look interesting (in this case: Origami 3, Big Queues), and adding them to my Amazon wishlist. Yes, I have a problem (admitting it is the first step to recovery)...
A person could implement TCP as a fun little project. Implementing QUIC, since it contains TLS 1.3, would be rather time-consuming. Presumably one could try to just ignore the TLS part? I don't know enough details to tell if that even makes sense.
You can just bring in a standard open-source TLS 1.3 library if you are implementing it for fun. Actually it is probably easier than TCP, as you are implementing in user space, so you don't need to fiddle with the kernel.
You can't, sorry. One of the fundamental aspects of QUIC is a hard requirement for always-on encryption - i.e., TLS 1.3 is absolutely necessary.
That being said, if you want to play around with it just for fun, there are plenty of high-quality libraries that handle the really nasty details of TLS - namely the encryption primitives. Look into NaCl/libsodium, for example.
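For instance, a toy sketch using PyNaCl (the Python bindings for libsodium), just to show how small the primitive-level code is; note this is a pre-shared-key toy and nothing like the TLS 1.3 handshake that real QUIC requires:

    import nacl.secret
    import nacl.utils

    # a 32-byte key that both ends would somehow have to share in advance
    key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
    box = nacl.secret.SecretBox(key)

    ciphertext = box.encrypt(b"hello over UDP")   # a random nonce is generated and prepended
    assert box.decrypt(ciphertext) == b"hello over UDP"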
That's what I'd call a workaround. Standard practice in IT ;)
Of course. Workaround, hack... It is not what the original design intended is my point. You abuse the system and make it work.
@@richardclegg8027 Thank you for answering my comment :) Very good video as well. It's so common to have to do some workaround; very interesting to learn that HTTP/3 is basically that!
The network model is 8 layers: the usual 7, plus the one that causes the most problems, layer 8... the user.
layer 9 and 10 are Budgetary and Political layers, office policies and religion, etc. depending on who you ask their definitions will differ.
What's going to prevent QUIC from becoming another "middle box" problem or ossification problem? It reminds me of the "god class" in OOP. Ultimately not a great idea, just a hacky novelty workaround that comes with its own set of brand new problems.
Brilliant! And a hack. I absolutely love it when smart people think their way out of a dilemma using available resources.
I took a course on networking at Uni but haven’t been remotely involved in it since so I understand enough to get it. Brilliant! But a hack. 😂
If the TLS is handled by QUIC then isn't what comes out the top regular old HTTP?
Why does UDP allow the application layer to take on more of the decision-making, whereas TCP does not? Great vid, thanks!
QUIC may know more things about the application.
TCP will ensure that you get all the data. So it asks for missing packets and will wait until it gets them. In a video call this may mean a 1-second delay between two frames. You will notice this. QUIC may assume that you won't notice a missing frame, and will neither wait for the missing IP packet nor ask for a new one.
This may not sound like a huge thing for a single user. But if your connection is bad and asks for 5% of the packets again, that means at least 5% more traffic for resent packets.
5% more traffic is not much on the client side. But in a data center under high load it is really expensive. You need more bandwidth, more CPU and RAM for the boxes, more cooling, ...
The networks cannot handle that 5% of extra traffic any longer; they are at 98% of the actual maximum. They have either to reduce the traffic or to sit and wait for new hardware. Brotli compression and HTTP/3 will help them a lot. And maybe this will reduce more traffic than new cloud apps and higher resolutions add.
@@crumblethecookie6118 What a brilliant answer. Thanks!
Using UDP has the benefit of saving the resources required for maintaining session information for TCP. How does QUIC compare since it is now handling things? Is the load on a busy web server higher, lower or about the same?
CPU load is higher. Remember QUIC is doing everything TCP does -- it is maintaining session state -- it's a reimplementation of TCP at the application layer with some extra bits. However, it's implemented in user space, not kernel space, so it uses a bit more resources.
This man's disrespect for UDP is disgusting. One has to be blinded by prejudice not to realize that it was left there, before and in anticipation of ossification, specifically to allow you to build your own protocols on top of it once arbitrary traffic ceases to be an option. The man spent half the video explaining the problem solved with UDP, a quarter describing the solution using UDP, and another quarter venting his prejudice against UDP.
Nice video! Could you do one about connection pooling?
For almost 15 years, applications that need low latency and high reliability have used frameworks such as Apache Mina, Netty or Grizzly to build software-configured network stacks. The reason is simple: legacy on the networks. The network guys do not want to evolve the network stack. This ends with app developers using UDP and building whatever they need on top to work around it. Nothing new, just another application going down the same road with its own protocol. Here it is HTTP, and the apps are web browsers and web servers.
You took the long way round to get to it.
You can enable QUIC in Chrome through the flags page. If you don't know what that is, then maybe don't mess with it.
How do I turn off the captions in Italian?
I still don't understand why QUIC is needed when we have TCP. Was TCP not good enough?
TCP is perfectly fine, but you can't really improve on it. So if you know you are in a realm where optimizations are possible by making some assumptions, QUIC + UDP is the answer.
HTTP/2 added support for multiplexing, meaning you send multiple data streams over a single connection. But TCP simply isn't well suited to that, as it does not allow out-of-order delivery. Maybe some packets got lost for one data stream but not the other: you're totally out of luck, and have to wait for the earlier data of the other stream to be retransmitted.
I think it's because one TCP connection only supports a single stream so doing lots of concurrent operations (e.g. to retrieve the many bits of a website) either requires lots of parallel TCP connections or introduces a lot of latency (if lots of requests are performed sequentially). Lots of TCP connections requires a lot of server resources and doing operations sequentially makes websites take longer to load.
QUIC supports multiplexing of streams so a single QUIC connection can support many operations at the same time.
I mention it quite briefly. Lots of small things: getting the connection open quickly by combining TLS with the transport handshake, allowing receipt by the application of out-of-order packets. None of these are in standard TCP.
@@richardclegg8027 So normally, where we were taught that UDP has disadvantages compared to TCP, it said:
1. Packet delivery is not ensured; that's on you, the developer, to ensure a packet was delivered and to decide what to do if it's dropped.
2. Packets may come out of order; that's on you to reorder, and to define timeout parameters for (1).
Based on these, the solution I would build is a version of the sliding window protocol where the receiver has a window size > 1 (a minimal sketch of this follows after this comment).
Is that everything that QUIC is? A standardised way of doing just that? So that, instead of writing all of that into your apps, you can just let QUIC do the work for you?
Or are there more things, like congestion control etc., that it does on top of this? I didn't really understand that part, because if it does that kind of control, that would mean it's aware of a connection or packet?
Although in my networks course they didn't touch multiplexing and such, so I don't know what protocols are used there or how it's generally handled. Apparently from the above comments it seems QUIC was targeted mostly at multiplexing; some issues arise for TCP there, so they used UDP. Is it to standardise performance? To add a layer of abstraction that makes it easier for devs to use UDP? All of the above?
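For what it's worth, the reliability core described above really is this small. A minimal sketch in Python with sequence numbers, acks and timeout-based retransmission; the window size here is 1 (stop-and-wait) to keep it short, and the receiver address and ack format are made up:

    import socket
    import struct

    DEST = ("127.0.0.1", 9999)   # hypothetical receiver that echoes back the seq number
    TIMEOUT = 0.5                # seconds to wait for an ack before retransmitting

    def send_reliable(sock: socket.socket, chunks: list[bytes]) -> None:
        sock.settimeout(TIMEOUT)
        for seq, chunk in enumerate(chunks):
            packet = struct.pack("!I", seq) + chunk   # 4-byte sequence number header
            while True:
                sock.sendto(packet, DEST)
                try:
                    ack, _ = sock.recvfrom(4)
                    if struct.unpack("!I", ack)[0] == seq:
                        break                         # acked: move to the next chunk
                except socket.timeout:
                    pass                              # lost packet or lost ack: resend

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_reliable(sock, [b"part1", b"part2", b"part3"])

QUIC is essentially a standardised, heavily engineered version of this, plus congestion control, stream multiplexing and TLS 1.3, so applications don't each have to reinvent it.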
A faster train on the same tracks. Replacing the tracks is going to be inevitable, but this buys us time. I'll naively call this beautiful while deluding myself into thinking it will get dropped immediately when it's no longer needed and won't cause any issues in the process.
The tracks will definitely stay until they naturally rot away, and if something is not compatible then no one apart from some small batch of nerds will use it. This is how businesses, and human civilization in general, operate. It will take many decades until it is possible to use protocols other than TCP and UDP, and IPv6, everywhere. But we need faster connections right now, not when our slowpoke system is finally ready for it.
It's not even such a hack. TLS is already a transport protocol running on top of a transport protocol. Basically, they've just taken DTLS (TLS-over-UDP) and stuck retransmissions and multiplexing (aka a header with a stream number and a packet number) on top of it.
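Roughly that framing in Python, as a sketch; the fixed field sizes here are invented for illustration (real QUIC uses variable-length integer encodings):

    import struct

    HEADER = struct.Struct("!IQ")   # 4-byte stream id, 8-byte packet number

    def frame(stream_id: int, packet_number: int, payload: bytes) -> bytes:
        return HEADER.pack(stream_id, packet_number) + payload

    def unframe(datagram: bytes) -> tuple[int, int, bytes]:
        stream_id, packet_number = HEADER.unpack_from(datagram)
        return stream_id, packet_number, datagram[HEADER.size:]

    datagram = frame(stream_id=3, packet_number=42, payload=b"GET /")
    assert unframe(datagram) == (3, 42, b"GET /")

The per-stream numbering is what lets one lost datagram stall only its own stream instead of the whole connection.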
It's a good thing that they thought of creating UDP - but it's probably not a coincidence.
Today's hacks are tomorrow's standards. It's sort of always been that way.
What is that picture scroll?
I didn’t get the ossification problem; is there any good resource to read about it?
The idea behind separating the stack into distinct layers was to allow the layers above to change without any effect on the layers below. So if you are building at the physical layer, you don't care what the network layer looks like; it's just bits. At the network layer you should be able to ignore the transport layer; it's just payload and shouldn't matter. So new transport layers should be able to be added without any change to the network layer, since it is just a change in IP payload.
The problem comes when the network layer equipment isn't content treating all payloads the same and starts reaching into the next layer of the stack to make optimizations or assumptions that make its job easier. Once it does that, the clean separation of layers is gone and changes to the layer above can cause things to break. For instance, if your router is running QoS, it is snooping on the transport layer to see what type of traffic it is and decide what to prioritize. If it receives a packet with a transport layer it doesn't recognize, it is just going to throw it away (probably assuming it is malformed).
If every IP packet were treated identically by every hop between you and the server, regardless of protocol, then you could throw any sort of new protocol on top and know it would work. That is not what happened, though, so now we are basically stuck with what we have, or waiting 20+ years for full adoption of a new protocol.
@@zenmaster76 that's insightful thank you.
Oooohhh... a new source of nightmares for future debugging-deployment nights ^^
6:25 > _"root-ers or r-out-ers (en-us)"_
Okay, thanks for clearing that up! Root-ers it is for me from now on.
routers pronunciation - european vs american
I tried implementing a TCP-like protocol over UDP, and what I found is that Windows sucks at scheduling UDP from a user-mode application. My destination device had a limit on how fast UDP packets could be received without losing packets. It would be nice if there were a way to send UDP packets at a scheduled time.
You can implement how you schedule UDP in the application if you so choose. Are you sure the limit at your client was not simply the network?
@@richardclegg8027 the limit was more with the device I was having the PC connect to. The problem I ran into was that I had to do a 1ms sleep between each UDP packet that was sent.
@@LaserFur oh dear. You're never going to get any performance with a huge 1ms sleep between packets for sure.
@@richardclegg8027 It's been a while; maybe Windows supports microsecond sleep intervals now. Windows also removed raw socket support.
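One workaround people reach for is pacing sends with a busy-wait on a monotonic clock rather than sleep(), whose granularity on Windows was historically around 1 ms or worse. A sketch (the interval and destination here are made up):

    import socket
    import time

    INTERVAL = 0.0001   # assumed target: one packet every 100 microseconds

    def paced_send(sock: socket.socket, dest, packets) -> None:
        next_send = time.perf_counter()
        for pkt in packets:
            while time.perf_counter() < next_send:
                pass                     # spin: burns a CPU core but avoids coarse sleeps
            sock.sendto(pkt, dest)
            next_send += INTERVAL

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    paced_send(sock, ("192.0.2.10", 5000), [b"x" * 1200] * 100)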
If it's stupid and it works, it isn't stupid.
Wehoooo Queen Mary University of London
How can the sender know if the packets didn’t reach the destination if there are no acks?
There is an ack (if needed); it is just handled at the application layer (QUIC) instead of the transport layer.
All the TCP header mechanisms are in the UDP packet data: SYN, ACK, FIN etc. It does what TCP does, using similar mechanisms, with the header carried in the UDP payload and the endpoints in the application layer rather than the transport layer.
Please do full ipv6 video
I would say it's a beautiful workaround.
To say it's horrible, you have to know your way around the monolith itself, to show us another way it could be done.
Otherwise, carrying planes on ships, or HTTP on QUIC [on UDP], is just a beautiful workaround to me.
Excellent video, but I still think the web needs a massive rethink. Even the way web pages are written. It's all a bodge on a bodge. We could do with a web 2.0.
I think we would be on web 4.0. :-) People call this a "clean sheet" design, I mention it in the video -- delete the internet and start again removing all the bodges.
Nothing about HTTP3 was in the video
Played with the QUIC implementation in Nginx and it worked just fine (installation wasn't straightforward, though). It's still beta, so it may not be suitable for production environments just yet.
As usual, blame OpenSSL: if OpenSSL had implemented the needed changes earlier, we would have had it in Linux distributions etc. much earlier.
If TCP is still being used as a fallback, why not develop a TCP 2.0 and treat TCP as the fallback? Is it just that a TCP 2.0 would hardly ever work with the current network hardware in the wild?
This video's brilliant! Thanks 🙏
Chad UDP saving the day