If I had a nickel for every time quadratic complexity passed testing but blew up in prod I'd be rich
What if every time quadratic complexity passed testing, you got a nickel for every time quadratic complexity passed testing?
If I had a nickel for every time I got a nickel...
If I got a nickel for every time someone says "If I got a nickel" I would be a gabbagillionaire!
This is why we have and should have computer science and not just computer engineering - math is important.
This is exponential, not quadratic. If it were quadratic, there wouldn't be any problems.
"Some programmers run into a problem and think, 'I will use regex to solve this!' Now they have two problems."
- Zawinski
2:51 I like how the 1000x engineer just foreshadows all the events that are about to happen, and then approves the change.
par for the course tbh
Yeah average senior software engineer moment tbh
@EgonFreeman From experience... had they spoken up, they would almost certainly have been met with one of "who asked you?", "shut up, we're doing this!", or just ignored. Dozens of others didn't notice any issues, so who's going to listen to "that guy"?
a true 1000x enginerd would not use regex lol
@@jfbeam Sounds like the culture in your organisation is arse. A "let's be careful" should be taken seriously, even coming from a day-1 intern.
Despite all this, I still very much love Cloudflare especially because of their transparency. They always go into great depth explaining what happened, what they did, and how they resolved it.
Many companies can learn a thing or two from them in that regard. Customers tend to have more faith in a company that just owns up to its mistakes rather than trying to have a PR department cover it up in nice words.
@@Kabodanki Captcha? Google the "Privacy Pass" extension. It lets you skip the tests by doing tests beforehand.
@@StkyDkNMeBlz they don't have captcha anymore, they use turnstile
I love my state sponsored man in the middle
I hate cloudflare because its trying to become monopoly on internet ethics. Its not your job to pass judgement what is allowed on the internet. Banning something that is illegal is fair. But banning because its immoral according to them... yea I hate cloudflare.
@@StkyDkNMeBlz I really want to be tracked by Google across the internet with their corpo issued cryptographic IDs. You do understand what you are shilling?
I guarantee all the engineers who reviewed it didn't even look at the regex. You don't go poking someone else's regex.
Or even your own if you wrote it more than 1 week ago.
Lol I hate that I’m relating to this😂🤣😂
2:37 laughed my ass off at "delete master after the pull request is merged"
and there are people that would click that checkbox.
Chill, "master" is the dev branch here while "main" is the master branch of course.
@@Blast-Forward Let's just agree that it's a funny easter egg in the video and laughing is justified.
Glad to see others had seen that too 😂❤
Yeah that was such a nice touch!😂
I was actually at Cloudflare in the room for Cloudbleed and this issue, in SF for Cloudbleed and happened to be in London for this one. The real story is much better than this. We were at lunch doing a tech talk in the lunchroom when someone grabbed the mic and announced we were having a P0. We stampeded back to our desks and got to work fixing it. The issue was obviously related to the WAF from the start and it was just a matter of cleaning up. Keep up the videos they are great
So what's working at Cloudflare like? :)
@@Nayayom It's actually really awesome. Tons of really smart, kind, curious people. Everything internally is about transparency, execution, and learning. Definitely engineering-centric, but also super product-focused in that the customer is always considered during meetings/talks/decisions.
@@0xggbrnr sounds like a good place to work! Glad to hear that
Ok, but it's not BETTER than the video story lol
What happened to the employee who made the regex?
I work in a cybersecurity administration space where regex is used all the time as a necessity. This is a story we tell people all the time to make sure they understand how important it is to make efficient regex.
The regex is fine; there was no reason for the engine to backtrack on it
@@Wyvernnnn I disagree. The pattern didn't make much sense. It was clearly missing something between the initial wildcard and non-capturing group. There's never any reason to put two wildcards next to each other like that.
@@Zei33 Yeah that was weird, but it should still be O(n) in the end, that's the whole point of regular expressions (as long as you don't have backreferences to capturing groups, which is what forces backtracking)
I can understand how it happened, even an experienced programmer can struggle to parse a regex by eye and it's easy to make something that's a resource hog without realising. It's certainly a lesson to test your regex thoroughly before release.
@@Croz89 oh yeah I’m constantly making mistakes. No one is perfect, that’s why debugging and beta testing exists. After over a decade of programming, I don’t make a lot of mistakes, but when I do they’re usually obscure cases or subtle logic errors. When you’re working with tens of thousands of lines of code and looking at them for 8 to 20 hours a day, mistakes are gonna happen.
Love the little details, like the upside down cloudflare icon in Australia. Good job editing!
0:36 if anyone missed it
2:37 'Delete master after the pull request is merged' xD
It's important to note that re2 actually has other downsides compared to other regex engines, such as being unable to handle lookaheads and lookbehinds. This isn't just an implementation issue either: adding these operations actually makes regex strictly stronger than a finite state machine (instead it becomes a pushdown automaton). There's also a lot of fun math with finite state machines, where it turns out they're strictly equivalent to generating functions, which are basically power series where you don't care about convergence!
I think "look-around" assertions could still be implemented to run in linear time. As far as I know, back references is the only feature that can make the run time go exponential. In fact, matching regexes with backrefs is proven NP-hard.
Lacking some of the more advanced PCRE features in order to make guarantees about the maximum runtime seems like the right compromise to make for a high-volume security frontend that sits between the global population and a large swath of the internet.
@@hoo2042 That's actually why Russ Cox developed RE2 in the first place. He made it for Google Code Search (now defunct). You can't really be Google and expect tech people to only input well-behaved regexes. He has a very interesting series of articles named "Implementing Regular Expressions". I really recommend that every developer read them.
shouldn't regex expanded like this be called cfex instead, since it's, well, no longer a regular expression
@@hoo2042 The problem now is that they use it in every Google product. re2 is the regex engine of BigQuery, and I'm stuck with these limitations. It doesn't make sense in a data warehouse.
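To make the backreference point above concrete, here is a toy sketch with Python's backtracking `re` module (the pattern is purely illustrative): the language "a^n b a^n" is not regular (pumping lemma), so no DFA/NFA can recognise it, which is exactly why linear-time engines like RE2 reject backreferences outright.

```python
import re

# Matches a^n b a^n: the same number of a's on both sides of the b.
# The \1 backreference makes this language non-regular, so DFA/NFA-based
# engines (RE2 and friends) refuse to compile it.
pat = re.compile(r"^(a+)b\1$")

print(bool(pat.match("aaabaaa")))  # True:  3 a's, b, 3 a's
print(bool(pat.match("aaabaa")))   # False: the counts differ
```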
Worth mentioning that Cloudflare isn't just a CDN. It's predominantly used by most websites as a web proxy responsible for the majority, if not all, of requests to the origin.
The web proxy is a CDN, last I checked
Users can elect their own DNS service
@@emeraldbonsai Typically a CDN serves static or at least mostly static data. A CDN may be implemented as a caching web proxy, but a web proxy can do a lot more than what usually falls under the definition of "CDN". In CloudFlare's case, they basically offer both and blur the line about which is which, which is fine since it's a blurry line, but the person you are replying to isn't wrong.
@@VeggieRice DNS has nothing to do with what's being discussed here (aside from being an earlier step in the chain that would take you to the page's configured web proxy or CDN, of course, but equivalently so to saying "the user can elect their own browser").
The distinction is useful here, as the outage is much more impactful if your web pages won't even load themselves (because the web proxy is down) rather than just CDN assets not loading (which could be only large assets).
"like re2, which work by converting regex to a state machine, or fancy computer science flowcharts"
Damn, I wish I could say this line to the professor who teaches the compiler course at my university lol
I had straight Vietnam flashbacks when the state machine came up lmao
State machines were the most useless thing I learned 2nd year in "computational theory" class. Whole class was academic fluff.
@@skyhappy FSMs are used everywhere, they're the basic building block of most digital protocols and embedded systems. Definitely not "useless".
I call them spaghetti meatballs. Feel free to use that one.
@@skyhappy average framework enthusiast with no understanding of computer science.
These stories are so cathartic. Thanks for applying your storytelling to these niche topics!
This is why as a general rule I NEVER use .* in my regexes. If I want to match everything before an equals sign, I'd use [^=]*= rather than .*= because it's always better to be as explicit as possible.
But that would match just the first '=', not all of them. If you have a lot of parameters on a URL, you will have a lot of '=' and you will want to search all of them for certain things.
@@framegrace1 That's why you don't anchor the expression to the end of the string in this case. We don't care what else is at the end of the URL if we find a "bad thing" near the start. Also, most regex engines have a shortcut implementation for regexes ending in ".*"/".*$", so the one at the end is of no concern.
And BTW, the issue was mostly the ".*.*", not so much the ".*=". Backtracking the latter isn't so expensive; it doesn't really matter whether the engine searches for the = from the start or the end of the remaining string. It most likely has a shortcut for "fixed character after match all" anyway. There's a good chance that ".*?=" is faster than "[^=]*?="/"[^=]*=", as it can scan the string using a simple "equals" comparison and be done. This, however, all goes out the window once there are multiple ways to match, like the infamous ".*.*". So when using this optimisation on purpose, it makes sense to manually commit after the "=" (e.g. with "(*COMMIT)").
@@framegrace1 You can still get the last '=' by being more explicit: /([^=]*=)*=/ or /=[^=]*$/
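For anyone who wants to see the blowup these comments describe, here is a rough timing sketch using Python's backtracking `re` engine (input sizes are arbitrary and absolute times depend on your machine):

```python
import re
import time

redos = re.compile(r".*.*=.*")  # the two adjacent greedy wildcards from the outage
safe = re.compile(r"[^=]*=")    # explicit form: one forward scan, no useful backtracking

for n in (2_000, 4_000, 8_000):
    s = "x" * n                 # worst case: no "=" anywhere, so the match must fail
    t0 = time.perf_counter()
    redos.match(s)              # tries every (i, j) split between the two .*: O(n^2)
    t1 = time.perf_counter()
    safe.match(s)               # fails in linear time: no position can satisfy '='
    t2 = time.perf_counter()
    print(f"n={n}: backtracking {t1 - t0:.3f}s, explicit {t2 - t1:.6f}s")
# Each doubling of n roughly quadruples the backtracking time on this input.
```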
upside down cloudflare logo for australia was gold
my brain automatically shut down when you started explaining the regex...
Same here dude. I can't understand how people are still relying on regex for such important aspects of the code. It's just mind-blowing that a firewall rule is managed with that in 2023.
@@TMRick1 what alternative is there that is universally supported and has the same level of flexibility for how "compact" it is?
@@TMRick1 you just don't know the pain of using anything else to do what regex can do. What do you suggest? awk?
lol just write your own parser. You are acting as if that's a hard problem to solve and as if customers are not important. You just want to make your lives as "programmers" easier. Have some responsibility for the unnecessary amount of code that runs on users machines.
@@Игор-ь9щ I use arch btw lol
They cannot be DDOSed, but they can DDOS themselves...
Someone wrote a paper ages ago about backtracking vs non-backtracking regex engines and the state of software slowness...
The title is "Regular Expression Matching Can Be Simple And Fast (but is slow in Java, Perl, PHP, Python, Ruby, ...)" written by a Russ Cox in 2007. I bet he's feeling vindicated
That article, and several others, should be mandatory reading for anyone using regular expressions.
In other words, overengineered frameworks are bad for computing
3:50 Putting Cloudflare upside down above Australia was a hilarious touch!
It's amazing how casually things are actually handled behind the scenes in the IT world. I once wrote some software for a bank, did a 3 hour audit of the code with 5 of their top developers, after which they installed a pre-compiled earlier test version on their prod system. smh
😂
I once fixed a globally crashing iOS app by hacking the backend to send out technically incorrect data. The app passed all tests because the test suites didn't include any data to reveal division-by-zero bugs.
This was especially bad since the time to get Apple to review and deploy an updated version could take a week or more, IIRC.
After conversing with the dev responsible, I asked him how he handled fractional numbers, and he was sure that fractional numbers were always displayed as integers, so I changed the API to send instances of 0 as 0.001, effectively circumventing the bug while displaying calculated numbers (and 0s) correctly in the app.
I think it's the most hacky fix I've ever deployed. It felt terrible and exhilarating and awesome all at the same time 😂 I'm actually a little proud 😇
@@DanielSmedegaardBuus That's awful, I love it.
this is my new favorite channel. explaining everything clearly, and being humorous with small jokes here and there
And explosions, lots of explosions.
Same. And the upside-down cloudflare logo on Australia killed me. 😂
I work as a sysadmin, yet I wish I had this much insight into how all the technologies I use daily actually work. I never finished college, so I had only ever heard of DFAs, but despite that, your video explained it very well, and showed how much of an issue a simple regex can be when executed thousands of times a second.
Please make more videos, I cannot wait for more
It is interesting to note that any regex can be represented as a nondeterministic finite automaton (NFA) and any NFA can be converted into a DFA using a simple algorithm. The only downside is that the DFA may end up with exponentially more states than the NFA, which can take up a lot of memory.
Good thing you enjoyed it, I have not-so-fun memories of doing DFAs and NFAs by hand in college. :(
@@Nayayom bro same. I hated it. But understanding them is pretty good. We can see how Google's devs used automata and formal grammar theory to develop a useful practical application with regex.
@pav431 even without some formal education, I'd say anyone working as sysadmin should know something about algorithms and complexity theory. Especially when writing code that systems used by others depend on. And shell scripts _are_ code.
Knowing that there are regular expressions, and highly irregular expressions that are just _called_ something like "Perl compatible regular expressions" or "extended regexp" or whatever, is important. So is not writing scripts that unnecessarily nest three or more loops and work fine on small test data, but take "forever" with realistic sizes of data. Know and understand the various O notations. Just because quicksort is usually quick doesn't prevent it from having O(n²) worst case complexity. You may be fine with that, but you will want to know why you can live with it. There has to be a metric ton of good books on this, so it's possible to learn. Enjoy!
@@lhpl As someone who's inherited, and then had to completely rewrite from scratch, core automation scripts for clusters that were written by novice sysadmins, I concur that learning these things is important.
Some sysadmins learn that awk and grep exist and that's the end of their training. Rewriting from scratch saved me hours upon hours it would have cost me to try to maintain the poorly made code inherited from novice sysadmins.
"Which you may notice is not linear."
This is one of those comp. sci. campfire horror story jumpscares.
Right? I just saw 22, 33, 44+1 and immediately thought "oh no". 😆
Absolutely crazy videos you're pumping out. Love your comedic editing style too!
Every video of yours makes me feel like the entire internet could break at any moment lol
The entire world runs on regular expressions that were written in rage.
8:46 what tool are you using here? Can I use it to visualize other programming languages' regex engine?
I was gonna ask the same thing
I don't blame you for doing Cloudflare again, their RCAs are always excellent. This is such an excellent channel, you deserve far more subs! These are exceedingly entertaining and interesting for software engineers (and probably most other folks too!)
I kept wondering why they didn't just do a rollback to fix the issue, thanks for addressing that at the end.
Yeah, as much as I love Cloudflare for smaller stuff there's a reason a lot of large enterprises use Akamai. A little overpriced for a simple growth phase startup and not as transparent as Cloudflare when something breaks on their end, but that massive bucket list of features available with Ion Premier, Cloudlets, and many more, especially Datastreams and their web security analytics portal, is an absolute lifesaver. Hell, it helps us debug all sorts of broken stuff upstream of it too, although I wouldn't be surprised if Cloudflare offered something like Akamai Reference IDs for easy, enterprise-friendly tracing. Specifically, Akamai is really particular about having identical Staging and Production sections with really fast rollback when production error rates increase even a little.
All those rules are stored locally on each node, and you cannot roll back a machine that is dead or so high on CPU that it can't even handle a connection. I presume they globally disabled WAF and restarted the nodes, so when up, they didn't try to apply the WAF rules and were free to be rolled back/forward. Then they re-enabled WAF (very slowly, I presume :) ) and all was back to normal.
Non-capturing groups (unlike lookahead and lookbehind) do get included in the match result (think $0); they just don't create an additional sub-match.
Eg at 6:35, that would match $0: $1: $2: while removing the ?: would make it $0: $1: $2: $3:
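A small Python illustration of that group numbering (the pattern and string here are made up):

```python
import re

m = re.match(r"(\w+)=(\w+)", "key=value")
print(m.group(0))   # 'key=value' - $0 is always the full match
print(m.groups())   # ('key', 'value')

m = re.match(r"(?:\w+)=(\w+)", "key=value")
print(m.group(0))   # 'key=value' - the (?:...) still consumed 'key'
print(m.groups())   # ('value',)  - but it created no sub-match of its own
```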
Regex is great like shell scripts: it works everywhere and does its job... up until a certain size, when the chance of bugs starts increasing and you should think of using another tool instead or in conjunction.
Also this sounds like GitOps to the extreme: when you can only change your state via your repo and all the triggers that come with it you might as well replace your CD with a single bash script (see above).
I absolutely lost it when I saw the "Cloudflare Analytics Dashboard" - and then remembered I use real software that has a dashboard that looks very similar, not even as pretty but grey 1998 vibes
I had a regex blow up on me like that once. Not **quite** as silly as .*(?:.*=.*), but pretty close. The regex library we were using implemented backtracking with recursion, so instead of eating CPUs like a bag of chips it would instead masticate for a while before eventually running out of stack, whereupon it would puke Pringles. This was an especially fun one to fix because if you google “regex stack overflow” you’ll find that there are zillions of questions on stackoverflow about regexes that have nothing to do with stack overflows.
And yes, in shame I must admit the regex in question did not fit on my screen all at once. In my defense, however, that was because a year or so earlier I had torn the line noise apart and put 3 to 5 characters of actual regex on each line, followed by a comment. Only two lines had // I have no idea, this shouldn’t do anything, but it doesn’t work without it.
lmaoo man i feel you
Come to the dark side. We indent our regexes.
@@tehlaser jesus christ man
It's kinda funny that the Internet was designed to be a `web` that hopefully would prevent failures of a single node taking down the whole system, but nowadays we heavily rely on a handful of service providers just to run the Internet.
@11:50 You don't need a DFA to guarantee time linear in the input length because you don't actually need to follow every path, you just need to keep track of the set of states that the NFA is in. This does make it more expensive proportional to the size of the NFA, but it's still linear in the input length.
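A minimal sketch of that set-of-states simulation in Python (the example machine encodes (a|b)*abb; epsilon transitions are assumed to be eliminated already, and all names are illustrative):

```python
def run_nfa(transitions, start, accepting, text):
    """transitions: dict mapping (state, char) -> set of next states."""
    current = {start}                  # the set of states the NFA could be in
    for ch in text:
        nxt = set()
        for state in current:
            nxt |= transitions.get((state, ch), set())
        current = nxt                  # paths ending in the same state merge here
        if not current:
            return False               # no path survives; fail early
    return bool(current & accepting)   # work per character is O(|states|): linear overall

# NFA for (a|b)*abb: state 0 loops on both symbols and may also jump to 1 on 'a'.
t = {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2}, (2, "b"): {3}}
print(run_nfa(t, start=0, accepting={3}, text="ababb"))  # True
print(run_nfa(t, start=0, accepting={3}, text="abab"))   # False
```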
Ken Thompson is crying really hard. His work has been around for decades. As a CS guy who has specialized in algorithms, this hurts right in the heart.
I stumbled across your videos yesterday, and I find them really entertaining and interesting to watch! Thank you for explaining these topics in a clear way: even if I know nothing about regex or Cloudflare, I can still follow along and understand the video :)
out of the 10,000 times that this topic has been covered on TH-cam in this exact amount of detail,
this is so far the most recent.
kudos!
The reason we predominantly use NFA regular expression engines is not just because they're usually faster as long as we don't throw degenerate expressions at them, but also because they support expressions that exceed the capabilities of a regular grammar, such as back references to a specific capture group that has been seen previously.
I was under the impression that they're generally slower
@@MH_VOID For a normal case the performance is generally similar, but the difference is that these linear engines like RE2 are more predictable and less likely to blow up in your face.
If you don't have control of the pattern and the input, they are *much* safer, and losing features that depend on backtracking is generally not a big deal.
If it's really performance critical just don't use regex at all if you can avoid it.
NFAs and DFAs are computationally equivalent and recognise exactly the regular languages. So an NFA-backed RE engine would have to implement additional functionality, as languages with backreferences are not regular.
I've learned in the meantime that the biggest speed advantage is actually due to unrelated technologies such as a JIT compiler in PCRE2, which is in fact a top-down parser that happens to accept regex-like expressions. The only thing that is definitely faster about NFA is compiling regular expressions.
@@D0Samp Certain types of things that regex engines/matchers support aren't true regex, and can't be covered by a DFA/NFA. That's a bit BS to me.
Love these videos Kevin! Your amazing storytelling, editing, animations, and everything else comes together in an amazing way! Love watching every video you put out, keep it up :)
The Lemmino music really rounds this video off
Awesome and informative video! A small correction: NFA matching is still linear in the input string. You just have to store the configuration as a set of NFA states, rather than a single state. You don't get exponentially many paths in the way you describe in the video because paths ending at the same state are merged in this set representation.
these videos are so well produced, the jokes in the imagery are so on point.
> Delete master after the pull request is merged
LMAOO
Your videos are amazing, keep going you are bound to blow up
I like the part of explaining non-capturing groups and then throwing them out the window immediately after
Except it's wrong. You do non-capturing for performance reasons, to consume characters in some group (this is simply parentheses syntax reuse). In this particular case it was so obviously wrong I can't imagine anyone familiar not spotting it, but in general you shouldn't capture what's not required after the match is done.
paging kevin fang, your services are needed once again
This is an incredible video. You took very complex and difficult to understand concepts and simplified it well. Well done.
Where do I get the tool that you used at 7:24 for debugging regex?
When in doubt, implement your expression in a delayed loop so it doesn't murder everything.
Even without being into coding I understand how this could work and it puzzles me why they didn't do this lol.
A rollback being a special case of a rollforward had me kekking heartily
I love the graphics, depicting real processes very well but hilariously funny at the same time!
Things exploding in your videos is the entire reason I wake up every morning, thank you friend, it's freakin hilarious
love the style keep it up man
this video about shitty code explains regex, DFAs and NFAs better in like 5 minutes than my university formal languages course did in months
The process seems awfully familiar...but how......
The animation and comedic aspect of this video is great. Plus its explained extremely well. Nice
I mean they could have just done .*?=.* but I guess RE2 is safer long-term. Still this screams "I don't understand regex, it's just magic to me" on the part of that developer.
which is fair honestly, regex is basically just magic and once you understand the syntax you don't question its ways.
though I'm surprised nobody anywhere else along the development process was concerned about it. Apparently nobody who looked it over had any idea what it was doing.
The star means match 0 or more; making it lazy with ? is pointless
2:37 I love that he checks the "delete master branch after merging" box
I appreciate that the Australian Cloudflare is upside down at 4:25
I'm a simple man. The upside down Cloudflare over Australia made me laugh. Thanks.
Well, from the thumbnail image, the regexp (.*=.*) says "find the LARGEST chunk of text possible before a literal = sign, then find the largest chunk after it, including other = signs if they exist", and it will walk the entire chunk of data many times to ensure it gets ALL of them.
They probably meant to do (.*?=.*?), which would have found the SMALLEST chunks of text around literal = signs, and would stop as soon as it found even a single = sign.
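A quick way to see that greedy-vs-lazy difference with Python's `re` (the string is made up):

```python
import re

s = "a=b&c=d"
print(re.search(r"(.*=.*)", s).group(1))    # 'a=b&c=d' - greedy wildcards grab everything
print(re.search(r"(.*?=.*?)", s).group(1))  # 'a='      - lazy stops at the first '='
```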
Cloudflare's free tier has the most value of any free tier on the internet. They give you access to almost everything that the big companies have access to, just with certain limitations like max rule count. Amazing company
I can't believe I've only just run across your content - it's really well done and humorous, you're going places!
11:00 Wow, you've explained the usefulness of using DFA way better than my professor! Now it all makes sense!
Never have I ever expected to hear LEMMiNO soundtracks anywhere, glad to hear it from this channel!
8:00 I suspect if the one dotstar wasn't in a different group the parser would've been smart enough to just simplify it to a single dotstar
God I love watching these vids, I love the duality of high quality, digestible, information coupled with a nice sprinkling of "don't be a dipsh*t" commentary over issues and causation. Developer wise, nothing brings me more joy about my job than someone pointing out how much of an imbecile I *could* have been on that one day.
You are a gem,
thanks for the detailed information.
You didn't only explain complex information, you also explained how WAF companies work.
Thanks again.
Salam!!
Didn't expect to hear about theoretical computer science (which is a subject I'm taking this year) in this video, but nice work. It's nice to see actual real-world usage of converting ε-NFAs to DFAs. I wish our prof would have included this video in his lecture...
I understood more with this video than in half a semester of languages and automata
Another nice dev story, with interesting storytelling, really enjoyable to watch. Thanks, we are waiting for more : )
Excellent vid. Couldn't have been shorter. Didn't need to be longer. Learned lots. You might've achieved perfection.
Dude your illustrations are so good and funny! :D
This video has a pretty good explanation of regex engines TBH
2:37 this tiny easy-to-miss detail right before the cut to the next shot killed me 😂
I still have questions as to why there are so many non-capturing groups in that regex, and isn't the second ".*" before the "=" redundant? Could not this regex have been simplified?
Edit: Also considering parameter names are typically shorter than their values, even if you had to do this I would assume (.*?=.*) would be more efficient on average.
The following regex would return the exact same match count: =
If the purpose of that regex was to consume an entire line containing "=" (another thing it does, badly) then we could do something similar to what ya wrote: (?:.*?=.*)
Though, I'm unsure if the non-capturing group is necessary, so maybe the version provided is fine.
If we run the original through a regex tester? It doesn't matter how long the string is, it plain fails in every way. There is no use case for that regex except DoSing thyself.
Honestly the algorithm for turning a regex into a DFA is pretty simple. You process it into an NFA by creating a trivial one (just the regex itself on a single edge) and then expanding edges based on what operations were applied (for instance, an aa(ab)*b edge becomes an aa edge into a new state with a looping ab edge and b edges leading out). Once all edges are a, b, or epsilon, we can traverse to create a DFA. Each state of the DFA will correlate to a set of states in the NFA, so now we follow the outgoing edges for each input and note what group of NFA nodes we reach. If we haven't been in that group before, we add a state to the DFA. Then we connect it to the DFA group-state we were in with the input we followed. Repeat until all paths in the NFA are followed to a state we already checked.
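That description translates almost line for line into code. Here is a compact Python sketch of the subset construction (epsilon edges are assumed to be eliminated beforehand; a full version would take epsilon-closures, and the example NFA is illustrative):

```python
from collections import deque

def nfa_to_dfa(transitions, start, alphabet):
    """transitions: dict (nfa_state, symbol) -> set of NFA states.
    Returns a DFA transition table whose states are frozensets of NFA states."""
    start_set = frozenset({start})
    dfa, seen, queue = {}, {start_set}, deque([start_set])
    while queue:
        group = queue.popleft()
        for sym in alphabet:
            nxt = frozenset(t for s in group
                            for t in transitions.get((s, sym), ()))
            dfa[(group, sym)] = nxt
            if nxt not in seen:        # a new group of NFA states = a new DFA state
                seen.add(nxt)
                queue.append(nxt)
    return dfa

# NFA for (a|b)*abb: state 0 loops on both symbols and may also jump to 1 on 'a'.
nfa = {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2}, (2, "b"): {3}}
dfa = nfa_to_dfa(nfa, start=0, alphabet="ab")
print(len({g for g, _ in dfa}))        # 4 reachable DFA states
```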
breuuugh, how has this channel not been suggested to me much sooner
The line at the end could be revised; it isn't that "convoluted" regex should be avoided, in fact writing regex that appears more complicated often ends up being better in that it's more specific. Really bad regex happens when people want to write something quick and dirty that will consume all valid cases, but without considering constraints, which was part of the issue here. I know this is probably what you meant by saying convoluted, but one thing missing here is a bit talking about what you should consider writing instead, and people unfamiliar with good vs bad regex might not come away with the right idea.
another based regex enjoyer
Hahah, where did you get the animation @4:30m from? Looks really nice.
7:17 is it just me or is this process total insanity? Check all combinations??? I’m not very knowledgeable about these things, but I can’t help but think so
Regex matching cannot terminate unless:
1. You reach a success case
2. Every single search is a failure case
Kevin illustrated a depth first search implementation of regex, and all tree search algorithms have a worst case where your destination is the last place you look. I believe he picked depth first search because it causes the most dramatic worst case time waster
How else would you know a string doesn't match unless you check all possibilities? A computer can't predict the future.
@@gileee Very true. It's so easy to overlook that a computer cannot just glance at something and see at once, like a human, that it doesn't need to look at it. We're so used to our brains' ability to do massive parallel computing on the images our eyes see that we often forget that a computer cannot do that, but has to look at each element sequentially.
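A toy step counter makes the "check all combinations" point concrete. This sketch only mimics how a backtracker tries every way to split an input between the two wildcards of the fragment ".*.*="; real engines are cleverer, but the worst case has the same shape:

```python
def count_attempts(text):
    """Count the (i, j) splits a naive backtracker tries for '.*.*=' on text."""
    attempts, n = 0, len(text)
    for i in range(n, -1, -1):          # characters eaten by the first .*
        for j in range(n - i, -1, -1):  # characters eaten by the second .*
            attempts += 1
            k = i + j
            if k < n and text[k] == "=":
                return attempts         # found a place for the '='
    return attempts                     # exhausted: every combination failed

print(count_attempts("x" * 10))  # 66  attempts, all failing (no '=' present)
print(count_attempts("x" * 20))  # 231 - doubling the input roughly quadruples the work
```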
I love your videos on the internet blowing up. Perfect blend of programming, memes, and good graphics
this is by far my most interesting youtube channel . please keep it up ! I really enjoy this content
While technically every NFA can be converted into a DFA, the algorithm to do so (the subset construction algorithm) has an exponential worst-case runtime. This is probably why people try to approximate the DFA.
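The classic witness for that exponential worst case is the language "the k-th symbol from the end is an a": its NFA has k+1 states, but the equivalent DFA must remember the last k symbols, so the subset construction produces 2^k states. A small Python check (the construction here is illustrative):

```python
from collections import deque

def reachable_dfa_states(k):
    # NFA: state 0 loops on a/b and may guess "this a is the k-th from the end";
    # states 1..k just count the remaining symbols (state k is accepting).
    nfa = {(0, "a"): {0, 1}, (0, "b"): {0}}
    for i in range(1, k):
        nfa[(i, "a")] = nfa[(i, "b")] = {i + 1}
    start = frozenset({0})
    seen, queue = {start}, deque([start])
    while queue:                        # subset construction, counting groups
        group = queue.popleft()
        for sym in "ab":
            nxt = frozenset(t for s in group for t in nfa.get((s, sym), ()))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen)

for k in range(1, 6):
    print(k, reachable_dfa_states(k))   # 2, 4, 8, 16, 32: doubling with every extra k
```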
I love your videos and goofy animations please never stop doing these
11:35 hurt a lot. NFAs do not need to 'split off' new instances; rather, the state can be represented as a set of currently occupied nodes, which is, as the name nondeterministic finite automaton would suggest, bounded in size by a constant. A given NFA, then, can be evaluated in time linear in the length of the string.
Just observing your example, for what purpose would you keep multiple instances of the same node? A node is either currently active or not. Furthermore, there is no guarantee on the size of the reduced DFA better than exponential in the number of states in the NFA, so working with the deterministic version isn't necessarily always better.
Dude your graphics are so funny 💀 I just about died when you edited the flow chart for WAF deployments
12:17 I really loved how you used sarcasm throughout the video😂
You have perfect combination bro ngl
You just explained regex, which I literally couldn't get the grasp of before, in 30s
10:40 I believe this DFA does not recognise the empty string.
Therefore it doesn't match the (a|b)* regex.
Oops, added to description
I always thought that the = was the stopping state. Why would you want to backtrack if the next character is the one you are looking for and the previous expression already matched?
10:56
Are these two automata really equivalent? The left one doesn't take an empty string as a valid input, the right one does.
the upside down Australian Cloudflare server was a nice touch
every time this dude uploads it's an absolute banger
I'm starting to hope another outage somewhere on the internet occurs just to see a Kevin Fang video with hundreds of explosion effects in my recommendations yet again
This channel concept is brilliant. Thank you so much.
I'm currently working on my thesis where I'm also dealing with regular expressions and their internal NFA representation quite a bit. And I recently encountered some papers about these risks of the naive backtracking implementation most engines use. Very interesting to now see one of these problems occur in practice. This happening in practice also gives quite some validity to the approach Rust is taking, which ensures good asymptotic characteristics.
Only thing I want to point out is that you saying "the increase in steps can potentially be exponential" is fairly misleading. Since it makes it sound like this particular case has an exponential asymptotic runtime, while it only has a quadratic one.
Define "Most". All unix utilities use DFA's . The problem are interpreted languages, they use PCRE so they can use the ~= operator dynamically, as the recursive regex doesn't need compilation and the implementation is easier. (even bash uses PCRE for the =~ operator)
Using compilation caching, for example, all of them could use the normal regex library, but I guess it increases complexity.
@@framegrace1 That's fair. Most regex implementations I've encountered 🙂
But I'm a windows user plus I mostly use higher level interpreted languages, so adds up with what you're saying.
The DFA version combined with capture groups just gets much more complex, and I don't know of any implementation that supports lookarounds with this either yet.
0:36 oh australia
0:55 As soon as you said RegEx, I guessed it was a lazy match-everything wildcard.
Do most implementations of regex parsers use depth first search to parse the possibilities? Or is that just for drama and ease of explanation since backtracking is more intuitive to understand?
They need to, as that is how the "*" operator is defined. There are some common shortcuts, e.g. for ".*" followed by a fixed character, the engine will keep track of that character while gobbling up the string, so it can limit backtracking to those locations instead of going character by character. But in general, the engine has not much choice, as there is only one correct way of matching a regex to a specific string. For example, with ".*(.*=.*)", the only correct way to match "A=B,c=d,=" is to match "A=B,c=d," to the first ".*", nothing to the other two ".*"s and the last "=" to "=".
That, however, is why there's a general recommendation to avoid ".*" unless you really want the engine to start looking at the end of the string first. In almost all cases ".*?" ("as few as possible: start with 0 and add characters", instead of "as many as possible: start with everything and backtrack") is what you actually want. Also, often you don't really want ".*X" but "everything but X, then X", which is "[^X]*?X", or even "[^X]+?X" ("at least one character that is not an X, but as many as are needed, then an X").
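To round that off, a small Python comparison of the three spellings discussed above (the URL-ish string is made up):

```python
import re

url = "a=1&b=2&c=3"
print(re.match(r".*=", url).group())     # 'a=1&b=2&c=' - greedy: backtracks to the LAST '='
print(re.match(r".*?=", url).group())    # 'a='         - lazy: grows until the FIRST '='
print(re.match(r"[^=]*=", url).group())  # 'a='         - same result, one forward scan
```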