Just make a microservice for each bit
😂
Yes. And make sure each microservice is on a different machine. For performance reasons. And have them persist to a database. One database. On one machine 👌
LMAO
Actually so F funny
With a map reduce job for the state
Every checkbox checked spins up a docker container.
That's what I call microservices
And every pod spins up a node.
"Bytes are stored in the balls, right next to the microplastics-"
pee is stored in the balls in the balls
You can experience the ultimate challenge with 8 billion checkboxes with flags on my website : www.playstd.com
please support me
Imagine creating a small site that hits the mainstream media.
Marketing: That would be GREAT!
Engineering: That would be AWFUL.
All you need to know right there.
and it only cost 2 man-weeks in engineering.
Accounting: and we're fucked if that's on AWS or Microsoft servers
@@NickTaylorRickPowers Ugh, lol, brings up bad memories. We went through a phase where we moved to an enterprise agreement with MS, and everyone in IT moved their existing Excel forms, Access databases etc. to SQL server. MS hit us with a $2 MM true up that the business didn’t expect. That went over about as well as you’d imagine…
The establishment: There's no way society wouldn't crumble if we reduce working hours by 1 hour per day
The world's population during work hours: checking boxes
The establishment : you have to work.
How many humans actually work producing the food we eat : 3%
@@monad_tcp used to be everyone getting food all the time. Specialization is the reason we are where we are as a species.
@@monad_tcp if food is all you need you can just leave this complicated and arguably shitty life for a much simpler agricultural life
the visualization of corporate PR and "pledges"
I really appreciate how you brainstorm how things could be done before learning how the author actually did it. Really a good way to teach and really interesting to see the different ways to approach the problem.
It's a good learning strategy in general. When I'm watching something like a course, I'll pause before a problem, try to solve it myself, and see why I'm an idiot and why the shown solution is better.
It helps with both engaging with the material and practicing your own problem solving.
The fact that some nerd tried to DDOS the silly check box website
Average redditor
Lol i know the person that did that
I would have tried to spin up some bots to click on the checkboxes but that would have ddos'd the thing too
@@Kynatosh we were literally 50 people in a discord server doing this. I had 55 servers. Called in sick to work so I could check more boxes lmfao
There are plenty of basement dwellers who feed off of schadenfreude.
After watching through this whole thing, I can't help but be reminded of the write ups that Reddit did for r/place. The most interesting/technical one was for the 2022 version where they talk about doing a lot of these same things. "I-Frame" updates with websockets for difference updates, managing how to store the data in the backend, and even a little bit about scaling at that level.
What I like about this article though, is that it's just one dude with some help from a couple of friends. It's much more of a feel good story than when a corpo does something similar.
"is that it's just one dude with some help from a couple of friends. It's much more of a feel good story than when a corpo does something similar."
To be fair, I doubt more than one lead engineer and a few others were working on the r/place architecture. So the scale of people is roughly the same, the resources and personal network to gain information/skills you lack in the moment are just vastly different. Also projects like r/place are supposed to not fail, which definitely needs a lot more thought in advance. That being said the process that the author here described of learning how and why your scaling breaks step by step like you learning to understand your own code feels definitely more personal than corp essays.
@@psychoedge Yeah, it's different in some ways, but similar in the scale of the project. The key difference with r/place is that Reddit being a massive company has the money to front any bandwidth and server costs, so they didn't have to worry nearly as much about the cost of the overall project and could focus more on pure performance. If you're just one person fronting all the costs, you have to be a lot more focused on how much your implementation is going to cost you, and less about how performant you can possibly make your implementation.
I’m gonna be honest…. I’m feeling out of the loop because this is the first I have heard of this lol.
plot twist: brown hat hackers == cloudflare security (sales) team
I mean, he likely used the free cloudflare plan, so it wouldn't even count as sales
I give that one a 20% chance of being true.
"Hi, we noticed your site getting a LOT of traffic. Sign up for the premium plan now at $130,000 or we shut you down."😈
@@Exilum wooshy
@@gwentarinokripperinolkjdsf683 whoosh only works on full on jokes, not when there are clear undertones.
What I would have done.
1. Put the "entire page data" endpoint behind a CDN with a ttl of 1 second. That way your app server only gets 1 request for the entire state per second and the cdn handles scaling and distribution.
2. Perform delta updates via websocket, also sending a checksum across. If after the delta is applied the check sums don't match, fetch from cdn again.
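A minimal sketch of that reconciliation loop, assuming the delta arrives as (byte offset, new byte) pairs plus a checksum of the full state; the Delta struct, the FNV-style hash, and the refetch hook are all invented for illustration, not how the site actually did it.

// Illustrative only: delta format, checksum choice, and names are assumptions.
struct Delta {
    changes: Vec<(usize, u8)>, // (byte offset into the packed state, new byte value)
    checksum: u64,             // checksum of the full state after this delta
}

// Simple FNV-1a stand-in; any hash both sides agree on works.
fn checksum(state: &[u8]) -> u64 {
    state.iter().fold(0xcbf2_9ce4_8422_2325u64, |h, &b| {
        (h ^ b as u64).wrapping_mul(0x0000_0100_0000_01b3)
    })
}

// Apply the delta; a false return means the client should refetch the
// full state from the CDN-cached endpoint.
fn apply_delta(state: &mut [u8], delta: &Delta) -> bool {
    for &(offset, byte) in &delta.changes {
        state[offset] = byte;
    }
    checksum(state) == delta.checksum
}

fn main() {
    let mut state = vec![0u8; 1_000_000 / 8];
    let delta = Delta { changes: vec![(0, 0b0000_0001)], checksum: 0 };
    if !apply_delta(&mut state, &delta) {
        println!("checksum mismatch, refetch full state");
    }
}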
That's a great idea. Maybe a TTL of 5 or even 10 seconds would make more sense.
interesting take
bruh
we were drawing bitmap black/white images on the site, turning the 1mill checkboxes into a 1000x1000 canvas
we had a discord server of a few dozen people doing this
I eventually created my own infrastructure of 55 servers to quickly draw images. I spent all night from 11pm until 7am creating my own CI/CD system and checking boxes at an inhuman rate. The rate limit on OMCB was pretty tough, and it was per-IP, so I had to increase my IP count - thus the high amount of servers. Others managed to utilize about 2000 proxies which nearly nuked the site out of existence.
this was fun.
This kind of content from prime is gold. Explaining simple stuff like a pro
The blog post is already easy. Prime just rehashes. This isn’t his content
He isn't, he read everything somewhere and is just acting like he knows a lot about it.
Middle Management Simulator 2024
lmfao
An easy optimization would be to just send 10k checkboxes at once. Anything not visible is not interesting to the user, so no need to send the full million to them
Essentially virtualization/windowing
And also batch process the clicks. Show the user an update on the boxes they check, but only update the other pages and totals by 1K increments or whatever.
The term you are searching for is : View Frustum Rendering
@@JGComments click batching doesn't really work because multiple users could click the same box multiple times between updates.
Yeah the 125kb could have been better optimized, seemed like a lot of the work went into the patches
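As an aside on the windowing idea above, a rough sketch (the chunk size and endpoint are invented): map the visible region to a byte range of the packed state, so only that slice ever goes over the wire.

// Illustrative only: chunking scheme and names are assumptions.
const CHUNK_BITS: usize = 10_000;           // roughly one "page" of checkboxes
const CHUNK_BYTES: usize = CHUNK_BITS / 8;  // 1,250 bytes per chunk

// Which chunk a checkbox index falls into.
fn chunk_of(checkbox_index: usize) -> usize {
    checkbox_index / CHUNK_BITS
}

// Byte range of the packed bitset that a chunk covers, usable as an
// HTTP Range header or a query parameter on a chunked endpoint.
fn byte_range(chunk: usize) -> (usize, usize) {
    (chunk * CHUNK_BYTES, (chunk + 1) * CHUNK_BYTES)
}

fn main() {
    let first_visible = 123_456;
    let (start, end) = byte_range(chunk_of(first_visible));
    println!("fetch bytes {start}..{end} of the state"); // 15000..16250
}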
The guy had an in-house server but uses a cloud solution for the DB. GENIUS.
Do note that the bitset implementation requires neither divisions nor modulo. Because i / 8 == i >> 3 and i % 8 == i & 7, so the lookup would be const bit = (arr[i >> 3] >> (i & 7)) & 1; I believe.
I would like to mention something about the "bool" being 64 bits long. You are absolutely correct that it depends on which system it is, but the actual length itself depends on the implementation for the language. For example, C++ bools are 8 bits long (a byte) because that is the smallest addressable size you can have.
Yeah, same in Rust and x86 CPUs in fact can read single bytes, just not single bits. I think he just misspoke and the overall point that there's no such thing as native bit arrays is ofc still correct.
32 bit ints still need to be aligned to 4 bytes. So chances are good your bool value takes up 4 bytes while only using 1.
@@bgdgdgdf4488 No, not in an array. In a struct, yes, if there are no other small types to fill up the rest, but Prime was talking about arrays.
@@1vader Even in a struct, you may be able to "pack" all the members, forcing them to not have any alignment and be immediately after one another in memory
@@1vader sure, but lets be real, how often do you actually need a large array of booleans anyway? I can't remember the last time I needed more than 1 per function
@ 9:44 -> Fake news! booleans in rust don't take up 64 bits, they take up 1 byte or 8 bits. A struct of 3 bools takes up 3 bytes
You can test this by inserting the following in the rust playground:
struct Bools {
foo: bool,
bar: bool,
baz: bool,
}
fn main() {
// Returns the size of a type in bytes.
let size = std::mem::size_of::<Bools>();
assert_eq!(size, 3);
}
Looks like it was written in JavaScript at that time, and V8 is written in C++. A quick Google search said that while a boolean does take up 1 byte, depending on alignment it can take up to 8 bytes on a 64-bit platform
Edit: went back and listened to Prime here. Yeah he explicitly calls out Rust. He was wrong with your example above lol
Yea i thought i was going crazy for a second hearing this. That kind of overhead in Rust would make zero sense to me since memory is not quantized at word size.
x86 has been byte-addressable for how long? Doesn't matter that they are being accessed in the CPU as 64 bits
Won't this be aligned up to 4 bytes though?
Shifting bits can be done quickly and efficiently because the ALU has a "Shift Register" which implements an extremely efficient bit shift operation.
Yep it's literally a less than single cycle operation natively (if you account for superscalar/ out of order exec)
The guy who made this is a genius. Almost as smart as the Banana guy.
Brother, bool is only 8 bits, not 64. The only time it wastes that much space is if you're doing something like (u64, bool), which would take a total of 128 bits due to alignment and padding.
reminds me of the one million dollar website 😂
It revealed your age
@@prajnaparamitahrdaya damn it... you got me :-)
The naming reminded me immediately.
Same vibes for sure
Yes first thing that comes to mind, late 20s is the age
I had a feeling prime would take this and spin into an hour long video. Not complaining though :D
When he talks about using bitset, I'm pretty sure he means something like this --
A single byte is represented by 8 bits: 0000 0000
Each bit can be represented in-place as the decimal version of the 1 byte value: 64 = 1000 0000 // 8 = 0000 1000 etc
You can "set" or "get" a boolean value (represented by a single bit) from this "array" of bits using bitwise ops and that single byte value to represent the place for the bitflag. That means you can effectively store 8 boolean values with just a single byte. A checkbox is a boolean value in this case. So you can get 8 checkboxes into a single byte. Hope this clarifies.
Yes that is what a bit set is.
You are right in the principle, but not on the values
0b10000000 = 128 or -128 depending on signedness
0b00001000 = 8
To simplify, you can memorize hex :
0x0 -> 0b0000
0x1 -> 0b0001
0x2 -> 0b0010
0x3 -> 0b0011
0x4 -> 0b0100
0x5 -> 0b0101
0x6 -> 0b0110
0x7 -> 0b0111
0x8 -> 0b1000
0x9 -> 0b1001
0xA -> 0b1010
0xB -> 0b1011
0xC -> 0b1100
0xD -> 0b1101
0xE -> 0b1110
0xF -> 0b1111
bitset.js you can read in the imp
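As a concrete aside on the bitset explanation above, a small sketch of the byte-packed get/set (using the corrected bit values); the same idea works on u64 words by shifting by 6 and masking with 63 instead of shifting by 3 and masking with 7.

// 1,000,000 checkboxes packed 8 per byte: 125,000 bytes total.
fn get(bits: &[u8], i: usize) -> bool {
    ((bits[i >> 3] >> (i & 7)) & 1) == 1
}

fn set(bits: &mut [u8], i: usize, value: bool) {
    let mask = 1u8 << (i & 7);
    if value { bits[i >> 3] |= mask; } else { bits[i >> 3] &= !mask; }
}

fn main() {
    let mut bits = vec![0u8; 1_000_000 / 8];
    set(&mut bits, 999_999, true);
    assert!(get(&bits, 999_999));
    assert!(!get(&bits, 0));
}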
This might actually make a great interview question for Senior Engineers.
9:48 he really just said that? Struct with 3 bools is 3 bytes in size with align of 1.
Depends on if there's padding but yes. I think Prime got a little confused.
@@johndoe-j7z There won't be padding because bools have an alignment of 1 byte on every common architecture
Yeah I think he confused himself because the chatter was making him short circuit lol. The other stuff was bang on though
Brown Hat! What a designation (do refrain from visually imagining that). Will adopt
There is also 2 million checkboxes in Elixir
LOL
Go Elixir! 😁
b-frames (past future frames/pixels) are just content that didn't change between 2 frames or at least didn't change to a degree where they'd be included in a p-frame. During streaming that would be your static images/banners or your mic.
That's not correct. The data in P & B-frames contain both full macroblocks (image data), and predicted macroblocks that describe the difference between it and the macroblock it is referencing from another frame (in the form of motion vector). They require other frames it references to already be decoded and stored in memory in order to fully reconstruct the image. The difference between P (predicted) & B (bi-predicted) frames is that P-frames can only reference prior frames, while B-frames can also reference future frames.
This is the reason why IPB codecs are much more computationally heavy than if you only use P-frames, which in turn are computationally heavier than an All-Intra codec. In addition, depending on how many frames ahead or backwards you allow the encoder to reference, you will need more compute to encode or decode your video.
In addition, depending on the codec, all frames may reference themselves. Meaning if you have an exact repeating pattern, it may only need to have one macroblock of it.
watching prime react to anything makes it 100x better
Prime makes a 49 minute video about scaling 1 million checkboxes. Liveview does it with no scaling at all. Elixir on top as usual.
I've been responsible for maintaining a startup with exactly this infrastructure (r5+node+redis+mongo) for chatting with 10-15k paying subs, but I did not develop it, I was taking over with a new team after the startup was about to go bankrupt (bought out). This is how it's done in the real world too boys! But as I have not developed it, but took it over, I 2x or 3xed my ~10 years of commercial dev exp (20 total) just by watching this video.
All the things I did as a gamedev/frontend/backend just neatly tying into a single thing. Nice.
14:05 - I'm unsure Redis makes sense here… if your entire app lives on 1 or 2 instances on the same machine, then all Redis does is add latency to an otherwise simple operation/sharing of data.
He doesn't do software, ofc he can't understand that the way to do this is either a queue or an HA-DR store + algorithmic optimization
Load balancing is good for QOL
Often still add Redis for a single app if state matters, so you can update the app without downtime (if I'm too lazy to have an actual DB / it's just a cache-it thing for short-term temp user state).
A vast majority of that time the latency doesn't have much impact, you're talking 0.15ms or so for a set request (pipelining is like 0.5ms); this is on my work laptop though, but figures shouldn't be too different. This is especially true as every checkbox is independent of the others and lost updates don't really matter, as it just needs to be remembered that someone set it to true (is ticked).
@@Masterrunescapeer Serializing a file and sharing the file is not a hard task either.
Latency matters because we're talking about thousands of requests per second.
I understand why you/we do it (using Redis), but the truth is we keep adding dependencies that each add complexity and bottlenecks… when sometimes we could just do the work ourselves, spend the same time on it, and still get a vastly superior app.
That was so much fun to listen to!
Total adventure!!
Absolutely worth it!
“This guy is gonna get hired” took me out. Nolen used to work for Jane Street!
i know next to nothing about software engineering. this is so entertaining, i'm learning a lot
I saw this article and started reading it, but my eyes glazed over and my brain wouldn't engage, despite being really interested i was just too tired. Reading through it with you and your commentary was so much better, thanks.
I recently had a personal project go semi-viral in a niche twitch community and had to deal with many of these same issues, from late night panic scaling to shit hat hackers. Thanks for covering this story.
I believe I will have to read this article multiple times to just understand everything fully. But that just shows how much this guy has learnt from this project
i dont know why i got recommended this video, i know nothing about web servers, but im enjoying every *bit* of this video.
"Bytes are installed in the balls right next to the micro plastic"
TheOldPrimy, Ex FAANG engineer.
(Did not finish the video, just putting in what I think is the reasonable way to do it, if you knew you would need to scale.) You could just give the initial state to the user, make the client send updated checkboxes to the server as it goes, and then every 5 seconds or so give every user a patch of what's happened in the last 5 seconds, which you'd display over the next 5 seconds as if it were happening in real time. Made me remember a bit how Reddit did r/place.
Although effective, it feels like cheating
that's exactly what the website was doing, except 10 times a second and not once every 5. Also it was sending the full state once in a while just to be sure
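A toy sketch of that accumulate-and-flush batching (the interval, types, and broadcast hook are invented): collect toggled indices, then drain them into one combined patch per interval for all clients.

use std::collections::HashSet;
use std::time::{Duration, Instant};

// Illustrative only: accumulate toggles, flush one patch per interval.
struct Batcher {
    pending: HashSet<u32>, // indices toggled since the last flush
    last_flush: Instant,
    interval: Duration,
}

impl Batcher {
    fn new(interval: Duration) -> Self {
        Self { pending: HashSet::new(), last_flush: Instant::now(), interval }
    }

    fn record_toggle(&mut self, index: u32) {
        // A box toggled twice within one window is back where it started,
        // so remove it instead of sending a no-op.
        if !self.pending.remove(&index) {
            self.pending.insert(index);
        }
    }

    // Returns a patch to broadcast once the interval has elapsed.
    fn maybe_flush(&mut self) -> Option<Vec<u32>> {
        if self.last_flush.elapsed() < self.interval || self.pending.is_empty() {
            return None;
        }
        self.last_flush = Instant::now();
        Some(self.pending.drain().collect())
    }
}

fn main() {
    // Zero interval only so the demo flushes immediately; the comment above used 5 seconds.
    let mut batcher = Batcher::new(Duration::from_secs(0));
    batcher.record_toggle(34);
    batcher.record_toggle(34); // cancels out
    batcher.record_toggle(35);
    if let Some(patch) = batcher.maybe_flush() {
        println!("broadcast patch: {patch:?}");
    }
}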
For a bitset: you can mask the 6 lowest bits (for 64-bit words, i.e. u64) with AND (mask 63 or 0x3F) to get a bit address, and right-shift by 6 to get the array index. If the array is u8, both the AND and the shift would be 3 bits (mask 0x07, shift 3). In a case like this you would not want to address each bit individually; you can use the u8 to do a lookup into an array or even a switch. You would treat the word as a block and read them all at once (8, or word size, checkboxes at a time).
Hey Prime, you were partially correct on one part.
In C/C++ and most other languages, a struct of 3 booleans (or 1-8 booleans) takes up 64 bits on a 64 bit machine; 8 bits per boolean + padding to fill the struct to an aligned size. I don't know about Rust, but I strongly would suspect that Rust does the same.
The CPU indeed has to read from memory with 64 bits at a time, so the entire struct is read at once. Accessing members of the struct is architecture and compiler dependent and it varies wildly what optimizations are done compile-time that affect how the underlying members are accessed.
Storing 64 bits per boolean regardless of the amount of booleans in the struct would be a tremendous waste and would cause significant performance issues (think cache misses etc.).
Cache line is 64 bytes, not bits, so he's even wrong there 😔
Rust bool also has 1 byte align and structures align to the largest member align, so no it's actually better on Rust (iirc most C impls default to a min 4 byte alignment)
@@SimonBuchanNz that's not true? in C a struct with bool has alignment of 1
@@hwstar9416 hmm, is that spec or implementation defined? I could easily be misremembering, too.
@@SimonBuchanNz the standard doesn't enforce a size nor an alignment for bool (or any other integer type).
But it does say that bool has the lowest rank of all integer types:
"The rank of bool shall be less than the rank of all other standard integer types"
Every checkbox, creates the next gen javascript framework.
this is one big visual metaphor for companies pretending to care about safety or going green, checking boxes
Having recently dealt with processor architecture myself, I can say the smallest unit the CPU can handle is 8 bits for GPIs, but there are specific x86 instructions that allow an operation on a single bit, so a bit vector could be possible-ish
but yes, even that implementation would not change the fact that pages and cache lines are read at a time, not a single bit
YEeeeSsssss……. BROWN HATS!
Epic
Actual losers in the chat when he mentioned being an asshole and not a hacker. Like that Zeychin guy saying "Hey, some of us are both." 🤓🤓 Like what a loser
"I'm sorry they came in from Mastodon"
Yeah that could explain why you're blocked by furries x)
What's the deal with Mastodon?
Wait.. I've been watching the second Channel this whole time without knowing there was the first one 😮
42:23 “he’s also probably not a furry” rookie error on his part.
Prime needs to get serious "chatter, tell me how I'm overcomplicating this or admit you're trolling or you're banned" 😂
Wow, that's top content. What a great rundown and so well written.
There is a Vector Array. It's basically an array that gets bigger and smaller as needed. Like a dynamic array.
They are useful
Fighting the masculine urge to make 1 Billion Checkboxes
Did you see what the man has invested with proper limitations? One billion would be way worse xD
This was quite insightful. Thank you
"taught the client" is the official style for commit messages in the Linux kernel
learning a ton from this simple little project
This is like listening to a junior engineer who just learned basic RLE concepts.
0:32 It's the Internet equivalent of bursting bubble-wrap...? ✌🏼💕
I just checked a whole bunch of boxes in the shape of the word “GYATT” lol
Server gives static webpage content. Client randomly checks and unchecks boxes to simulate other users. Nobody's the wiser
Except if people figure it out and communicate with each other.
I can't even imagine getting any of my projects' flask server code picked apart by Prime. Zero optimization in most cases, but it works, lmao.
you need to go viral and write a technical article about it.
7:40 I had to store a giant 3d grid of 1-bit cells, and I did pretty much exactly this
And people still say "No need to scale it up to thousands of users when you're only going to get 1 user."
9:28 "There is no such thing as a vector of bits" to this I ask, what do you thing *vectorization* on many modern desktop CPUs actually is? Theyre instructions made specifically for performing operations on **bit vectors** (and other vector types too for your add/subtract/etc) but your bitwise operations are still valid! I've seen resizable bit vector implementations specifically so these instructions can be optimally made use of!
I love how I have no programming knowledge outside of basic Python yet I'm here, listening to Prime saying magic words and pretending I'm a big boy who understands everything 😎
Just a side line about bitsets.
In languages that support them (or at least the ones I have worked on) they are literally a pointer to the start of some assigned memory, then the bits are not set in the logical manner, but instead set up so that we can logic them out with a PEXT between the bit request index (appended to its inverse), then an XOR between the returned byte, the repeated request index, and some magic constant. It was an insane bit of dark arts maths, but it is truly static time.
16:00 On the update channel I would just send 64-bit (or maybe 32-bit) word updates with the word index (which is just 2 bytes long), so 10 bytes (or 6). And if you can batch outgoing updates, then an array of these (index, word) records.
By sending just 100 bytes you can turn 10 sparse bit changes into a 640-bit update. With just 1 bit changed you can sync 63 additional nearby bits.
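Purely as an illustration of that record layout (the framing here is invented): a 2-byte word index followed by the 8-byte word, 10 bytes per update.

// Illustrative only: 2-byte little-endian word index + 8-byte word = 10 bytes.
fn encode_update(word_index: u16, word: u64) -> [u8; 10] {
    let mut buf = [0u8; 10];
    buf[..2].copy_from_slice(&word_index.to_le_bytes());
    buf[2..].copy_from_slice(&word.to_le_bytes());
    buf
}

fn decode_update(buf: &[u8; 10]) -> (u16, u64) {
    let word_index = u16::from_le_bytes([buf[0], buf[1]]);
    let word = u64::from_le_bytes(buf[2..10].try_into().unwrap());
    (word_index, word)
}

fn main() {
    // One million checkboxes fit in 15,625 u64 words, so a u16 index is plenty.
    let msg = encode_update(1234, 0b1011);
    assert_eq!(decode_update(&msg), (1234, 0b1011));
}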
Each checkbox turns on a physical server to store the bit
Best read of the year so far.
Seems like a good usecase for learning how to use roaring bitmaps as well.
Tc is so wicked, ages ago i spent like a day reading the man page and trying out some of the interesting things it does
I like brown hat hacker. But you know, that's essentially what hacking started as: breaking it because it's fun and interesting to do. Although, like you said, the target matters, so I think brown hat is applicable for, yeah, just neat personal projects and stuff, especially single-owner sites with essentially no sec infrastructure.
10:34 you dont have to use 64 bit registers exclusively on x86-64 IIRC..?
I don't think the brown hat hacking was intentional; this just started to become a game of king of the hill for people
Man i got a lot to research from this
Take a shot for every box checked.
6:40 I would assume that they would do something like this:
u8 data[1000000 / 8] = {0};
void flip(u32 index) {
    u32 offset = index / 8;
    u8 bit = index % 8;
    data[offset] ^= 1 << bit;   /* toggle the bit */
}
u8 get(u32 index) {
    return (data[index / 8] >> (index % 8)) & 1;
}
Protobuf would go crazy here. The binary representation would be super easy and you can enable gzip compression over the wire trivially if I remember right. Protobuf with like ZeroMQ would be so fast here.
How would you do that in a browser?
@@MikhailBorisovTheOne Protobuf has a JavaScript library. One of its main use cases is gRPC which is browser -> server.
@@JMurph2015 Yea, maybe you are right. It probably uses websocket and sends binary data over it?
10:00 They meant a specialization of vector that bit-packs many bools into a single address. It is the default for vector<bool> in C++, and in Rust there is the so-called bit-vec
Hey, I'm the guy who said "Vec with Bits" in the chat. Glad somebody knew what I was talking about.
man never heard of bitsets after 10 years of programming, things happen
BitSet is a standard class since Java 1.0, but I only noticed it a few years ago
@@f.f.s.d.o.a.7294 And a Python thing as well, but at the time Prime was thinking in terms of JS and saying there's no such thing with an implied "in JS" to his comments.
It's mostly used in memory-constrained environments like embedded systems or even game engines. Each boolean only takes up 1 bit in a bitset, at least with how it's implemented in the C++ standard.
bro be checking boxes
It is a set because ordering is not important. We are never going to want to loop through it in a way that depends on order, so calling it an array would be over specified. Additionally, we don't care about adjacency in memory, so it would be perfectly reasonable to separate it into chunks, and store them without hard layout constraints on them.
It will be bit arrays somewhere down there, but it does make sense to hide it behind an abstraction. If you decide later that you need a specific implementation, you can build that without violating the specification of the bitset, and without expecting other things to break.
Also, while some of the brownhat stuff was definitely dickish (DDoSing is not clever or novel), the person that checked box 100 million might not have been. If Redis expanded that set to that size automatically with just one set, you would only need one person with QA instincts to try it once. That doesn't need malice.
A bool is 1 byte. A cpu can 100% read and write 1 byte.
Some CPUs like AVR can do bit level manipulation too, but the instructions can only execute on certain address ranges, I believe on registers.
Lithuania mentioned!!! Woop woop!
D mentioned! One more time! For 2 videos in a row... dlang has never been this popular before
knowing bit math should be a mandatory prerequisite for all programming jobs.
laughs in UX designer
@@DogeMultiversehe said programming jobs 👀
cries in embedded systems design
5:27 wouldn't it be better if we ignored single cases? So we would have 4W2JD or even ignore doubles, having the same length as 4WJJD
Slower reads
I'd like this as a informal coding challenge
Redis is overkill statically
I think this is why falcor exists lol.
You guys are silly, firestore documents charge per read so use base 64 encoding to store checked state and index with ads to pay for the read write cost XD
In a DB, make a state variable with 1,000,000 bits. Whenever a checkbox gets toggled, i.e. checked or unchecked, a request is sent to the DB saying which bit to flip. For example, if we send 34 then the 34th bit will toggle, and an update would be broadcast to all active devices giving them the data "34". This prevents multiple reads of the variable with 1,000,000 bits: it would only be read by the client side on initiation of the webpage or on reload/refresh. You can duplicate the variable to attain a historical record and increment a counter for every request sent
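Roughly what that flow could look like, sketched in-process rather than against a real DB (the subscriber channels and the version counter are invented): flip the bit, bump a counter, and fan the index out to every connected client.

use std::sync::mpsc::{channel, Receiver, Sender};

// Illustrative only: in-memory stand-in for the DB state plus subscribers.
struct Board {
    bits: Vec<u8>,                 // 1,000,000 bits, packed
    version: u64,                  // incremented per toggle, a crude history counter
    subscribers: Vec<Sender<u32>>, // one channel per connected client
}

impl Board {
    fn new() -> Self {
        Self { bits: vec![0u8; 1_000_000 / 8], version: 0, subscribers: Vec::new() }
    }

    fn subscribe(&mut self) -> Receiver<u32> {
        let (tx, rx) = channel();
        self.subscribers.push(tx);
        rx
    }

    // The client sends only the index; the server flips that bit and broadcasts it.
    fn toggle(&mut self, index: u32) {
        self.bits[(index / 8) as usize] ^= 1 << (index % 8);
        self.version += 1;
        self.subscribers.retain(|tx| tx.send(index).is_ok());
    }

    // The full state is only read on page load / refresh.
    fn full_state(&self) -> &[u8] {
        &self.bits
    }
}

fn main() {
    let mut board = Board::new();
    let client = board.subscribe();
    board.toggle(34);
    assert_eq!(client.recv().unwrap(), 34);
    assert_eq!(board.version, 1);
    assert_eq!(board.full_state()[4] & (1 << 2), 1 << 2); // bit 34 lives in byte 4, bit 2
}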
I'm waiting for the Elixir rebuild of this site
9:50 I can find literally zero evidence for this assertion. Even looking online the worst case I can find people citing is that the LLVM will generally assign each boolean to an i1 and assign one byte to each boolean. I cannot find a single source corroborating that it assigns 64 bits to each boolean.
C *_also_* generally assigns 1 byte to each bool as far as I'm aware but if you have a struct of several bits then they can share the same byte. I could be wrong on this, I never quite got a firm hold on packing bits, but I think this is the case. The reason Rust doesn't is (as far as I can tell)
1 : because then they can't be referenced easily. (if you use other datastructures that forego this requirement they can pack bits in together) In theory I think this could be sidestepped in the compiler, but it'd be rather complex to add.
2 : "it's complex, if you pack them like that it's slower because then to reference any individual bit you need to use bitshifts and bitmasks!" no, just, just no. This is just more "hur de hur premature opimiztion rut of ahl EVIL!" waffling. Realistically for most operations you'd only need to do a predetermined bitmask but even if you *_did_* need to do both a bitmask and a bitshift those are like the two fastest operations a CPU can possibly perform so the memory savings would dramatically overcompensate for any possible CPU slowdowns. Oh, also caching is a thing. Having them packed together would improve the spatial locality of the data thereby improving cache hits, making it faster. So, no, this is just wrong, stop saying it. What you mean by "premature optimization is the root of all evil" is "I'm too lazy and/or bad to write good code".
3 : lack of demand given how much effort it'd take to make it happen, plus how easily sidesteppable it is with dedicated bitfield crates and such.
How do you avoid the bitshift? Rust only allows 1 to be recognized as `true` iirc
Yeah, bit fields in structures is a pretty common trick to use cache more effectively, AFAIK
C will put booleans in consecutive bits but only if you use the bitfield syntax (struct S { char x:1; char y:1; char z:1; } __attribute__((packed));) and add the compiler attribute to pack them together.
And why are you fighting invisible enemies? The reason Rust doesn't have it is simply that it's not that common of a use case unless you're making very low-level communication protocols, so there isn't much need for it; it doesn't have to do with the fact that some (idk who) would consider it inefficient.
I don't do Rust but I'm pretty sure the correct value is 1 byte per bool, same as C++. Incidentally, in C++ a vector of bools is implemented as a bitset. Two comments on that choice:
1. It's extremely fast.
2. It's considered a grave mistake. It's nice that the standard library has a bitset type with a similar API as vector. It's not nice that it's literally a vector. If you index into an element of the vector (of any other type) you get a reference to that element, which is pretty much interchangeable with it for all intents and purposes. When you index into a vector of bools you get a proxy type that does the bitmasking for you. It's convenient, it's a fine api, but it's not a vector.
" if you have a struct of several bits then they can share the same byte. "
Pretty sure this part is not correct. What is normally done is to pack for example, an i16 and two u8s in the space of a single u32. This generally depends on how the elements are ordered, since for example, if you did u8 i16 u8, the i16 would end up misaligned. Cpus can tolerate this, but it's slower, so compilers will typically add padding in that situation. In C++ you can opt out of this (to have a well-determined memory layout) with a pragma. No idea how it works in Rust. But I _can_ say that packing a bunch of bits into the same byte would require the same sort of bitmasking logic as in C++'s vector. Doable, but it _has_ to be opt-in. I would be shocked if Rust made the wrong choice here.
You could have easily explained it by referencing the C# BitArray struct. Bool in almost every language is at least the same size as char/u8, and BitArray is 8 of those ... 8*8 = 64 bits (8 bytes) to store 1 byte, and most programming languages do this similarly if they have a similar function. When I emulate 8bit and 16bit CPUs, I always use the relevant data type (u8 or u16) specifically for that reason - I've already got my word size available and I can just use boolean operators to get the value of the specific bit and go on from there.
One thing you are missing though, is that just because you're targeting x64 instead of something else, that doesn't automatically make the data type jump up. I target exclusively x64 in Visual Studio, my base int size is still 4 bytes (32 bit). And I didn't set it up to do that, it just came default that way for me. A CPU can and will pull increments of data, as long as it's not single bits (as you said), but it only pulls what it needs. An ASM instruction exists even now for grabbing 8 bits of memory in the ISA (this is why in programming languages, the smallest real data types are 8 bit - which reinforces what you were trying to say: there is no CPU instruction in any consumer ISA that pulls less than 1 byte of data), and this is what is run when your code is compiled.
In both C++ and C# I have to manually use the 64 bit data types to make my ints hold that value, or change some compiler flags. Also, 64 bit CPUs can address 128bits of data. This isn't even new tech, as even the OG GameBoy, with its 8bit Z80 like cpu could address 16bit addresses, by combining two registers and treating them as one (which is, as far as I know, how current gen 64bit CPUs address 128bits). It's all dependent on what instructions are implemented at the hardware level.
Edit: The C++ equivalent to C#'s BitArray is BitField. Although, instead of it being a struct of type Bool (which in C#, is itself a struct if memory serves... why I'll never know), it can be a collection of any integral type according to cppreference
I just found your YouTube channel recently.. I enjoy all of your videos.. Very insightful.. Although I disagree with your opinion on AI.. I see AI taking over virtually all of IT, with minimal expansion of AI tech.. Your videos will be useful for contextual reference in AI models