CORRECTIONS:
1. In the video I stated that Ashes of Creation was aiming for a single shard for the whole planet (like EVE). This is incorrect. Like SC, they will be doing regional shards at launch, which you can see here: ashesofcreation.wiki/Regions
2. WW2ol popped in (THANK YOU!) to provide corrections on my interpretation of WW2 Online's architecture here: th-cam.com/video/B8zkYsqLk7g/w-d-xo.html&lc=UgxlrANGE1OQ1YHlM_x4AaABAg Summary: they have server-tracked munitions that allow cross-zone bombardment. This is fascinating, and definitely an earlier precursor to some other techniques I talked about that I did not properly credit. I will seek to close the gap in my understanding on this ASAP! Thanks again!

Here are some links for those who want to dive even deeper. Find something interesting? Let's discuss!
Ashes: th-cam.com/video/pdav0as54mU/w-d-xo.html
Pax: playpaxdei.com/en-us/faq/tech
SC: robertsspaceindustries.com/comm-link/transmission/18397-Server-Meshing-And-Persistent-Streaming-Q-A
Due to latency, regional shards are not optional for any game that seeks real-time interaction between players. The ~100ms latency between the US and EU is enough to give a distinct advantage to the player located nearer the physical server, and the ~250ms latency between US/EU and Asia creates enough desync to make the game unplayable. This is a speed-of-light/causality issue that is physically impossible to overcome.
If I may share, there is also a source I have found very insightful: the video here on YT called "A tech introduction to SpatialOS". At a high level of abstraction, it seems to have a lot of similarities to how SC, and in part AoC, are doing it: use of gRPC for cross-machine communication, an authority system for dynamic (stateful?) distributed simulation, state replication across game servers with a sophisticated (Area of) Interest Management system, and scalable services between game servers and clients. And then, if I may, the "Unofficial Road to Dynamic Server Meshing" for Star Citizen, a comprehensive collection of everything CIG has shared about their engine's Server Meshing implementation. Full disclosure: I created that one myself, so it indirectly reflects my understanding of their architecture, even though I based it on CIG statements and my computer science knowledge.
@@SETHthegodofchaos I mean that a single worldwide shard is not possible for any game that requires low (real-time) latency. Regional shards are therefore required to provide playable and "fair" latency. I should add, however, that for Star Citizen, CIG has said that separate shards will likely still share a single economy, and the new "space shield token" concept will have the limited number of winners selected from across *all* shards.
@@ceb1970 Yes, that's also what I was getting at. As far as I know, CIG is aware of this limitation.

I think the idea of "one universe" stems from the original 2012 plan of using traditional instances. They weren't planning anything around Server Meshing back then. Players would be matchmade into instances when they quantum travel, jump through a jump point, or automatically land at a planet location. That would have been much simpler in design and capability. Back then CR explicitly stated that they don't like locking players into shards; they wanted it to be more free-flowing, allowing players to join instances regardless of where they're playing from, so you could meet friends from all around the globe. They even proposed placing the game servers somewhere between the geographical locations of the players to even out the latency a bit, for a better average experience. It was a neat idea, but I believe most players would have just stuck to instances in their own region.

With the redesign into Dynamic Server Meshing, they extrapolated that idea to "everybody is actually playing in the same universe". I don't think they should advertise it that way, because many people still expect it and have said they won't play the game if it doesn't work like that. Single Shard Purists, I guess :'D

These days (a few years ago already), they knew that a single global shard is difficult to achieve, so they are focusing on getting a single regional shard working first: one large shard for EU, another for US, etc. Only once those are running well would they do further R&D on how to go global. Netcode can smooth some latency out, but even that has its limits and will introduce artifacts like dying after entering cover, or peeker's advantage. I personally would be content with a single shard per region; I don't need the global one.
Thank you for answering my question! I really love the thoroughness and seeing how things advanced over time. I see now that the main confusion people have is in thinking that a server handling different regions of a world is all that server meshing is. That is more of a halfway point, and the real advancement of Star Citizen and Ashes of Creation is in allowing players to see and interact with each other across these regions. I'll be sharing this video anytime the subject comes up. Thanks again!
Yeah, the cross-server interactions (replicating data between neighbouring servers in real time) plus the dynamic division of the world as population density changes (dynamically changing which part of the world is governed by which server, rather than just splitting the population in an area up between servers, i.e. sharding) are the important things not done elsewhere, except in these 2 games as far as I know. In SC this would also work with moving parts of the world (vehicles), and any arbitrary 'container' can become a server, down to a single room (in theory). At least that is the plan; we shall have to see if they can pull it off without major issues. They have the first part working.
Particularly the "interacting" part. Even in Ultima Online from decades ago (a game that deserved a place in this video but was sadly overlooked), you could often see other players / creatures / structures across server line boundaries, but you couldn't interact with them.
@@ceb1970 Appreciate the comment! I played UO for a bit. Prior to it, the only multi-player games I played on the internet were text or ASCII based (BBS door games, MUDs, etc). UO really changed how I thought of MMOs and nothing was ever the same after it :)
@@grolo-af While I understand it isn't a normal MMO, I think Second Life has a server mesh that would have been good to talk about. They have regions of 256x256 meters. As you get close to a region boundary, a copy or "ghost" of you is created in the neighboring region, which allows player interaction between servers. Once you cross the boundary, your authority gets transferred to the other server. Because there is an open source version of their server structure, OpenSimulator, it is a good way to dive into how a server mesh can work.
Agreed. It's not an obvious tech to wrap your head around, at least for those of us who don't work in the field. Digging deeper into how SC works would be interesting.
@@alexb115 I think to really understand what is unique/special about Star Citizen's approach to server meshing, you also need to understand and dive into what Object Container Streaming (OCS) / Server-side Object Container Streaming (SOCS) is, what Persistent Entity Streaming (PES) is, and with that, how graph databases work. These are all fundamental to how CIG is accomplishing what they are doing with server meshing and what they call "hybrid": their replication service + replication message queue that has allowed them to decouple what we traditionally think of as a "game server" from the "state" of the game world, which is usually also managed by the game server. The server architecture with 'hybrid' is one of the unique things about SC: a database cluster interfaces with a replication service/server holding the game state, and both the "game server" (now only doing game logic/simulation) and the players' game clients connect to that replication server. Very interesting tech for sure.
@@grolo-af Yeah, a deeper dive into the tech, and especially how you expect them to tackle the inherent problems of players massing in the same place, etc. Things CIG has not really answered with this tech.
Hey Grolo, so what you're referring to as our sectors are actually 9 squares (what we call octets/cells), with the character in the middle square, and it continuously loads as you move throughout. There's no despawning when moving between these cells, and you can fly from England to Germany and it'll take you 45 real-world minutes. We actually do have server-tracked rounds which allow you to fire artillery from one sector into others, with about a 15km range (the game is modeled at 1/2 scale of western Europe). We also have server-tracked bombs that let you conduct high-altitude bombing, and you can hit enemy factories much like in the real war. The scale of what WWII Online can achieve is really where the magic is: you can have hundreds of players in a single battle and thousands of players on the entire game cluster concurrently.
Much better put! I was speaking from memory and from some of my ignorance. The many years of WW2OL development, hurdles, and trade-offs would make a very juicy video, especially in the current context of SC trying to implement large-scale battles and concurrent players. Although history might not remember it, I believe WW2OL pioneered this! S! o7!
Thank you so much for visiting and dropping more accurate information in here!! I will add this to the pinned correction post! But beyond that, I'm just thrilled to have you as a guest! I misunderstood the architecture a good deal it sounds like. I will correct this knowledge gap! I'm not sure I comprehend what you're describing as 9 squares, but I will soon!
@@grolo-af Think of it as a tic-tac-toe grid with the avatar in the center square. As the avatar moves, the grid moves with the infantry, plane, tank, or ship as it travels. Hope that helps. Thanks for mentioning the game.
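To make the tic-tac-toe analogy concrete, here's a minimal sketch of a 3x3 cell window following an avatar across a world grid - my own illustration with made-up cell sizes, not WW2OL code:

```python
# Minimal sketch of a 3x3 "tic-tac-toe" cell window following an avatar.
# Cell size is illustrative, not WW2OL's actual value.
CELL_SIZE = 1000  # meters per cell (hypothetical)

def cell_of(x, y):
    """Map a world position to integer cell coordinates."""
    return (int(x // CELL_SIZE), int(y // CELL_SIZE))

def window(center_cell):
    """The 9 cells surrounding (and including) the avatar's cell."""
    cx, cy = center_cell
    return {(cx + dx, cy + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}

loaded = set()

def on_move(x, y):
    """Load newly visible cells, unload ones that fell out of the window."""
    global loaded
    want = window(cell_of(x, y))
    for c in want - loaded:
        print("load cell", c)     # stream in terrain/entities
    for c in loaded - want:
        print("unload cell", c)   # stream out
    loaded = want

on_move(500, 500)    # initial 3x3 grid around the avatar
on_move(1500, 500)   # crossing east: loads a new column, drops the west one
```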
This is a great video, and it goes over basically all the different types of game server technologies too. Thanks for making it; looking forward to your subsequent vids on PlanetSide and going more in-depth into Star Citizen's totally unique, next-generation meshing behemoth that has been a long time coming.
One of the things we learned from the recent Star Citizen meshing tests is that the smaller the area a server controls, the larger the player count it can handle. As you note, a single-server shard of Stanton before meshing can handle about 100 players at the moment. But in the most recent meshing test, we saw >400 people on a single back-end server without any issues that weren't fundamentally art or level design related (e.g. not enough hangar doors at key landing zones). The big issue they have right now is that the capacity of the shard as a whole is limited at the top level by the service the players connect in through. Fixing that is key to getting it to scale to the levels they want.
> limited at the top level by the service the players are connecting in to

Can you elaborate on that?

re: smaller area equating to a higher population cap - this makes sense when you take the other things that must be managed into account: entities, NPCs, boids, etc.
@@grolo-af re: top level shard limits in SC. The way SC works at the moment is players connect to a service called "the hybrid", which looks to the client like a regular game server, but really it contains the replication layer and sends data to/from the back-end servers. Your game client is not given a direct connection to a specific back-end server; everything goes through this middleman. When a game server wants to know about things approaching its border, it uses the same service to get info from adjacent servers.

This is good in some ways - it's how the cross-border problem is solved (the hybrid can feed you data from multiple servers, and also clue a server in that something is coming so it can have your ship, character, and cargo all loaded and ready while you are rapidly approaching in Quantum). It's also how back-end server crash recovery works (servers can die and be replaced and you just see the game pause for a few seconds, then everything continues from where it was).

It's bad in others - almost all the scaling issues they are having in the meshing tests are because the hybrid itself has reached capacity and they need to do more optimisation on it. Each shard has precisely one hybrid, so more hybrids == more whole-universe shards. In the most recent meshing test, it started to display performance issues at around 900 people in a shard. Yes, it /worked/ at 2000, but not flawlessly - there were interaction delays (press button, wait several seconds to see results) which made things like elevators painful to use, and issues with players floating and bumping each other (severely enough to cause injuries in some cases), all caused by the messaging delays resulting from the player count on the single hybrid.

Static meshing as of today (with the hybrid replication layer) moved the needle from 100 players in Stanton alone to about ten times that across Stanton and Pyro with 4 to 6 back-end servers. The optimisations will keep coming, and that capacity will go up. But to go orders of magnitude beyond that, to the point they can collapse the shards down to just the four geographic regions they have (America, Europe, Asia, Australia), they'll need another architectural change. I think that change is separating the client connection part from the replication layer, and allowing multiple client connection services in the same universe shard. I assume that's also part of the dynamic meshing goal. It will be interesting watching it develop :-)
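As a toy model of that middleman role (my own sketch - the names and structure are assumptions, not CIG's code): clients and game servers only ever talk to the hybrid, which keeps the replicated state and routes each update to whichever back-end server owns the affected zone:

```python
# Toy model of a replication-layer middleman ("hybrid"), per the description
# above. Class names and structure are illustrative assumptions, not CIG code.
class BackendServer:
    def __init__(self, name, zones):
        self.name, self.zones = name, set(zones)

    def simulate(self, update):
        print(f"{self.name} simulates {update}")

class Hybrid:
    """Holds replicated state; clients never talk to back ends directly."""
    def __init__(self, backends):
        self.backends = backends
        self.state = {}  # entity id -> last known state (the replication layer)

    def client_update(self, entity, zone, data):
        self.state[entity] = data  # state survives even if a back end dies
        for b in self.backends:
            if zone in b.zones:    # route to the zone's authoritative server
                b.simulate((entity, data))

servers = [BackendServer("gs-1", {"stanton"}), BackendServer("gs-2", {"pyro"})]
mesh = Hybrid(servers)
mesh.client_update("player-42", "stanton", {"pos": (0, 0, 0)})
```

Note how every message funnels through the single Hybrid object - which mirrors why one hybrid per shard becomes the shard-wide bottleneck described above.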
@@JRM-VSR That was pretty detailed. I didn't know they were hitting the limit of the intermediate server already. Could having these intermediates also be dynamically meshed, spooling up along with back-end servers to control a single section of the map, help with this?
@@InfiniteWatermelon I think the end goal state is to pull the hybrid apart into a separate process for everything it's doing. Exactly how much parallelism this then allows and what the limits end up being we'll find out as they do it. The current system is definitely "a step along the road" and not the intended end state.
I can't add to this subject the way others more knowledgeable have, but I appreciate the walkthrough on the core workings of these systems and methods. I'm sure the minutiae of each method have their own unique aspects, but being able to grasp the methods in a generalized, understandable way is wonderful. Thank you.
Excellent video! I appreciate your ability to break down complex systems in a way that anyone can gain a basic understanding of how they work. Keep them coming!
Great video man, really well explained too. I'm into Star Citizen and had a decent understanding of Server Meshing in Star Citizen before this video, but learning about the other games that use Server Meshing, and how it works in each of them, really helped expand my knowledge of Server Meshing in general. It is always good to learn more about things, and I find it quite enjoyable to do, so thank you.
"Star Citizen is planning on sharding the entire Universe." 😂 I'm so immature. This was the best breakdown of all these technologies I've heard, thanks!
Saw a guy in SC fly a tiny ship through a C2 sitting on a server transition border. Go into the ship on one server, come out of it on another. Tech is cool as hell, more so once they hammer out the bottlenecks. Really incredible to see them figuring out threshold issues at 2000 players, where a year ago it was only 200.
Is this a new channel? I love this kind of content. I've been playing those games for so long (EVE in the background now) and it's cool to know some of the tech behind them. Hope to see more!
Yeah started this channel a few days ago... so I really appreciate the encouragement!! Good luck in EVE! Love that game. And if this is the space ghost that visited the stream last night... THANK YOU for that too!!
Super interesting talk, great history lesson; I learned a lot. One subject I would love your explanation on is the other side of server-to-client communication. This was about scale and how the future looks for handling a large number of players, but what about the opposite? Fighting games use what's known as "rollback netcode", and I never really understood what that means and how it works so well in games where literal frames matter. Keep doing what you're doing, this is really insightful!
Really appreciate the encouragement! Means a tremendous amount to someone new at this. Thank you! So far, I've been staying away from the client side, as there are a lot more considerations there that an actual game dev is going to have greater insights into. Servers are servers, systems are systems, networks are networks, and I can talk a lot about all this stuff... buuuut client types vary wildly. I've never touched 3D modeling software, and I've never developed for a GPU.

All that said, I can say a little bit about this here: I understand conceptually that rollback netcode anticipates inputs. It gets its name from what it does when it anticipates incorrectly. Something we do in systems all the time is roll back improper deployments, states, etc. For example, when some exploit happens in an MMO and the economy goes to crap as a result, the company may make the call to roll back game state to before the exploit took place (after fixing the exploit, of course). This is a good ol' fashioned rollback! Many of us may have been through one. I've been through many rollbacks in my time, mostly because we shipped a broken version of software and had to roll back quickly to the previous known-good version.

Rollback netcode is doing the same, but on the millisecond scale. To give you a fighting game that feels fluid, it predicts the inputs of your competitor based on previous inputs and context (the state the game was in + the input received), and it renders that input for you BEFORE it even receives it. It's literally predicting the future. Pretty amazing! But sometimes it's wrong; humans can be unpredictable! When it receives a different input than the one it rendered, it rolls back and re-applies the correct input. The thing is, this happens over very small increments of time, and most often you won't even perceive that it's happening. Pretty cool. Technology has gotten to the point where we can trick our eyes and brains by being "good enough" that a human can't tell the difference. Rollback netcode is one example. You see similar things happening with display technology, audio tech, AR/VR, etc.
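A bare-bones sketch of that predict/rollback/replay loop (purely illustrative, not any real engine's code):

```python
# Bare-bones rollback netcode sketch (illustrative only). World state is one
# number; each frame applies a local and a remote input ("L"/"R"/"N").
def step(state, local, remote):
    move = {"L": -1, "R": 1, "N": 0}
    return state + move[local] + move[remote]

confirmed = 0        # last state built purely from confirmed remote inputs
local_inputs = []    # local inputs since confirmation (always known exactly)
predicted = []       # remote inputs we *guessed* for those frames

def simulate_frame(local, guess="N"):
    """Run ahead immediately using a guessed remote input - no waiting."""
    local_inputs.append(local)
    predicted.append(guess)
    state = confirmed
    for l, r in zip(local_inputs, predicted):
        state = step(state, l, r)
    return state  # what we render this frame (speculative)

def receive_remote(frame, actual):
    """The real input arrives late. If our guess was wrong, rewind to the
    confirmed state and replay history with the corrected input."""
    if predicted[frame] == actual:
        return None                    # prediction held; nothing to do
    predicted[frame] = actual
    state = confirmed
    for l, r in zip(local_inputs, predicted):   # the "rollback and replay"
        state = step(state, l, r)
    return state

print(simulate_frame("R"))     # rendered with guessed remote "N" -> 1
print(receive_remote(0, "L"))  # guess was wrong; corrected result -> 0
```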
I would describe the difference between Star Citizen and previous server meshes this way: previous technology focused on meshing in-game spaces; Star Citizen is meshing entity graphs. Physical spaces within Star Citizen are just special nodes in the entity graph that can contain other entities. That makes it far more scalable and dynamic than older meshes that depended on physical spaces in the game world, which, as you mentioned, often have to be set up to limit how players can traverse between them.

Think of the universe of Star Citizen as a huge tree, with each system, planet, moon, location, ship, or even player existing as a branch on that tree. A server is told which branches it is in charge of. Other servers can be responsible for branches that stem from that server's branches. As players or ships move through the universe, they move to different spots in the universe tree, and anything associated with them - branches stemming from their own root entity - moves as one.

Theoretically, if they could create the physical space nodes on the fly, they could divide up any space however they wanted. The upcoming in-game event, the Intergalactic Aerospace Expo, is held within a conference center in the game. They could get hundreds if not thousands of players within that space by dividing the rooms, or subdividing the rooms into virtual spaces, with each space on its own server. The problem then becomes whether the clients can see all those players and how much data they can transfer to support them. They have talked previously about doing some sort of frame skipping for entities in a client's background, so that those entities might not receive updates every server tick but skip a tick or two, letting the foreground entities have priority.
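A tiny sketch of that tree idea, based only on the description above (the names and structure are my own assumptions): a server is assigned a subtree, and moving an entity re-parents its whole branch in one operation:

```python
# Tiny entity-graph sketch based on the description above (illustrative only).
class Entity:
    def __init__(self, name, authority=None):
        self.name, self.children, self.parent = name, [], None
        self._authority = authority  # set only at subtree roots

    def add(self, child):
        child.parent = self
        self.children.append(child)
        return child

    def authority(self):
        """Walk up the tree until we hit a node a server was assigned to."""
        node = self
        while node._authority is None:
            node = node.parent
        return node._authority

    def move_to(self, new_parent):
        """Re-parent this branch; everything inside it moves along."""
        self.parent.children.remove(self)
        new_parent.add(self)

universe = Entity("universe", authority="server-A")
stanton = universe.add(Entity("stanton"))
pyro = universe.add(Entity("pyro", authority="server-B"))
ship = stanton.add(Entity("ship"))
player = ship.add(Entity("player"))

print(player.authority())  # server-A (inherited via stanton -> universe)
ship.move_to(pyro)         # fly the ship (and everyone aboard) to Pyro
print(player.authority())  # server-B: the whole branch changed hands at once
```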
Thanks for the incredibly insightful comment! Yes to all of this. The frame skipping bit interests me. UE5 seems to be making a lot of progress in this space and the benefits are obvious. Look forward to seeing where CIG goes with this, but this particular area is well outside my area of expertise. I'm just an enjoyer of such engineering marvels. : ]
There is indeed a difference in the choice of spatial partitioning data structure/algorithm used. Various games used different ones: SC with an AABB tree, Dual Universe with an octree, SpatialOS with a grid, AoC with a grid + quadtree subpartitioning. An AABB tree does seem to be the best choice for a space MMO, but I am unsure if that alone will set the dynamic meshing implementation apart. The main concepts already seem to be converging across all dynamic implementations so far: replication across game servers, an authority system for distributed partial simulation, maybe server(s) in the middle to facilitate all that. The way the game world is split may differ, but I am not sure how much impact that has in the end.
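For anyone curious what the grid + quadtree option looks like in practice, here's a minimal sketch (mine, with made-up thresholds - not AoC's code) that splits a cell once its player count crosses a limit:

```python
# Minimal quadtree sketch: subdivide a cell when too many players crowd it.
# Threshold and sizes are made up; real engines tune these carefully.
MAX_PER_CELL = 4

class Cell:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size
        self.players, self.children = [], None

    def insert(self, px, py):
        if self.children:                       # already split: recurse down
            self._child_for(px, py).insert(px, py)
            return
        self.players.append((px, py))
        if len(self.players) > MAX_PER_CELL and self.size > 1:
            half = self.size / 2                # split into four quadrants
            self.children = [Cell(self.x + dx, self.y + dy, half)
                             for dx in (0, half) for dy in (0, half)]
            for p in self.players:              # push players down a level
                self._child_for(*p).insert(*p)
            self.players = []

    def _child_for(self, px, py):
        half = self.size / 2
        i = (2 if px >= self.x + half else 0) + (1 if py >= self.y + half else 0)
        return self.children[i]

root = Cell(0, 0, 256)
for i in range(10):
    root.insert(i * 20, 5)   # a crowd forms; the root splits into quadrants
print("root split into", len(root.children or []), "children")
```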
I have been trying to understand exactly what is different with SC (from a non-software engineer’s understanding level) since I discovered the game last year. This video did a great job at explaining the history and helping me understand some of these more abstract concepts. Really good video! Thanks!
Just as a note, the architecture shown for WoW was pioneered (as far as I know) by Anarchy Online in 2001, whereby they had independent machines running areas. AO didn't have the 'clever' hiding of limitations; they just had sparkling walls that would cause a transition, with no ability to see what was going on on the 'other side'. Some areas in AO were also instanced, like in GW2 (or dungeons in WoW). As a comment on Star Citizen: you 'have to resolve state somewhere', which puts limits on how much this can scale.
@@grolo-af It was a hot mess at launch and never reached much popularity, but it broke some new ground. It was common enough that the most popular areas would get overwhelmed and the server for a city (Tir for the Clans, Athens for the Omnitek, or Newland for neutrals) would go down. The other zones would all stay up, as would the chat server, but if you tried to zone into the crashed city you would go offline until that server was restored. Similarly, if you got a mission, whether personal or team, and entered an area to do it, it would be an instanced zone of your own / your team's own. Raid content tended to be a shared zone, so anyone going into the raid would be on a shared instance regardless of teaming or faction - some raids like Tarrasque allowed PvP in the zone too.
Cross-server interactions are super interesting! I think they're the hardest part of this to get right. I'm a game dev myself, and I don't think static meshing without cross-border communication is too difficult. But I have difficulty wrapping my head around how I would approach server-to-server communication, because it complicates the network stack so much.
100% agreed! I don't work in gaming, but I work on very large platforms, and so I can reason about the challenges they're facing... and there is nothing easy about it. Especially when dynamic partitioning must occur and you have to (in real time) split a 300 player server into two 150 player servers... without anyone noticing? Ha! When I see it in action I might shed a tear too! ;)
I feel like this gets missed in many discussions with people who criticize or only lightly follow some of these games. Your description gives a great explanation of the differences that I had a hard time articulating before.
Really interested in a deep dive into the SC replication layer and the potential bottlenecks that could come with this architecture vs. a meshed solution like what Ashes of Creation is doing. I like how the replication layer solves the reads for every client, but I'm wondering how far they can scale that part.
I would love to hear more specifics and technical speak regarding how two similar results (seeing players across server lines for example) can be tackled in different ways. Maybe a more direct and in-depth comparison of SC and Ashes. Great video though. Thanks.
Benoit did an interview on the future of meshing, and he seemed really confident that dynamic meshing is not going to be as big a hurdle as the windup for standard meshing
@@grolo-af Honestly, I get the same impression. If you think about the problem of setting it up to be static vs. setting it up to dynamically allocate zones, it's a different enough direction that you'd end up throwing out the static meshing rules once you go the dynamic route.
We found one of the server transition lines. We had a soft-death C2 on one server, with a Banner Reclaimer and Vulture on the other side of the line, and we were able to salvage the C2 from across the server transition line.
As a retired back-end dev I like to keep up with game server tech. You verified pretty much everything I have been able to find so far. Sadly, I have not been able to find any significant information on PlanetSide 2's and Throne and Liberty's back ends. The reason I bring these up is that both of these games might also be using some sort of meshing tech. The one thing I have heard multiple times is that PlanetSide is cheating a bit by using way too much client authority for that type of game. Another interesting topic would be exactly how the servers communicate with each other. I can't help but think CIG and AoC are using some sort of websocket tech for the real-time replication. CIG has been working on some sort of queueing system as well, which is kind of concerning because those can be a PITA to work with so late in a project.
Someone else mentioned PlanetSide 2, which I touch on in this comment here: th-cam.com/video/B8zkYsqLk7g/w-d-xo.html&lc=Ugwb4MZ0V80g9ivuqA54AaABAg.A9sRK0aurjbA9sYZyT-DL7 I know nothing about T&L. As for communication protocols - I could speculate, but that's all it would be, not having worked at these places; I've not found details on that topic published anywhere. As for queuing... there are pros and cons to that one too. A queue can reduce fragility by allowing a system to absorb load spikes better; however, it can introduce latency, and a real-time system can only queue so much before the experience breaks. Great comment! Appreciate the discussion! Hope you're enjoying retirement! :D I'll join ya soon enough.
The queue system in question is "NMQ (Network Message Queue)", and it feeds actions made by entities/clients to the DGSs and, by proxy, the Replication Layer (Hybrid). They recently did a soft rework of the system to prevent it getting jammed up under the increased load from higher player counts. The new version is called "RMQ (Replication/Replacement Message Queue)": th-cam.com/video/I0AdNFF286Y/w-d-xo.html I believe this system - and maybe the wider system too - operates primarily over UDP sockets. Take with a grain of salt, going off memory with this one.
What I keep seeing missing from most of these server meshing discussions around Star Citizen is the end goal of Dynamic Entity Server Authority/Meshing. Essentially, you can have a server simulating an area like New Babbage Commons, then have 700 players there with authority over them held by different servers - assigned not by physical (geographic, in-game) location, but by entity. An analogy I use to explain it: the servers are all CPU cores, the clients are all IO devices, the Replication Layer is the RAM, and the "world server" is the SSD/HDD and BIOS that tells which server is authoritative over which entities.
Great comment, thanks so much for taking the time to leave it! I get that, and it's extremely cool, but I feel it's slightly different from what was discussed here. This video focused on population management and managing the interactions within that population. I _could_ be wrong, but my understanding is that if you have two players on a moon assigned to different authorities, and those two players punch one another, it's not those entity authorities that resolve it. If it _is_, then please correct me... and also explain how they decide which one of them is more correct about what's about to transpire, i.e. which one "wins"? If it is _not_ the entity authority that decides, but rather the game server responsible for the environment within which they reside... then THAT is what is pertinent to this particular conversation, and why I focused on it. I agree with you that entity authority is a whole other thing that deserves its own discussion. It relates more closely to persistence, PES, etc. Again, something no other game (to my knowledge) is tackling.
@@grolo-af The entity authority I was speaking of is not each player as an entity, but entities within the locations. If you watch the r_displayinfo3 readout as you traverse Star Citizen, you'll see which entity you're a part of. When you wake up in Lorville, it says you're captured in the entity of the habs, which is in the entity of Lorville, in the entity of Hurston, etc. Then as you enter the elevator, you're encapsulated within that entity. In my example of the Commons in New Babbage, each room/shop is an entity, I believe. What I meant by not being geographically defined is that servers won't be assigned "physical" boundaries; assignment will be based on which (location-based) entity needs the additional processing power to simulate within its confines. I hope I'm making sense...
The issue with Star Citizen is that cross-shard replication is very consequential. Let's say you're defending your base with your org. If you reach enough player density to force the system to shard, your base will be replicated but be invulnerable in the replicas. This effectively denies your enemies a chance to attack. It may not be practical for small outposts, but for very valuable targets like space stations, you can bet that large organizations will keep 24/7 activity. If CIG decides the replicas can be modified too, and works out the state reconciliation issues, then you may see your base suddenly blow up for no apparent reason as a massive party, too big for your shard, intentionally engineers its shard placement to avoid opposition.
I don't think that's how it'll play out, as the game servers running the combat calculations are all dealing with a replicated copy of the space station. No game server owns the space station - not even the game server governing the result of fire upon it. I do agree that there are limits, even with this system, and density can only go so far before state transfer bandwidth demands become too much. Appreciate the discussion!!
Good video! The reason for regional shards in Star Citizen is definitely not a lack of ambition, but a practical one. Low latency matters more in Star Citizen due to the twitch-based nature of first-person games. When playing on servers across the world, the speed of light becomes an issue. You cannot go faster than the speed of light; it's our universal speed limit. Due to the size of the Earth, a signal traveling halfway around it through fiber (where light moves at roughly two-thirds of its vacuum speed) has a theoretical minimum one-way latency of about 100 milliseconds. But we do not live in a theoretical world; round trip, we are talking real-world delays approaching 500ms. That's an eternity for a twitch-based FPS game where people complain that a client delay of 16ms (60fps) is too much. You don't want to give North Americans an advantage over the rest of the world. That's unfair.

The most practical solution is regional shards. Americas, Europe, and Asia are their servers. Just 3. Cultural differences, language barriers, and time zones have people isolating themselves already. I am not going to stay up until 3am just to play with friends in the UK, and I am probably not going to play with Chinese or Japanese players because I don't speak their languages. They have stated their goal is one shard. But I don't think so. I don't think it's possible to have a fair and good experience for everyone with the speed of light limiting what we can do. But I am not a network guy. 3 shards is a nice, practical compromise IMO.
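The arithmetic checks out; a quick back-of-envelope, assuming light in fiber travels at roughly two-thirds of c:

```python
# Back-of-envelope check of the latency claim above.
EARTH_CIRCUMFERENCE_KM = 40_075
C_VACUUM_KM_S = 299_792              # speed of light in vacuum
C_FIBER_KM_S = C_VACUUM_KM_S / 1.5   # ~2/3 c in optical fiber

half_way_km = EARTH_CIRCUMFERENCE_KM / 2
one_way_ms = half_way_km / C_FIBER_KM_S * 1000
print(f"one-way, antipodal, in fiber: {one_way_ms:.1f} ms")  # ~100 ms
print(f"round trip: {2 * one_way_ms:.1f} ms")                # ~200 ms
# Real paths are never great circles; add routing, queuing, and processing
# delays and a worst-case round trip creeping toward 500 ms is plausible.
```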
Totally agreed. I spoke on this without fully informing myself or giving it a ton of thought... I extrapolated the "one universe" I've heard in passing and proceeded with unquestioned assumptions. See where that got me?! : ] I also got caught up in the comparison with EVE Online, which CAN pull this off because latency is much less of a concern there (due to how input is received and processed in that game). But yeah, you're spot on 100%. Even that there's other isolating characteristics that support regionalization, beyond just latency. Thanks so much for the insights and comment. Really really appreciate it!
@@grolo-af Been loving these talks. It's great to hear someone break down the technical details. It has frustrated me to see people downplaying server meshing as though it were no big deal - especially the possible dynamic server meshing. I loved seeing Star Citizen with 2,000 players.

But one thing that has got me wondering is: what are the biggest bottlenecks? I get hundreds in one server. But what about evenly distributed? I would guess the replication layer would be the bottleneck? Thousands of people sending data, querying the graph database, etc. What are the theoretical and practical limits? Wouldn't dynamic server meshing still put a strain on the replication layer... server? Just a suggestion for a future video.

I must admit, even though I basically understand the concepts of server meshing, seeing it demonstrated live last year was still like seeing magic. But I understood why that map was L-shaped. I bet a limitation is that if you threw a ball from server 1 to server 3, it would transition from server 1 to 2 just fine, but from server 1 you'd probably see the ball disappear as it transitioned from server 2 to server 3. The L-shaped map masks that possible issue. Still amazing to see though!
The problem you're describing shouldn't be the case once the replication layer is functioning as advertised. But I'm assuming not all advertised features are in place yet : ) And I'm sure you're right - the L shape kept that from being obvious. Appreciate the thoughts on future content! Definitely in the realm of what I'm thinking about.
I'd like to toss my hat in to mention that 'Mortal Online 2' has probably the best 'meshing' system I've experienced to date. I don't imagine their server performance could handle the scale of the recent server meshing tests (the 1k test was HORRIBLE), but for what it does, and the community it serves, it feels very, very good.
I used to play Second Life quite a bit, where their world was split into 256m x 256m "sims". As you approached a sim boundary (within 16m if I remember right) your client would establish a connection to the neighboring sim (as a "secondary connection"), and as you crossed a sim boundary you would swap your primary and secondary connections. Sim crossings would be relatively rough depending on how much data needed to be exchanged and how quickly you were moving.
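A rough sketch of that handoff logic as described (illustrative only, not actual Second Life/OpenSimulator code), simplified to one dimension:

```python
# Rough sketch of the sim-boundary handoff described above (illustrative;
# not actual Second Life / OpenSimulator code). 1-D for simplicity.
SIM_SIZE = 256   # meters per sim
MARGIN = 16      # distance from the border at which we pre-connect

def sim_of(x):
    return int(x // SIM_SIZE)

primary, secondary = sim_of(0), None

def on_move(x):
    global primary, secondary
    if x % SIM_SIZE > SIM_SIZE - MARGIN:   # near the east border:
        secondary = sim_of(x) + 1          # open a secondary connection
    if sim_of(x) != primary:               # crossed the boundary:
        primary, secondary = sim_of(x), primary  # swap connection roles
    label = f"sim{secondary}" if secondary is not None else "None"
    print(f"x={x:4} primary=sim{primary} secondary={label}")

for x in (100, 245, 260):
    on_move(x)
# x= 100 primary=sim0 secondary=None
# x= 245 primary=sim0 secondary=sim1   (pre-connected within 16 m)
# x= 260 primary=sim1 secondary=sim0   (roles swapped at the crossing)
```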
Thank you for this video! I enjoyed it. Can you revisit the Star Citizen server meshing implementation once SC 4.0 drops and go deeper into why & how server meshing works?
From last year's CitizenCon and some of the systems guys: basically, the way Star Citizen's dynamic server meshing works is that each system and wormhole is its own server, and the planets are shards within those servers that overlay into a seamless experience. As the population of a system grows or shrinks, resources are allocated to or from other systems that need them. An example: the starting system, Stanton, is a server unto itself, and the wormhole between Stanton and Pyro is also a server that handles transferring players between systems without latency. Within Stanton, the planet microTech is a shard, along with its two moons. As more people flood onto microTech, into New Babbage, the shard starts to break up into sub-shards to handle load while pulling resources from other parts of the server. If, say, 20,000 players end up on microTech, the other systems may be reduced to bare shards while the server power is pushed toward maintaining stability for the players on that planet. As soon as demand dies down, those resources move back to their original place or, in some cases, go into standby.
Aye; it's exactly what will be required to support the type of behavior we'll see in the sandbox as conflict erupts, or players hold huge events... think Daymar racing on a grand scale, or festivals... people tend to congregate in droves and it can happen quickly.
@@grolo-af CIG has gone on record that the group that hosts the Daymar Grand Prix is now incorporated into lore. I think some of the event holders are working to make it a seasonal thing, so we'll actually have dedicated racing teams in the 'verse.
@@grygaming5519 No, it's more specific than that. A server per system is not good enough long term; that might be how it runs for now, but that's a temporary solution. So when you say "Basically the way Star Citizen's dynamic server meshing works"... you aren't accurate at all.
@@pylotlight Fair - I did forget the scale part of it all. The whole system CIG is building is for a near-limitless ability to scale what they need, when they need it. I also meant as of right now: we have only a portion of it working. Once 1.0 comes out it's going to act differently, as the game itself will be upgraded to its full potential.
Hey, thanks for this awesome video! Could you do a more in-depth explanation of SC's dynamic server mesh? I think a lot of people would also love a deep analysis of that.
It's worth considering which of these games allows you to aim and time an attack manually like a traditional FPS, versus which depend on designated targets and skills that automatically attack on a schedule, like a traditional RPG. The second is MUCH easier for a server to handle than the first. EVE is very much in the RPG camp. You can't aim the guns manually, and you can't time a shot manually either. You designate a target ship and activate a specific slot, which then cycles attacks with those weapons. Hit and miss comes down to things the server can figure out - and a dose of RNG. Player aim and accuracy do not come into it. Elite wanted manual aiming and firing, hence why it designates one of the clients as a peer-to-peer master every time there's a close encounter. The limits on what can be in each battle are ... coincidentally about the kind of limits you'd expect of an FPS game from the same era where one of the clients was also running the server.
You're exactly right. Someone joined the Discord server to discuss this and I noted the same, reposting here: If you think about something like WoW, the player says “start attacking” and the server starts computing a series of attacks with no further input from client. Contrast that with something like New World’s action combat in which the player is swinging, dodging, aborting swings, blocking… there’s way more the server needs to react to. And Pax Dei. All these games are placing way more demands on server. Ashes is interesting in that it’s going with a hybrid model, not true action combat… a choice that will likely allow them higher player counts per server. Great / insightful comment. Thank you!!
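To make the contrast concrete, here's a heavily simplified, hypothetical sketch of the two models: the tab-target server self-schedules hits after one "start attacking" message, while the action-combat server must receive and validate a message for every swing:

```python
# Heavily simplified contrast between the two combat models (hypothetical
# numbers; real servers also do pathing, range, and hitbox checks).
import random

def tab_target_fight(duration_ticks, swing_every=2):
    """One 'start attacking' message; the server self-schedules every hit."""
    messages, damage = 1, 0
    for tick in range(duration_ticks):
        if tick % swing_every == 0:
            damage += 10 if random.random() < 0.9 else 0  # server-side RNG
    return messages, damage

def action_fight(player_inputs):
    """Every swing/dodge/block is a client message the server must validate."""
    messages, damage = 0, 0
    for inp in player_inputs:   # one message per player action
        messages += 1
        if inp == "swing":
            damage += 10        # plus per-swing aim/hitbox validation
    return messages, damage

print(tab_target_fight(20))                   # (1, ~90-100): 1 message in
print(action_fight(["swing", "dodge"] * 10))  # (20, 100): 20 messages in
```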
On Star Citizen: if you are able, would you please talk about in-game examples of how CIG manages the servers and how they want to fix these issues, and maybe explain in a little more detail how persistence in the universe would work with shards. I loved that video, thanks!
@@grolo-af Glad you like the idea! I'm far from a network engineer, so having in-game examples of server behaviour helps me understand much better. I just saw your other video comparing AoC and SC and I loved the in-game examples you gave in it! Currently, I cannot play SC in the PU. The error code and behaviour I see make me think my character is stuck in a "live" and "offline" state at the same time. I still don't know how I managed to do this, but I hope understanding the backend a little more than zero (my current state) will help me replicate the issue and flag it properly on the SC Issue Council (IC). Really interesting stuff, I'm a new fan!
Really really appreciate that! Glad to have ya here! : ) Thinking up some new stuff incorporating some of these ideas! Gonna require me to hone my editing skills a little more, which I need to do anyway...
I wonder how PlanetSide 2 does it (i.e., which of these methods it's using). It's still the only FPS with ~900 players on the same map. Very interesting video, and very well explained! I would like to hear a more in-depth analysis, if you can keep it as "simple" to understand as this one.
Thanks for the encouragement and vote! Quite a few people expressed that, so I'm going to try and do something along those lines soon. As for Planetside 2, I talked about that a 'lil in this comment: th-cam.com/video/B8zkYsqLk7g/w-d-xo.html&lc=Ugwb4MZ0V80g9ivuqA54AaABAg.A9sRK0aurjbA9sYZyT-DL7
This was great! Very helpful, thank you. I wanted to do something like this as well, but I would have had to do more research on some of these games and how they worked, so I never did 😅

I looked at Second Life, and they seem to have had the "looking into adjacent servers" part already. It was a very basic grid where each server connects to its neighbor, and loading happens when a player crosses boundaries (which could lead to some stalling on those borders). So it didn't even have a sophisticated authority system like some of the other, later solutions.

If I understand it correctly, Elite would categorize as peer-to-peer, correct? I am curious how they prevent cheating then. Does someone know? I know p2p usually has the participants validate each other and find a consensus that way. But if they have a player as a host for authority (I assume), then that may not be the case.

It seems WoW's Layers within Shards are GW2's Shards within Megaservers. Would that be a fair assessment?

Also, in the SC Server Meshing Q&A they mention "Layers" as a contingency for highly populated areas/shards. I am not entirely sure what they meant by that though. Do they mean having multiple copies of a specific area (making it an instance), or multiple copies of the whole universe within a shard, where a matchmaking service can move you between them? Besides, we now know that there will be areas that are instanced and (semi)separated from the rest of the universe/shard, such as hangars, apartments, and the levels underneath ArcCorp. Maybe the planned in-game videogames could fall under that category as well, since you are supposed to enter a simpod and enter game lobbies. I do think that's a good addition. It allows for more contained and curated gameplay without the chaos of a potentially chaotic open world.

23:05 Is that cool though? I always felt like that's cheating :D It's like playing in slow motion. It's a clever implementation to keep every client and the server in perfect sync, don't get me wrong. But as a player, I value my real-time gameplay ;) I guess it works for EVE's specific type of gameplay, so I can't complain. It is clearly working for them and one of a kind.

30:38 It is interesting to me that WoW, Pax Dei, and many other MMOs call the instancing of areas of their hub world "sharding", while SC, as well as Ultima Online, calls each whole game world/universe a shard. This makes discussions like these a lot more confusing: different games using the same terminology for different concepts. What a Realm is for WoW is a shard for Star Citizen.

I also find it interesting how each dynamic solution splits its game world differently. SpatialOS uses a basic 2D grid, AoC a 2D grid with a quadtree, Dual Universe used an octree, and Star Citizen uses its custom AABB tree (ZoneSystem), planning to use custom groups of players and game objects independent of the AABB tree (so some form of spatial clustering algorithm). Great that you mentioned the nested nature of SC's AABB tree in that regard! I do think most of these solutions are converging on the same approach. SpatialOS seems to have been the de facto leader here, although it is odd that nobody released a game with it in the end :S

On that note, I still find it odd that AoC didn't go for a server between clients and game servers. They now have to route data to neighbouring ones so that the Interest Management system on each game server can decide for its clients what to send.
That's a lot of unnecessary data and load from other game servers. Even New World seems to have had a "Replication Layer" service between clients and servers to take that load, for their Static Server Meshing implementation. Star Citizen's first internal version, tested in 2020, was also one where the game servers connect directly, but they opted for the service in the middle, stating it would scale better for them. I don't have much hope for AoC's version, even though they proclaim otherwise, but we will see. I personally believe they were just unable to move Unreal's ReplicationGraph (their Interest Management solution) out of the game server and onto its own service (in time for Alpha 2). I still found it bad manners how they called out other solutions that do use a middleman service. Anyway, I hope this was insightful. Let me know what you think.
Very insightful post; THANK YOU for taking the time to drop it here!! A lot of good thoughts, points, and questions. I'll respond to a few...

> Elite would categorize as peer-to-peer, correct? I am curious how they prevent cheating then?

Correct. The devs play a cat-and-mouse game, and patch to hamstring cheaters as they're caught. It will always be a con of this solution. All solutions have cons.

> It seems WoW's Layers within Shards are GW2's Shards within Megaservers. Would that be a fair assessment?

Yep.

> Also, in the SC Server Meshing Q&A they mention "Layers" as a contingency for highly populated areas/shards. I am not entirely sure what they meant by that though. Do they mean having multiple copies of a specific area

Yes, but my understanding is that the contingency is not due to an inability to scale the mesh, but rather the in-game zones. I find this fascinating, actually. It's not that they think they won't be able to handle ten thousand players in New Babbage, for example, but rather that 10,000 players would completely clog up the spaceport. There wouldn't be enough ASOP terminals, or hangars, or trains, etc. In other words, the space itself has limits of scale. It'd be like trying to fit all the people in Heathrow airport into your local small-town community airport. Imagine the security line! So they'd have to physically redesign many areas to accommodate much larger populations, and that will take time. And so the contingency plan is layering (another word for sharding). But since they call the top-level fork a shard, they're calling a local fork a layer.

> Is that cool though? I always felt like that's cheating :D It's like playing in slow motion.

Cheating? Maybe :) I think it's a cool way of integrating a technical limitation into gameplay. Like you say, it works for EVE explicitly. It likely wouldn't work for any other MMO.

> I still find it odd that AoC didn't go for a server between clients and game servers.

This is a huge difference between AoC and SC! One I also find very interesting. I can't wait to see how it all plays out. This is why I tend to think of SC's approach as "more elegant". HOWEVER, I have seen many elegant solutions lose to less elegant ones in the real world, so it is no guarantee. Think of microkernels vs. monolithic kernels in operating systems. Most would argue microkernels are more elegant, more secure, more maintainable, etc. ... and yet Linux won the internet.
I think Riot's work with LoL servers is interesting. They essentially built their own high-speed data network and use it to connect players - like their own CDN for saltiness. I'm interested in how they're planning to leverage that server network for their fighting game 2XKO. Fighting games have their own cool network tech, rollback aka GGPO, which the creators open-sourced. Do you think rollback and bespoke server networks have a place in MMOs? For example, in Star Citizen, the ship accelerations are known quantities. Seems to me like rollback would help with latency in space battles. Edit: in case anyone is not familiar, there's a Killer Instinct dev video on rollback that's really good.
I absolutely think Rollback netcode could benefit action combat MMOs. I don't think it would offer much to other types of combat resolution, such as the hybrid tab-target implementation that Ashes of Creation is going with. Input latency is far less of a concern with that type of a system. But with Star Citizen's action combat, where precision matters, it could have a positive impact on gameplay from my POV. Now keep in mind, that while I'm a seasoned architect, I am _not_ a game developer, and there could be good reasons it's not applicable to SC that I'm not aware of. As for bespoke server networks... 100% I think the more you can tailor the hardware, the software, and the system design to a particular task, the better the system will be at that task. One place I would expect this to happen (eventually) is in the replication layer.
Could you also discuss PlanetSide 2's server meshing tech? I think it's going to be the most congruous with Star Citizen's goals. To my understanding, they had several mega servers that functioned similarly to WoW's (Briggs Lag Wizards represent!!), and, similar to WoW, those servers contained multiple servers for each location within them. The key was that the subservers were able to transmit data much faster, making firefights across server boundaries possible (if noticeably laggier).
Star Citizen also had to solve 64-bit positioning for its server meshing, which barely any of the others do, as well as being the only game where everything is a fully physical, persistent entity that stays forever rather than despawning at some point - which would otherwise mean billions of entity and physics calculations each frame.
Absolutely. I didn't even get into the persistence goals of SC, I decided to focus this video primarily on population management. The persistent entity stuff is quite incredible entirely on its own. PES is critical to making this possible though so that it does NOT have to process billions of entities on each frame. The primary benefit of PES is that entities are only streamed in (and paid attention to) when there's players near them. When players vacate an area, the entities are "streamed out" and no longer impact the performance of the server. Hopefully it's clear in the context of this video why that's so important. Appreciate the comment! Great one.
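A minimal sketch of that stream-in/stream-out idea (my illustration of the concept, with made-up entities and radius - not CIG's implementation):

```python
# Minimal sketch of proximity-based entity streaming (illustrative only).
# "persisted" stands in for the entity database of record; "simulated" is
# what the game server actively ticks each frame.
STREAM_RADIUS = 100.0  # hypothetical; real ranges vary per entity type

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

persisted = {"crate-1": (10, 10), "crate-2": (5000, 5000)}
simulated = {}

def update_streaming(player_positions):
    for eid, pos in persisted.items():
        near = any(dist(pos, p) < STREAM_RADIUS for p in player_positions)
        if near and eid not in simulated:
            simulated[eid] = pos                 # stream in: start simulating
        elif not near and eid in simulated:
            persisted[eid] = simulated.pop(eid)  # stream out: save last state

update_streaming([(0, 0)])
print(sorted(simulated))  # ['crate-1'] - distant crate-2 costs nothing per tick
```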
@@grolo-af I have been following CIG's implementation of their graph DB and all the fun it has been causing them. I am not sure how they can really test it, though, because (at least in my limited experience) every time I access the inventory system in game, you can almost hear the request slapping their raw DBMS, with no caching. I also wonder if that graph DB is going to do some of the forbidden multi-master data writes! I really do think it is an interesting system; it seems similar to the hierarchical DBMSs I worked with. There are some very interesting minefields with these types of systems and how they index. CIG really had to throttle back the persistence because of all this, and now only a few high-priority items seem to stay very long. On the good side, they can keep optimizing it and adding more persistence over time.
@@grolo-af Yeah, this is the depth that makes SC somewhat unique in its approach to server meshing. OCS/SOCS -> PES both enable the decoupling of game/world state from game logic/simulation, which in turn makes their entity-zone based approach to server meshing possible. Without SOCS, their approach would involve infeasibly more load. Good video, from a high level, on how these types of games have approached this and evolved, btw.
The servers won't have to calculate all entities, and same with NPCs, once no players can see them. They will 'vanish' into data points that only do anything once they reach certain checkpoints, like stations after a quantum travel, etc. Incidentally, this is exactly how Oblivion works, and all the other games on the Gamebryo engine (Skyrim, Starfield, Fallout 3/4/NV, etc.): NPCs only load when a player can see them, and it's the same with all the entities in the universe. It works like that in the game now.
@@DeSinc That is only the case when no players are around, but with shards in the hundred-thousand range under dynamic server meshing, that is still billions of entities and NPCs, my friend.
You should definitely do a video on Star Citizen's cross-server interaction. I'm curious how Ashes of Creation is solving that, especially as that's what is actually novel.
The major difference is PROP persistence: you drop an object in the world, it stays there; you build something in the world, all will see it. Other games only do this at the theme-park level.
In the example of Ashes or WoW or Pax Dei, why couldn't they use dynamic horizontal scaling (i.e., spinning up more servers) on the individual grids of the map to increase a grid's processing capacity, without needing to shard/layer or subdivide that grid further? In other words, if hundreds of players begin entering the Barrens and the individual server responsible for the Barrens suddenly needs more processing power than the biggest individual server money can buy, can MMOs leverage horizontal scaling, load balancing, and traffic distribution systems to solve this challenge? I can imagine it would be complex, and that parallelism, latency, and synchronization would be challenges, but why specifically can't horizontal scaling be used instead of seemingly more complicated and innovative solutions? Thanks
In this example, if you do not divide the Barrens up into smaller sections, then how do you designate which server a player is on? You could just do it by player name. So names starting with A-L are on server 1 and M-Z are on server 2. Okay, so now you have half the players on server 1, and the other half on server 2, all in the Barrens, spread all over the zone. How do they see one another? Each server can only see its own memory. So now you need a way for each server to tell the other server where every player is... and now you're back to the problem Ashes faces, except worse, because the area is larger. Alternatively, you have the players on the two servers NOT see each other. That's called sharding/phasing, and it's exactly what WoW does today. And what Pax Dei is doing (on a per-tile basis), and so on.
@@grolo-af I'm probably just stupid, but why do you need to divide the Barrens into smaller sections? Why not have each server just handle the first X players who join the map, and so on? About memory: wouldn't it be possible to lower the number of players per server and reserve some capacity to handle state sharing and updating? Although I'd bet doing that over the network isn't fast enough to be playable, the way it would be if it were all on the same PCB - probably a huge bottleneck there. Thanks for the reply though!
@@baska- Well, that's what I was getting at... if two servers have two groups of players, and they need to send all their state to each other... how is that different from just putting everything on one server? You end up with everything on both servers anyway. I know it _seems_ simple... but it's far from it. Not stupid at all! Good questions! This stuff isn't obvious, especially if you don't live & breathe it, hehe
The Star Citizen explanation is close, but it's not quite accurate; it's complicated, to say the least. The first implementation is actually semi-dynamic: there is more than one server running in Stanton and Pyro respectively. They also have some dynamic features that assign them in different ways to do some load balancing. Benoit explained it recently in an interview. There's a lot more to it, but it's quite crazy to see it working on the test servers today. The other thing about Star Citizen is that they are treating the servers as clients and utilizing a replication service to host everything. So if a server crashes, another one can spin up and replace it without a disconnect or loss of data.
Ya I didn't even get into failure modes and recovery. Also I wasn't trying to imply it was a Pyro server and a Stanton server at the launch of 4.0.... I have NO IDEA how they'll configure the static mesh. Twas just an example. Sorry for any confusion. And thanks a TON for dropping a comment about it! :highfive:
(On SC) What about the nodes, backend services and the way they handle information :D I've heard they're doing something similar to Facebook's way of handling databases and their data. Maybe I'll have the answer in the next videos? I'll find out soon :D
Glad to hear you're watching several! Happy to discuss more. I can respond to "similar to Facebook's way of handling databases" - as this hits very close to home for my own work - Facebook invented GraphQL, a technology used to represent arbitrary data sources via a graph, with a corresponding query language. CIG has chosen to use a graph database for its entities. You'll hear it called the entity graph. This is likely why you've heard what you did.
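For readers who haven't met the idea: a graph database stores entities as nodes and relationships as edges, and queries walk the graph. A tiny Python sketch follows, with an invented containment layout that is not CIG's actual schema.

```python
# Toy "entity graph": entities as nodes, containment as edges, with a query
# that walks the graph, loosely in the spirit of a graph database lookup.
GRAPH = {
    "stanton":     ["hurston", "crusader"],
    "hurston":     ["lorville"],
    "lorville":    ["hab_block_7"],
    "hab_block_7": ["player_42"],
    "crusader":    [],
    "player_42":   [],
}

def descendants(node):
    """All entities contained (transitively) inside `node`."""
    out, stack = [], [node]
    while stack:
        for child in GRAPH[stack.pop()]:
            out.append(child)
            stack.append(child)
    return out

print(descendants("hurston"))  # ['lorville', 'hab_block_7', 'player_42']
```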
I believe you got it very wrong about WW2OL (former player here; I studied game programming but went into normal software). WW2OL actually is more or less what you describe in Ashes of Creation (not familiar with it). I believe WW2OL cells/sectors are first sub-divided by terrain (I think they call these host cells), but then these subdivide according to player population and hardcoded equipment visibility, so it's dynamic.

As for the limits of WW2OL's implementation in a very high population scenario (I think max player tracking is around ~128): when the max cell player count is reached, if you are for example an infantry player you will get priority drawing/tracking tanks and infantry, while planes will only draw if they are something like 150m away. So in these scenarios planes will pop out of existence within seconds, drop their bombs and then disappear from your game. Infantry can pop up and disappear as close as 30m in very extreme "1000" player city battles. If you are a fighter or AA, then infantry gets low priority while planes will draw out to 3km or something like that.

One of the issues this created: if you were doing high-altitude bombing on a town, your priorities were AA, factories, ships, but not infantry. Because the bombs were also cell-tracked and there was no infantry in your cell (you being a bomber), infantry could not be killed, nor could they hear or see the bombs you dropped, yet they would see the buildings blow up. This was like 3 years ago; it was fixed by making a custom server to track bombs and artillery shells.

There were also some issues a while back where, when handshaking cell to cell, sometimes a player would not be drawn but would still be able to kill other players. The game also has an advanced ballistics and penetration model and uses a lot of client simulation to avoid race issues, but passes the calculation to the opponent (pseudo P2P). So you could, for example, increase your packet loss by seeding; this would desync the predictor path in your game and make you levitate or do crazy rotations. You could then shoot at vulnerable spots on a tank, and this client-side shot simulation was passed to your opponent, and the tank blew up on both the server and his game, despite there being no direct path to such a spot.

WW2OL is probably the best case study for Star Citizen by far; I think Planetside also copied this. This was the most advanced FPS networking at the time and I think it still is (provided SC succeeds). The guy that did this was contracted by Blizzard some years after and left the company.

And this comes finally to one of the issues, which you briefly touched on, that I don't see addressed by anyone with a software or networking background. WW2OL's large combined realistic player battles are baked into the game's design. The infantry gameplay has some dragginess to it because it incorporates player position prediction heavily (I think 50ms to 100ms); this is present on other vehicles too, but because those have higher momentum it's not noticeable. In SC, as far as I know, they are just doing basic off-the-shelf interpolation, and I'm concerned the current fidelity might be impossible to achieve, if not performance-wise then maybe even economically; running WW2OL isn't that cheap.
A lot of good insights here. You have studied WW2ol much more than I, and I appreciate you bringing the extra knowledge! You've given me some more things to dig into. Always easier to do so when you have some leads. It sounds like WW2ol has continued to add layers atop the original foundation to accommodate edge cases or new features as the game evolved? This can help pave the way for features more quickly (and with less risk!), but often at the cost of increased latency throughout the platform, when such things are layered on after-the-fact, rather than redesigning the original foundations.
I tagged some devs to see if they'd add any comments to this video; unfortunately they remade the website and older articles got nuked.. 😅 But I think I've played since 2006, on and off. The basic load-balancing architecture I think is mostly the same, other than a server to track projectiles and bombs, detached from the player visibility cells/servers. When it came to network packets, initially they were sending many smaller packets, and in my opinion it had smoother infantry play, but you often had both players able to shoot each other. They eventually went with fewer but larger packets; in my view this made mutual kills more rare but gave a bigger advantage to low-ping players and kinda screwed up a little bit of the infantry predictor code. From my understanding SC more or less wanted to implement WW2OL, but instead of having the servers all in the same location, they were using the cloud to get a server where all the connected players were at a middle distance, so e.g. West vs East Coast players would get assigned to a central US server. It was news to me that you said they are having a universe for EU, another for Australia etc., but I wouldn't be surprised, given the current level of fidelity. But my point is that you don't have to split the game by terrain and assign players; you can instead split players by visibility and then assign terrain. This mostly fixes the missile or rocket analogy you used, other than the scenario I mentioned for indirect fire.
With all that CIG have talked about, all their dreams, by my calculation we should easily be able to have upwards of a million players in a single star system battling it out. That's only 2,000-ish Bengals... so about 10-20 servers managing the ships (I would expect less tbh), then about 7,000 managing all the players within... making for a million player battle... with all 2,000 Bengals on screen... What's your take on the max theoretical players in a single star system/battle?
I think it's going to take a _lot_ of work to get to those numbers. Is it possible? I believe so. But it would be a herculean development effort. The devil's in the details. A few things I'd list as challenges with reaching those numbers: 1. Clients - even if you manage to scale the backend to 1M players in a battle, you now have to figure out how to manage rendering it all on a single client machine. We've seen things like the Planetside 2 system drop how many things it would render, but even Planetside 2 only scaled to 2k players in a battle. You'll need some SERIOUSLY aggressive downsampling. Engines are getting better at this over time. I'm not sure how well StarEngine does with this today, but my guess is it would need a lot more work in this area. 2. Assigning more and more servers to an area of space will eventually lead to network congestion on the backend. I think the replication layer is a fantastic design; however, there will still be limits. Pumping all those state changes around, from millions of players and thousands of ships across thousands of game servers and hundreds/thousands of replication layer servers and millions of client PCs... you'll certainly be staring down some physics. 3. You'll have countless edge cases happening in abundance - what happens when the boundaries of 2 servers overlap (say two capital ships collide)? What about 5 overlapping all at once? How fast is the handoff between game servers? What happens when a player traverses like 15 game servers in 2 seconds because boundaries are getting so close? To be clear: I think _all_ of this is solvable. I don't think it's solved yet. It will take a ton of work. But that's why they're funded. If players want to play in a universe "without limits" someone has to do some _very difficult_ work. There's no easy path to such a thing.
I think it's possible and inevitable. There's no guarantee that CIG is the one to achieve it, but as of today, I believe they are in the best position to do so.
Apparently your viewership skews young, or someone would have corrected you by now on Asheron's Call. Asheron's Call had dynamic load balancing across a single unified world. No shards at any level. Source: The book _Designing Virtual Worlds_ by Richard Bartle, page 38. The book was published in 2003, and is now available online under a Creative Commons license. It's easy to find with a quick Internet search. It's a good read, going into the history of MUDs all the way back to the '70s. So there are two answers to the question of why player capacity has dropped. Fidelity is one answer. Inferior technology is the other answer. Dynamic load balancing is so hard that almost no games even attempt it. Asheron's Call is a very notable exception, especially since it was done so early in the history of MMOs.
Really appreciate you taking the time to correct me (although I can assure you my audience does not skew young :) ). I never played Asheron's Call, didn't do enough research, and made some assumptions based on the time period. I've done additional digging since your comment and it's quite fascinating what they accomplished for their time. I will continue to educate myself. Also, I appreciate the book reference. I ordered a paper copy too!
This book is fantastic. I know what I'll be reading at night. Thanks again!! Reading about this stuff is one thing... I'm curious, if you were someone that actually played the game... how did it perform in the real world? Did load balancing events disrupt play? Were they seamless? Were there situations in which players overwhelmed it? What do you think kept AC from enjoying the success of EQ or UO? Marketing alone? The technology seemed superior.
Very interesting and informative. I wish the dude had put Planetside 2 in the list. It's probably relying a lot on the client side, but the tech for the time was like nothing else.
Planetside 2 did a lot of awesome stuff, but yes, it offloaded a lot of calculations (like "did I hit?") to the client, and then verified a subset of those calculations on the server to catch cheating. Continents were shards. So no cross-continent activity (which would not interfere with gameplay, so good choice). It supported like 1.5 to 2k players on a continent. A continent was further divided into hex zones; players were bucketed, and the server would process everything for that zone/bucket together. Things distant from that bucket would be ignored, lowering the load on the server by treating each region of conflict as distinct. Some pretty novel approaches. By mixing this approach for minimizing the calcs it had to do per user with offloading some stuff to the client (not purely server authoritative), it scaled a server pretty far. Not infinitely so, though. In an MMO, 2k players on a continent would not be a lot. But it seemingly served this game very well, as I hear a lot of people talk about how great it was. If a game FEELS great, it IS GREAT. Perception is reality in these situations. Kudos to those devs!
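A rough sketch of that bucketing idea, with square grid cells standing in for Planetside 2's hexes and a made-up cell size; interactions are only resolved against a cell and its neighbors.

```python
# Hash players into coarse grid cells; distant cells are simply ignored.
from collections import defaultdict

CELL = 250.0  # hypothetical zone size in meters

def cell_of(x, y):
    return (int(x // CELL), int(y // CELL))

def bucket(players):
    buckets = defaultdict(list)
    for p in players:            # p = (name, x, y)
        buckets[cell_of(p[1], p[2])].append(p)
    return buckets

def nearby(buckets, x, y):
    """Everyone in this cell and the 8 surrounding cells."""
    cx, cy = cell_of(x, y)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            out.extend(buckets.get((cx + dx, cy + dy), []))
    return out

players = [("a", 10, 10), ("b", 260, 10), ("c", 2000, 2000)]
b = bucket(players)
print([p[0] for p in nearby(b, 0, 0)])  # ['a', 'b'] -- 'c' is never considered
```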
@@grolo-af Hey, I appreciate you took time to answer; amazing video btw, watched it all. I vaguely remember playing Planetside 2, to be honest; it's been so long. I don't recall if we could shoot someone across the server/bucket boundary, which was mainly what I was wondering, because it's a shooter, so if there were server-to-server shootings it would maybe have been the first game to allow server-boundary interactions. But I honestly don't recall that. I do recall that some massive battles would basically stop rendering players and reduce the range of visibility, a bit like cities in FFXI back then, which made snipers useless as you could only hit what you could see. Funny thing is that grenades still worked on the invisible players, so intense battles became grenade spams.
You are wrong about Ashes of Creation taking the EVE Online route. Servers will be region-based and not anything like a megaserver. I don't know if you got your information from Pirate Software, but he was wrong. Also, on Star Citizen: they did say that a capital ship is gonna need its own server spun up if it's in the open world, but at CitizenCon they did show something called instanced fleet battles, so they might instance some of these big guild battles. I'm not really sure, because then what's the point of dynamic server meshing if we're just gonna instance the biggest battles? By the way, amazing video and insight. A lot of people always say server meshing is old and fail to see how it's being modernized by Ashes and Star Citizen. I have a question about Star Citizen's server meshing: wouldn't the replication layer still create a bottleneck, since it has to manage data across all the servers and star systems across all the shards?
Instanced fleet battles are for large PVE raid type encounters, is what i understood from the video. So we will have some super hard content that isn't able to be cheesed by sheer numbers in the open universe.
You're right! I got that wrong... I understood it to be that way based off their own server meshing video where they went deep, but I obviously misinterpreted. Thanks for the call out! I'll pin a correction. Yeah, I raised an eyebrow at the instanced fleet battle thing... not sure what to make of it yet. Need to dig into it more. As for your question on the replication layer... one way I've described this to people is... (and take this with a grain of salt because there's a lot of educated guessing here; I don't work for CIG and don't have access to the code)... think of it like iCloud (or whatever the Android equivalent is) where you have this massive pool of data (aka state). The clients are talking to it and the servers are talking to it. They're _both_ signaling changes to it. It's a pretty beautiful design really. Now, if it were one big blob truly tracking everything at once, you would be correct, but I doubt that's what's going on. I'm sure this state layer itself is sharded. Back to the iCloud analogy: when I make a request to iCloud, the system doesn't need to search ALL of iCloud for the data I need. It very likely is sharded by user ID, and so it knows exactly which host/cluster to go to given my account. Likewise, when a user or game server fetches or signals a command against this layer, which will surely be sharded by game region, it'll be a very small subset of the replication layer that needs to be consulted. That's how I'm seeing it based on what I've consumed and pieced together. Happy for any corrections to my understanding anyone may have!
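To illustrate the "it knows exactly which host to go to" point, here's a minimal sketch of routing lookups to a replication shard by region key. The shard names and region keys are invented; CIG has not published how their layer is actually partitioned.

```python
# Deterministically map a region key to one replication shard, so a lookup
# only ever touches a small slice of the state pool.
import hashlib

SHARDS = ["repl-0", "repl-1", "repl-2", "repl-3"]

def shard_for(region: str) -> str:
    # hash the region key and pick a shard; same key always routes the same way
    h = int(hashlib.sha256(region.encode()).hexdigest(), 16)
    return SHARDS[h % len(SHARDS)]

for region in ("stanton/hurston", "stanton/crusader", "pyro/ruin-station"):
    print(region, "->", shard_for(region))
```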
@@ceb1970 did you mean cross-regional shards?
Thank you for answering my question! I really love the thoroughness and seeing how things advanced over time. I see now that the main confusion people have is in thinking that a server handling different regions of a world is all that server meshing is. That is more of a halfway point, and the real advancement of Star Citizen and Ashes of Creation is in allowing players to see and interact with each other across these regions. I'll be sharing this video anytime the subject comes up. Thanks again!
Glad to hear it!! Thanks again ErikLiberty!
Yeah, the cross-server interactions (replicating data between neighbouring servers in real time) plus the dynamic division of the world as population density changes (dynamically changing which part of the world is governed by which server, rather than just splitting the population in an area up between servers, i.e. sharding) are the important things not done elsewhere except in these 2 games, as far as I know. In SC this would also work with moving parts of the world (vehicles), and any arbitrary 'container' can become a server, down to a single room (in theory). At least that is the plan; we shall have to see if they can pull it off without major issues. They have the first part working.
Particularly the "interacting" part. Even in Ultima Online from decades ago (a game that deserved a place in this video but was sadly overlooked), you could often see other players / creatures / structures across server line boundaries, but you couldn't interact with them.
@@ceb1970 Appreciate the comment! I played UO for a bit. Prior to it, the only multi-player games I played on the internet were text or ASCII based (BBS door games, MUDs, etc). UO really changed how I thought of MMOs and nothing was ever the same after it :)
@@grolo-af While I understand it isn't a normal MMO, I think Second Life has a server mesh that would have been good to talk about. It has regions of 256x256 meters. As you get close to a region boundary, you get copied, or have a ghost created, in the neighboring regions, which allows player interaction between servers. Once you cross the boundary, your authority gets transferred to the other server. Because there is an open-source version of their server structure, OpenSimulator, it is a good way to dive into how a server mesh can work.
Thanks, would love more in-depth videos on this
Very good to know! Thanks for the "vote"! : )
Agreed. It's not an obvious tech to wrap your head around, at least for those of us who don't work in the field. Digging deeper into how SC works would be interesting.
@@alexb115 I think to really understand what is unique/special about Star Citizen's approach to and implementation of server meshing, you also need to understand and dive into what Object Container Streaming (OCS) / Server-side Object Container Streaming (SOCS) is, what Persistent Entity Streaming (PES) is, and with that, how graph databases work. These are all fundamental to how CIG is accomplishing what they are doing with server meshing, along with what they call "hybrid": their replication service + replication message queue, which has allowed them to decouple what we traditionally think of as a "game server" from the "state" of the game world that is usually also managed by a game server. The server architecture with 'hybrid' is one of the unique things about SC, whereby you have a database cluster interfacing with a replication service/server which holds the game state, and both the "game server" (which now only does game logic/simulation) and players' game clients are connected to that replication server. Very interesting tech for sure
@@grolo-af Yeah, more deep dives into the tech, and especially how you expect them to tackle the inherent problems of players meeting together in the same place, etc. Things that CIG has not really answered with this tech.
Hey Grolo, so what you're referring to as our sectors are actually 9 squares (what we call octets/cells), with the character in the middle square, and it continuously loads as you move throughout. There's no despawning when moving between these cells, and you can fly from England to Germany and it'll take you 45 real-world minutes. We actually do have server-tracked rounds which allow you to fire artillery from one sector to others, with about a 15km range (the game is modeled at 1/2 scale of western Europe). We also have server-tracked bombs that allow you to conduct high-altitude bombing, and you can hit enemy factories much like in the real war. The scale of what WWII Online can achieve is really where the magic is: you can have hundreds of players in a single battle and thousands of players on the entire game cluster concurrently.
Much better; I was speaking from memory and from some of my ignorance.. The many years of WW2OL developments, hurdles and trade-offs would make a very juicy video, especially in the current context of SC trying to implement large-scale battles and concurrent players. Although history might not remember, I believe WW2OL pioneered this! S! o7!
Thank you so much for visiting and dropping more accurate information in here!! I will add this to the pinned correction post! But beyond that, I'm just thrilled to have you as a guest! I misunderstood the architecture a good deal it sounds like. I will correct this knowledge gap!
I'm not sure I comprehend what you're describing as 9 squares, but I will soon!
@@grolo-af Think of it as a tic-tac-toe grid with the avatar in the center square. As the avatar moves, the grid moves with the infantry, plane, tank, or ship as it travels. Hope that helps. Thanks for mentioning the game.
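A minimal sketch of that tic-tac-toe window: a 3x3 set of terrain cells centered on the avatar, with cells streaming in and out as the avatar crosses a cell edge. The 10km cell size is an assumption; WW2OL's real cell logic is more involved.

```python
# A 3x3 window of cells around the avatar; crossing a cell edge streams
# three cells in and three cells out.
CELL_KM = 10  # hypothetical cell size

def window(x_km, y_km):
    cx, cy = int(x_km // CELL_KM), int(y_km // CELL_KM)
    return {(cx + dx, cy + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}

loaded = window(5, 5)
moved = window(12, 5)                      # avatar crossed one cell edge east
print("load:",   sorted(moved - loaded))   # three new cells stream in
print("unload:", sorted(loaded - moved))   # three old cells drop out
```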
this is a great video, and it goes over basically all the different types of game server technologies too. thanks for making it, and looking forward to your subsequent vids on Planetside and going more in-depth into Star Citizen's totally unique next-generation meshing behemoth that has been a long time coming
Really appreciate the encouragement and vote! I'm cooking up more...
MAAAAAAATE
Very clear explanation of the differences between how each company is approaching this problem. Thanks a lot.
One of the things we learned from the recent Star Citizen meshing tests is that the smaller the area a server is controlling, the larger the player count it can handle. As you note, a single server shard of Stanton before meshing can handle about 100 players at the moment.
But when we did the most recent meshing test, we saw >400 people on a single back end server without any issues that weren't fundamentally art or level design related (e.g. not enough hangar doors at key landing zones).
The big issue they have right now is the capacity of the shard as a whole is limited at the top level by the service the players are connecting in to. Fixing that is key to getting it to scale to the levels they want it to scale to.
> limited at the top level by the service the players are connecting in to
Can you elaborate on that?
re: smaller area equating to a higher population cap - this makes sense when you take other things that must be managed into account, such as: entities, NPCs, boids, etc
@@grolo-af re: top level shard limits in SC.
The way SC works at the moment is players connect to a service called "the hybrid", which looks to the client like a regular game server, but really it contains the replication layer and sends data to/from the backend servers. Your game client is not given a direct connection to a specific backend server; everything goes through this middleman. When a game server wants to know about things approaching its border, it uses the same service to get info from adjacent servers.
This is good in some ways - it's how the cross-border problem is solved (the hybrid can feed you data from multiple servers, and also clue a server in that something it should be ready for is approaching, so it can have your ship, character and cargo all loaded and ready while you are rapidly approaching in quantum). It's also how backend server crash recovery works (servers can die and be replaced, and you just see the game pause for a few seconds, then everything continues from where it was).
It's bad in others - almost all the scaling issues they are having in the meshing tests are because the hybrid itself has reached capacity, and they need to do more optimisation on it. Each shard has precisely one hybrid, so more hybrids == more whole-universe shards. In the most recent meshing test, it started to display performance issues at around 900 people in a shard. Yes, it /worked/ at 2000, but not flawlessly - there were interaction delays (press button, wait several seconds to see results) which made things like elevators painful to use, and issues with players floating and bumping each other (severely enough to cause injuries in some cases), all caused by messaging delays resulting from the player count on the single hybrid.
Static meshing as of today (with the hybrid replication layer) moved the needle from 100 players in Stanton alone to about ten times that across Stanton and Pyro with 4 to 6 backend servers. The optimisations will keep coming, and that capacity will go up. But to go orders of magnitude to the point they can collapse the shards down to just the four geographic regions they have (America, Europe, Asia, Australia), they'll need another architectural change.
I think that change is separating the client connection part from the replication layer, and allowing multiple client connection services in the same universe shard. I assume that's also part of the dynamic meshing goal. It will be interesting watching it develop :-)
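A bare-bones sketch of that middleman pattern: clients and game servers both talk to a replication service that owns the authoritative state and fans out deltas. Class names here are illustrative, not CIG's.

```python
# Clients never talk to a game server directly; the replication service
# holds the authoritative copy and notifies every other peer of changes.
class ReplicationService:
    def __init__(self):
        self.state = {}    # entity_id -> data, the authoritative copy
        self.servers = []  # backend simulation servers
        self.clients = []

    def publish(self, entity_id, data, source):
        """A server or client signals a change; everyone else is notified."""
        self.state[entity_id] = data
        for peer in self.servers + self.clients:
            if peer is not source:
                peer.on_update(entity_id, data)

class Peer:
    def __init__(self, name):
        self.name = name

    def on_update(self, entity_id, data):
        print(f"{self.name} sees {entity_id} -> {data}")

hub = ReplicationService()
server_a = Peer("server-A"); hub.servers.append(server_a)
client_1 = Peer("client-1"); hub.clients.append(client_1)
hub.publish("ship-9", {"pos": (1, 2)}, source=server_a)  # only client-1 prints
```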
@@JRM-VSR That was pretty detailed. I didn't know they were hitting the limit of the intermediate server already.
Probably having these intermediates also be dynamically meshed, spooling up along with backend servers to control a single section of the map, could help with this?
@@InfiniteWatermelon I think the end goal state is to pull the hybrid apart into a separate process for everything it's doing. Exactly how much parallelism this then allows and what the limits end up being we'll find out as they do it. The current system is definitely "a step along the road" and not the intended end state.
I heard they were using more powerful hardware for the game servers in those tests, though. So maybe it's a combination of the two?
I cannot add information to this subject in a way that others more knowledgeable have but I appreciate the walkthrough on the core workings of these systems and methods. I’m sure the minutiae of each method has its own unique aspects but to be able to grasp the methods in a way that I can understand it to a generalized standard is wonderful. Thank you.
Really appreciate the feedback! Happy to do it.
Excellent video! I appreciate your ability to break down complex systems in a way that anyone can gain a basic understanding of how they work. Keep them coming!
Thanks so much for the encouragement! Means much to a noob (at video) like me!
Fun fact, star citizen’s server meshing has been actively tested by players recently in 2000 player lobbies
Yusss! Pretty exciting to watch.
Static meshing but yeah
It was a lot of fun to participate in as well
Being in a 1900+ player server was way crazy. Things were so alive and packed. Lines at elevators the tram looking like a Japanese subway lol
Great video man, really well explained too. I'm into Star Citizen and had a decent understanding of Server Meshing in Star Citizen before this video but after watching this and learning about the other games that use Server Meshing and how it works in all of those it really helped expand my knowledge of Server Meshing in general. I find that it is always good to learn more about things, and I find it quite enjoyable to do too, so thank you.
I really appreciate the comment; means a ton! THANK you! Very happy you found it worthwhile to watch!
"Star Citizen is planning on sharding the entire Universe." 😂 I'm so immature. This was the best breakdown of all these technologies I've heard, thanks!
Saw a guy in SC fly a tiny ship through a C2 sitting on a server transition border. Go into the ship on one server, come out of it on another. Tech is cool as hell, more so once they hammer out the bottlenecks.
Really incredible to see them figuring out threshold issues at 2000, where a year ago it was only 200.
Wow; that's amazing!
It is crazy indeed. It's sad, however, that so many ignorant people and players really don't understand the complexity of all of this..
@@yulfine1688Very true....so much toxicity on spectrum and social media in general regarding SC
Is this a new channel? I love this kind of content. I've been playing those games for so long (EVE on the background now) and it's cool to know some of the tech behind them. Hope to see more!
Yeah started this channel a few days ago... so I really appreciate the encouragement!! Good luck in EVE! Love that game. And if this is the space ghost that visited the stream last night... THANK YOU for that too!!
Super interesting talk, great history lesson, I learned a lot. One subject I would love your explanation on is the other side of server-to-client communication. This was about scale and how the future looks for handling a large number of players, but what about the opposite? Fighting games use what's known as "rollback netcode" and I never really understood what that meant and how it works so well in games where literal frames matter. Keep doing what you're doing, this is really insightful!
Really appreciate the encouragement! Means a tremendous amount to someone new at this. Thank you! So far, I've been staying away from the client side, as there's going to be a lot more considerations there that an actual game dev is going to have greater insights into. Servers are servers, and systems are systems and networks are networks and I can talk a lot about all this stuff... buuuut client types vary wildly. I've never touched 3D modeling software, and I've never developed for a GPU. All that said... I can say a little bit about this here:
I understand conceptually that rollback netcode anticipates inputs. It gets its name from what it does when it anticipates incorrectly. Something we do in systems all the time is roll back improper deployments, states, etc. For example... when some exploit happens in an MMO and the economy goes to crap as a result, the company may make the call to do a rollback of game state to before the exploit took place (after fixing the exploit, of course). This is a good ol' fashioned rollback! Many of us may have been through one. I've been through many rollbacks in my time, mostly because we shipped a broken version of software and had to roll back quickly to the previous known-good version.
Rollback netcode is doing the same, but on the millisecond scale. In an attempt to get you a fighting game that feels fluid, it predicts the inputs of your competitor based on previous inputs and context (the state the game was in + the input received), and it will render that input for you BEFORE it even receives it. It's literally predicting the future. Pretty amazing! But sometimes it's wrong. Humans can be unpredictable! When it gets a different input than the one it rendered, it rolls back and re-applies the correct input. The thing is, this is happening over very small increments of time, and most often you won't even perceive that it's happening.
Pretty cool. Technology has gotten to the point we can trick our eyes and brains because we can just be "good enough" that a human can't tell the difference. Rollback netcode is one example. You see similar things happening with display technology, audio tech, AR/VR, and etc.
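A bare-bones sketch of that rollback loop: simulate ahead with a predicted remote input, and when the real input arrives and differs, rewind to the saved state and re-simulate. Real implementations snapshot every frame and do this within a strict frame budget; `step` here is just a stand-in for one frame of simulation.

```python
# Predict the remote player's input, simulate ahead, then roll back and
# replay when the real inputs arrive and disagree with the prediction.
def step(state, inp):
    return state + inp          # stand-in for one frame of game simulation

history = []                    # (frame, state_before, predicted_input)
state, frame = 0, 0
predicted = 1                   # naive prediction: repeat the last input

for _ in range(3):              # simulate 3 frames ahead of the network
    history.append((frame, state, predicted))
    state = step(state, predicted)
    frame += 1

actual = [1, 2, 1]              # the real inputs, arriving late
for (f, saved, guess), real in zip(history, actual):
    if guess != real:           # misprediction: rewind and replay from here
        state = saved
        for r in actual[f:]:
            state = step(state, r)
        break

print(state)                    # 4 == 1 + 2 + 1, as if we had known all along
```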
I would describe the difference between Star Citizen and previous server meshes this way: previous technology focused on meshing in-game spaces; Star Citizen is meshing entity graphs. Physical spaces within Star Citizen are just special nodes in the entity graph that can contain other entities. It makes it far more scalable and dynamic than older meshes that depended on physical spaces in the game world, which, as you mentioned, often have to be set up to limit how players can traverse between them.
Think of the universe of Star Citizen as a huge tree, with each system, planet, moon, location, ship or even player existing as a branch on that tree. A server is told which branches it is in charge of. Other servers could be responsible for branches that stem from that server's branches. Player or ship branches, as they move through the universe, move to different spots in the universe tree, and anything associated with them (branches stemming from their own root entity) all get moved as one.
Theoretically, if they could create the physical space nodes on the fly, they could divide up any space however they wanted. The upcoming in-game event, the Intergalactic Aerospace Expo, is held within a conference center in the game. They could get hundreds if not thousands of players within that space by dividing the rooms, or subdividing the rooms into virtual spaces, and each space could be on its own server. The problem then becomes whether the clients can see all those players and how much data they can transfer to support all of them. They have talked previously about doing some sort of frame skipping for entities in the background of a client, so that the client might not receive updates every server tick but skip a tick or two, so that the foreground entities can have priority.
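A toy version of that branch-authority idea: a containment tree where a server owns a subtree, and re-parenting a node (a ship landing, say) moves its whole branch under a new authority. The tree contents and server names are invented for illustration.

```python
# A containment tree (child -> parent) with per-subtree server authority.
TREE = {
    "hurston": "stanton", "lorville": "hurston",
    "ship_7": "stanton", "player_1": "ship_7", "cargo_1": "ship_7",
}
AUTHORITY = {"stanton": "server-A", "ship_7": "server-B"}  # subtree owners

def owner(entity):
    while entity not in AUTHORITY:    # walk up until a server claims a node
        entity = TREE[entity]
    return AUTHORITY[entity]

print(owner("cargo_1"))               # server-B (inside ship_7's branch)
TREE["ship_7"] = "lorville"           # the ship lands at Lorville...
del AUTHORITY["ship_7"]               # ...and hands its authority back
print(owner("cargo_1"))               # server-A now simulates the whole branch
```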
Thanks for the incredibly insightful comment! Yes to all of this. The frame skipping bit interests me. UE5 seems to be making a lot of progress in this space and the benefits are obvious. Look forward to seeing where CIG goes with this, but this particular area is well outside my area of expertise. I'm just an enjoyer of such engineering marvels. : ]
There is indeed a difference in the choice of spatial partitioning data structure/algorithm used. Various games used different ones: SC with an AABB tree, Dual Universe with an octree, SpatialOS with a grid, AoC with a grid + quadtree subpartitioning.
An AABB tree does seem to be the best choice for a space MMO, but I am unsure if that will set the dynamic meshing implementation apart enough. It already seems as if the main concepts are converging across all dynamic implementations so far: replication across game servers, an authority system for distributed partial simulation, maybe server(s) in the middle to facilitate all that. The way the game world is split may differ, but I am not sure how much impact that has in the end.
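For the grid + quadtree flavor specifically, here's a toy subdivision that splits a region whenever its player count passes a threshold, so dense areas end up with more, smaller partitions. The threshold and coordinates are arbitrary.

```python
# Recursively split a square region into quadrants wherever the player
# count exceeds LIMIT; sparse areas stay as single large leaves.
LIMIT = 2  # max players per leaf before it splits

def build(players, x, y, size, depth=0):
    if not players:
        return
    if len(players) <= LIMIT or depth > 8:   # depth guard stops runaway splits
        print("  " * depth + f"leaf at ({x}, {y}), size {size}: {len(players)} players")
        return
    half = size / 2
    for qx in (x, x + half):
        for qy in (y, y + half):
            inside = [(px, py) for px, py in players
                      if qx <= px < qx + half and qy <= py < qy + half]
            build(inside, qx, qy, half, depth + 1)

# everyone crowded into one corner forces deeper subdivision there
build([(1, 1), (2, 2), (3, 1), (90, 90)], 0, 0, 100)
```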
I have been trying to understand exactly what is different with SC (from a non-software engineer’s understanding level) since I discovered the game last year. This video did a great job at explaining the history and helping me understand some of these more abstract concepts. Really good video! Thanks!
Love the cohesive video!
Thank you!
@@grolo-af you're most welcome good sire
Just as a note, the architecture shown for WoW was pioneered (as far as I know) by Anarchy Online in 2001, whereby they had independent machines running areas. AO didn't have the 'clever' hiding of limitations; they just had sparkling walls that would cause a transition, with no ability to see what was going on on the 'other side'. Some areas in AO were also instanced, like in GW2 (or dungeons in WoW). As a comment on Star Citizen: you 'have to resolve state somewhere', which puts limits on how much this can scale.
Appreciate the note! Never played AO. Indeed it sounds like a big stepping stone.
@@grolo-af It was a hot mess at launch and never reached much popularity, but it broke some new ground. It was common enough that the most popular areas would get overwhelmed and the server for a city (Tir for the Clans, Athens for the Omnitek, or Newland for neutrals) would go down. The other zones would all stay up, as would the chat server, but if you tried to zone into the crashed city you would go offline until that server was restored. Similarly, if you got a mission, whether personal or team, and entered an area to do that mission, it would be an instanced zone of your own/your team's own. Raid content tended to be a shared zone, so anyone going into the raid would be on a shared instance regardless of teaming or faction; some raids like Tarrasque allowed PvP in the zone too.
Cross-server interactions are super interesting! I think they are the hardest part of this to get right. I'm a game dev myself, and I don't really think static meshing with no cross-border communication is too difficult. But I have difficulty wrapping my head around how I would approach server-to-server communication, because that complicates the network stack so much.
100% agreed! I don't work in gaming, but I work on very large platforms, and so I can reason about the challenges they're facing... and there is nothing easy about it. Especially when dynamic partitioning must occur and you have to (in real time) split a 300 player server into two 150 player servers... without anyone noticing? Ha! When I see it in action I might shed a tear too! ;)
Thank you sir grateful for your insight
I feel like this gets missed in many discussions with people who criticize or only lightly follow some of these games. Your description gives a great explanation of the differences that I had a hard time articulating before.
Appreciate this!
I would be happy to hear about exactly how SC server meshing works on the backend
Awesome, good to know, thank you!
Really interested in a deep dive into the SC replication layer and the potential bottlenecks that could come with this architecture vs a meshed solution like what Ashes of Creation is doing
I like how the replication layer is solving the reads for every client, but I'm wondering how much they can scale this part
Thanks for the vote! Definitely a lot of interest in this.
I would love to hear more specifics and technical speak regarding how two similar results (seeing players across server lines for example) can be tackled in different ways. Maybe a more direct and in-depth comparison of SC and Ashes. Great video though. Thanks.
You're not alone, so I will try to put something out going deeper on those subjects soon. Thanks for the vote & encouragement!
Benoit did an interview on the future of meshing, and he seemed really confident that dynamic meshing is not going to be as big a hurdle as the windup for standard meshing
Yeah, I've heard some rumblings that some felt static was an unneeded step. Sounded like some wanted to go straight to dynamic.
@@grolo-af I think 3.18 scared the higher ups, and they just didn't want to risk another issue in that scale
@@itsdibbles understandable!
@@grolo-af agreed
@@grolo-af Honestly I get the same impression. If you think about the problem of setting it up to be static vs setting it up to dynamically allocate zones, it's a bit of a different direction you have to go in, one that you are just going to throw out (static meshing rules) once you go the other way (dynamic system).
We found one of the server transition lines. We had a soft-deathed C2 on one server, and a Reclaimer and Vulture on the other side of the line, and we were able to salvage the C2 from across the server transition line.
Absolutely incredible. Very exciting. Thanks for dropping in to let us know!!
As a retired back end dev I like to keep up with game server tech. You verified pretty much everything I have been able to find so far.
Sadly, I have not been able to find any significant information on Planetside 2's and Throne and Liberty's back ends. The reason I bring these up is that both of these games might also be using some sort of meshing tech. The one thing I have heard multiple times is that Planetside cheats a bit by using way too much client authority for that type of game.
Another interesting topic would be exactly how the servers communicate with each other. I can't help but think CIG and AoC are using some sort of websocket tech for the real-time replication. CIG has been working on some sort of queueing system as well, which is kind of concerning, because queues can be a PITA to work with so late in a project.
Someone else mentioned Planet Side 2 which I touch on in this comment here: th-cam.com/video/B8zkYsqLk7g/w-d-xo.html&lc=Ugwb4MZ0V80g9ivuqA54AaABAg.A9sRK0aurjbA9sYZyT-DL7
I know nothing about T&L.
As for communication protocols - I could speculate, but that's all it would be, not having worked at these places. I've not found details on that topic published anywhere.
As far as queuing... there are pros and cons to that one too. A queue can reduce fragility by allowing a system to absorb load spikes better; however, it can introduce latency too. And a real-time system can only queue so much before the experience breaks.
Great comment! Appreciate the discussion! Hope you're enjoying retirement! :D I'll join ya soon enough.
The queue system in question is "NMQ (Network Message Queue)", and it feeds actions made by entities/clients to the DGSs and, by proxy, the Replication Layer (Hybrid).
They recently did a soft rework of the system to prevent it getting jammed up due to the increased load from higher player counts. This new version is called "RMQ (Replication/Replacement Message Queue)".
th-cam.com/video/I0AdNFF286Y/w-d-xo.html
I believe this system - and maybe the wider system too - operates primarily using UDP sockets. Take that with a grain of salt; I'm going off memory with this one.
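Since NMQ/RMQ internals aren't public, here is only a generic sketch of a bounded replication message queue: producers enqueue entity updates, a drainer applies them in order with a per-tick budget, and a full queue is the "jammed up" failure mode mentioned above. Everything here is an assumption for illustration.

```python
# Generic bounded message queue between producers (clients/servers) and a
# replication layer; not CIG's NMQ/RMQ, whose internals are not public.
from collections import deque

class MessageQueue:
    def __init__(self, capacity):
        self.q = deque()
        self.capacity = capacity
        self.dropped = 0

    def enqueue(self, msg):
        if len(self.q) >= self.capacity:
            self.dropped += 1        # or: block the sender, adding latency
            return False
        self.q.append(msg)
        return True

    def drain(self, budget):
        """Apply up to `budget` messages per tick to the replication state."""
        out = []
        while self.q and budget > 0:
            out.append(self.q.popleft())
            budget -= 1
        return out

mq = MessageQueue(capacity=3)
for i in range(5):
    mq.enqueue(("move", i))
print(mq.drain(budget=2), "| dropped:", mq.dropped)  # 2 applied, 2 dropped
```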
What I keep seeing missing from most of these server meshing discussions around Star Citizen is the end goal of Dynamic Entity Server Authority/Meshing.
Essentially, you can have a server simulating an area like the New Babbage commons. Then you can have 700 players there, with authority over them held by different servers, assigned not by physical (in-game geographic) location, but just by entity.
An analogy I use to explain is that The Servers are all CPU cores. The clients are all IO devices. The Replication Layer is the RAM and the "world server" is the SSD/HDD and BIOS that tells which server is authoritative over what entities.
Great comment, thanks so much for taking the time to leave it!
I get that, and that's extremely cool, but I feel it's slightly different than what was discussed here. This video focused on population management and managing the interactions between that population. I _could_ be wrong, but my understanding is that if you have two players on a moon and they're assigned to different authorities, and those two players punch one another, it's not those entity authorities that resolve that. If it _is_, then please correct me... and also explain how they decide which one of them is more correct in what's about to transpire, i.e. which one "wins"? If it is _not_ the entity authority that decides, but rather the game server responsible for the environment within which they reside... then THAT is what is pertinent to this particular conversation and why I focused on it.
I agree with you that entity authority is a whole other thing that deserves its own discussion. It relates more closely to persistence, PES, etc. Again, something no other game (to my knowledge) is tackling.
@@grolo-af The entity authority I was speaking of is not each player as an entity, but more of an entity within the locations.
If you watch the r_displayinfo3 readout as you traverse Star Citizen, you'll see which entity you're a part of. When you wake up in Lorville, it says you're captured in the entity of the habs, which is in the entity of Lorville, in the entity of Hurston, etc... Then as you enter the elevator, you're encapsulated within that entity. In my example of the commons in New Babbage, each room/shop is an entity, I believe.
What I meant by not being geographically defined is that it's not "physical" boundaries that servers are going to be assigned; it's going to be based on which entity (location-based entity) needs the additional processing power to simulate within its confines.
I hope I'm making sense....
Very good explanation
From what I understand, SC's reason for sharding is latency issues from geographical distance
Awesome vid though, well explained
Geographic latency is a tough one! I'm surprised Ashes hasn't said it'll have geographic shards for this reason. Or maybe they have and I missed it.
@@grolo-af You can't beat the speed of light..
The issue with Star Citizen is that cross-shard replication is very consequential. Let's say you're defending your base with your org. If you reach enough player density to force the system to shard, your base will be replicated but be invulnerable in the replicas. This effectively denies your enemies a chance to attack. It may not be practical for small outposts, but for very valuable targets like space stations, you can bet that large organizations will keep 24/7 activity. If CIG decides the replicas can be modified too and works out the state reconciliation issues, then you may see your base suddenly blow up for no apparent reason, because a massive party too big for your shard intentionally engineered its shard placement to avoid opposition.
I don't think that's how it'll play out, as the game servers running the combat calculations are all dealing with a "replicant" copy of the space station. No game server owns the space station. Even the game server that is governing the result of fire upon it.
I do agree that there are limits, even with this system, and density can only go so far before state transfer bandwidth demands become too much.
Appreciate the discussion!!
Good video! The reason for regional shards in Star Citizen is definitely not a lack of ambition, but a practical one. Low latency matters more in Star Citizen due to the twitch-based nature of first-person games. When playing on servers across the world, the speed of light becomes an issue. You cannot go faster than the speed of light; it's our universal speed limit. Due to the size of the Earth, traveling across it adds a theoretical minimum one-way latency of roughly 100 milliseconds through fiber. But we do not live in a theoretical world; round trip, we are talking about real-world delays approaching 500ms. That's an eternity for a twitch-based FPS game where people complain that a client delay of 16ms (60fps) is too much. You don't want to give North Americans an advantage over the rest of the world. That's unfair. The most practical solution is regional shards: the Americas, Europe, and Asia. Just 3.
Cultural differences, language barriers, and time zones already have people isolating themselves. I am not going to stay up until 3am just to play with friends in the UK. And I am probably not going to play with Chinese or Japanese players because I don't speak their languages. CIG has stated their goal is one shard. But I don't think so. I don't think it's possible to have a fair and good experience for everyone with the speed of light limiting what we can do. But I am not a network guy. 3 shards is a nice, practical compromise IMO.
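The speed-of-light numbers are easy to sanity-check: one-way time to cross half the Earth, in vacuum and in optical fiber (where light travels roughly a third slower). Real routes add hops, queuing, and detours on top.

```python
# One-way latency floor for a signal crossing half the Earth.
EARTH_HALF_CIRCUMFERENCE_KM = 20_037
C_VACUUM_KM_S = 299_792
C_FIBER_KM_S = C_VACUUM_KM_S / 1.47   # refractive index of fiber ~1.47

for name, speed in (("vacuum", C_VACUUM_KM_S), ("fiber", C_FIBER_KM_S)):
    ms = EARTH_HALF_CIRCUMFERENCE_KM / speed * 1000
    print(f"one-way, {name}: {ms:.0f} ms")   # ~67 ms vacuum, ~98 ms fiber
```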
Totally agreed. I spoke on this without fully informing myself or giving it a ton of thought... I extrapolated the "one universe" I've heard in passing and proceeded with unquestioned assumptions. See where that got me?! : ] I also got caught up in the comparison with EVE Online, which CAN pull this off because latency is much less of a concern there (due to how input is received and processed in that game).
But yeah, you're spot on 100%. Even that there's other isolating characteristics that support regionalization, beyond just latency. Thanks so much for the insights and comment. Really really appreciate it!
@@grolo-af Been loving these talks. It's great to hear someone break down the technical details. It has frustrated me to see people downplaying server meshing as though it was no big deal. Especially the possible dynamic server meshing. I loved seeing Star Citizen with 2,000 players. But one thing that has got me wondering is what are the biggest bottlenecks? I get hundreds in one server. But what about evenly distributed? I would guess the replication layer would be the bottleneck? Thousands of people sending data, querying the graph database, etc. What are the theoretical and practical limits? Wouldn't dynamic server meshing still put a strain on the replication layer ...server? Just a suggestion for a future video.
I must admit, even though I basically understand the concepts of server meshing, seeing it demonstrated live last year was still like seeing magic. But I understood why that map was L shaped. I bet a limitation is if you threw a ball from server 1 to server 3, it would transition from server 1 to 2 just fine. But on server 1, you'd probably see the ball disappear transitioning from server 2 to server 3. Adding the L shaped map masks that possible issue. Still amazing to see though!
The problem you're describing shouldn't be the case once the replication layer is functioning as advertised. But I'm assuming not all advertised features are in place yet : ) And I'm sure you're right - the L shape kept that from being obvious.
Appreciate the thoughts on future content! Definitely in the realm of what I'm thinking about.
Thanks. Much appreciated content.
You bet!
Please deep dive into this and do an in-depth video on dynamic server meshing!
I'd like to toss my hat in to mention that 'Mortal Online 2' has probably the best 'meshing' system I've experienced to date. I don't imagine the server performance could handle the recent expansion to server meshing tests (the 1k test was HORRIBLE), but for what it does, and the community it serves, it feels very, very good.
And that, at the end of the day, is all that matters. If a game FEELS good to play, then the engineering team did its job well.
I used to play Second Life quite a bit, where the world was split into 256m x 256m "sims". As you approached a sim boundary (within 16m, if I remember right), your client would establish a connection to the neighboring sim (as a "secondary connection"), and as you crossed a sim boundary you would swap your primary and secondary connections. Sim crossings could be relatively rough depending on how much data needed to be exchanged and how quickly you were moving.
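A sketch of that handoff in one dimension, using the 256m sim size and 16m threshold from the comment; the rest is simplified.

```python
# Primary/secondary sim connections along a 1-D row of sims: open a
# secondary near the edge, keep the old sim briefly after crossing.
SIM_SIZE, THRESHOLD = 256, 16

def connections(x):
    sim = int(x // SIM_SIZE)
    offset = x - sim * SIM_SIZE
    conns = {"primary": sim}
    if offset > SIM_SIZE - THRESHOLD:
        conns["secondary"] = sim + 1      # approaching the east edge
    elif offset < THRESHOLD and sim > 0:
        conns["secondary"] = sim - 1      # just crossed; keep the old sim
    return conns

for x in (100, 245, 258):                 # mid-sim, near edge, just crossed
    print(x, "->", connections(x))
```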
Neat! Appreciate the info! Pretty impressive given Second Life's release date (a year before WoW)!
I would love an explanation of the replication layer. Especially since, if I understood it right, AoC and SC have different approaches to it
Indeed they do. Appreciate the comment!
Thank you for this video! I enjoyed it.
Can you revisit the Star Citizen server meshing implementation once SC 4.0 drops and go deeper into why & how server meshing works?
Definitely going to go deeper into many of these topics. Appreciate the vote and encouragement!
@@grolo-af Thank you!
Facetious question, great explanation
All the memes of "sniping someone in a different server" are about to become real in SC, I love it.
From last year's CitizenCon and some other systems guys: basically the way Star Citizen's dynamic server meshing works is each system and wormhole are their own servers, and the planets are shards within those servers that overlay into a seamless interaction. Meaning as the population of a system grows or decreases, resources are allocated to other systems that need them.
An example would be: the starting system, Stanton, is a server unto itself; the wormhole between Stanton and Pyro is also a server that handles the transfer of the player between systems without latency.
In, let's say, Stanton, the planet microTech is a shard along with its two moons. As more people flood onto microTech (the planet), into New Babbage, the shard starts to break up into sub-shards to handle load while pulling resources from other parts of the server. Eventually, let's say 20,000 players are on microTech; the other systems may be just shards while the server power is pushed to maintain the stability of the players on that planet. As soon as the demand dies down, those resources move back to their original space or, in some cases, go into standby.
Aye; it's exactly what will be required to support the type of behavior we'll see in the sandbox as conflict erupts, or players hold huge events... think Daymar racing on a grand scale, or festivals... people tend to congregate in droves and it can happen quickly.
@@grolo-af CIG has gone on record that the group that hosts the Daymar Grand Prix is now incorporated into lore. I think some of the event holders are working to create a seasonal thing. So we'll actually have dedicated racing teams in the 'verse.
@@grygaming5519 no its more specfic than that. a server per system is not good enough long term. that might be how it runs for now, that thats a temporary solution.. so when you say "Basically the way Star Citizens dynamic server meshing works"... you aren't accurate at all.
@@pylotlight As of right now, yes. I did forget the scale part of it all. The whole system CIG is building is for a near-limitless ability to scale what they need, when they need it. I also meant that as of right now we have only a portion of it working; once 1.0 comes out it's going to act differently, as the game itself will be upgraded to its full potential.
Hey, thanks for this awesome video. Could you do a more in-depth explanation of SC's dynamic server meshing? I think a lot of people would also love a deep analysis of that.
Really appreciate the encouragement. Definitely would like to! I've got some ideas percolating through the mind on it. Stay tuned.
Please do an in-depth comparison or analysis. This is something much of the space-game community doesn't understand, and they need to!
very cool vid
It's worth considering which of these games allows you to aim and time an attack manually like a traditional FPS, versus which depend on designated targets and skills which automatically attack on a schedule, like a traditional RPG. The second is MUCH easier for a server to handle than the first.
Eve is very much in the RPG camp. You can't aim the guns manually, and you can't time a shot manually either. You designate a target ship, and activate a specific slot, which then cycles attacking with those weapons. Hit and miss is down to all things the server can figure out -- and a dose of RNG. Player aim and accuracy does not come into it.
Elite wanted manual aiming and firing. Hence why it's designating one of the clients as a peer-to-peer master every time there's a close encounter. The limits on what can be in each battle are ... coincidentally about the kind of limits you'd expect on an FPS game from the same era where one of the clients was also running the server at the same time.
You're exactly right. Someone joined the Discord server to discuss this and I noted the same, reposting here:
If you think about something like WoW, the player says "start attacking" and the server starts computing a series of attacks with no further input from the client.
Contrast that with something like New World's action combat, in which the player is swinging, dodging, aborting swings, blocking... there's way more the server needs to react to. Same with Pax Dei. All these games place far more demands on the server. Ashes is interesting in that it's going with a hybrid model, not true action combat... a choice that will likely allow them higher player counts per server.
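To put toy numbers on that difference: assume a 2-second swing timer for the tab-target case and ~20 inputs per second for action combat. Both figures are invented for illustration, not measured from any of these games:

```python
PLAYERS = 1000
FIGHT_SECONDS = 60

# Tab-target: one "start attacking" message, then the server drives the
# whole attack cycle itself with zero further input from the client.
tab_target_msgs = PLAYERS * 1  # ignoring occasional retarget messages

# Action combat: every swing, dodge, block, and cancel is a client message
# the server must validate against position, timing, and animation state.
INPUTS_PER_SEC = 20
action_msgs = PLAYERS * INPUTS_PER_SEC * FIGHT_SECONDS

print(f"tab-target inbound messages:    {tab_target_msgs:,}")  # 1,000
print(f"action-combat inbound messages: {action_msgs:,}")      # 1,200,000
```

Even in this crude model, the action-combat server is fielding three orders of magnitude more inbound messages it has to validate and simulate.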
Great / insightful comment. Thank you!!
Please go in depth on CIG's version of server meshing, and explain it in layman's terms.
Thanks for the vote! Enough have requested this I'll see what I can do : )
On Star Citizen: if you are able, would you please talk about in-game examples of how CIG manages the servers and how they want to fix these issues, and maybe explain in a little more detail how persistence in the universe would work with shards.
I loved that video, thanks!
This is a really good idea; especially connecting in-game things to the architecture. You’ve given me something to noodle which I will! Thank you!!
@@grolo-af Glad you like the idea! I'm far from a network engineer, so having in-game examples of server behaviour would help me understand much better. I just saw your other video comparing AoC and SC and I loved the in-game examples you gave in it!
Currently, I cannot play SC in the PU. The error code and behaviour I see lead me to think my character is stuck in a "live" and "offline" state at the same time. I still don't know how I managed to do this, but I hope understanding the backend a little more than zero (my current state) will help me replicate the issue and flag it properly on the SC Issue Council (IC).
Really interesting stuff, I'm a new fan!
Really really appreciate that! Glad to have ya here! : ) Thinking up some new stuff incorporating some of these ideas! Gonna require me to hone my editing skills a little more which I need to anyway...
I wonder how Planetside 2 does it (i.e., which of these methods it's using). It's still the only FPS with ~900 players on the same map. Very interesting video! And very well explained. I would like to hear more in-depth analysis, if you can keep it as "simple" to understand as this one.
Thanks for the encouragement and vote! Quite a few people expressed that, so I'm going to try and do something along those lines soon.
As for Planetside 2, I talked about that a 'lil in this comment: th-cam.com/video/B8zkYsqLk7g/w-d-xo.html&lc=Ugwb4MZ0V80g9ivuqA54AaABAg.A9sRK0aurjbA9sYZyT-DL7
This was great! Very helpful. Thank you.
I wanted to do something like this as well but I would have to do more research for some of these games and how they worked. So I never did 😅
I looked at Second Life, and they seem to have had the "looking into adjacent servers" approach already. It was a very basic grid where each server connects to its neighbor, and loading happens when a player crosses boundaries (which could lead to some stalling on those borders). So it didn't yet have a sophisticated authority system built in like some of the other, later solutions.
If I understand it correctly, Elite would be categorized as peer-to-peer, correct? I am curious how they prevent cheating, then. Does anyone know? I know p2p usually has the participants validate each other and reach consensus that way. But if they have a player as a host for authority (I assume), then that may not be the case.
It seems WoW's Layers within Shards are GW2's Shards within Megaservers. Would that be a fair assessment? Also, in the SC Server Meshing Q&A they mention "Layers" as a contingency for highly populated areas/shards. I am not entirely sure what they meant by that, though. Do they mean having multiple copies of a specific area (making it an instance), or multiple copies of the whole universe within a shard, where a matchmaking service can move you between them?
Besides, we now know that there will be areas that are instanced and (semi-)separated from the rest of the universe/shard, such as hangars, apartments, and the levels underneath ArcCorp. Maybe the planned in-game videogames could fall under that category as well, since you are supposed to enter a simpod and join game lobbies. I do think that's a good addition. It allows for more contained and curated gameplay without the chaos of a potentially unruly open world.
23:05 Is that cool though? I always felt like that's cheating :D It's like playing in slow motion. It's a cool implementation to have every client and the server stay in perfect sync, don't get me wrong. But as a player, I value my real-time gameplay ;) I guess it works for Eve's specific type of gameplay, so I can't complain. It is clearly working for them, and it's one of a kind.
30:38 It is interesting to me that WoW, Pax Dei, and many other MMOs call the instancing of areas of their hub world "sharding", while SC, like Ultima Online, calls each whole game world/universe a shard. This makes discussions like these a lot more confusing: different games using the same terminology for different concepts. What a Realm is for WoW is a shard for Star Citizen.
I also find it interesting how each dynamic solution uses a different way to split its game world: SpatialOS just a basic 2D grid, AoC a 2D grid with a quadtree, Dual Universe an octree, and Star Citizen its custom AABB tree (ZoneSystem), with plans to use custom groups of players and game objects independent of the AABB tree (so some form of spatial clustering algorithm). Great that you mentioned the nested nature of SC's AABB tree in that regard!
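For anyone unfamiliar, here's a minimal quadtree sketch of the shared idea behind all of these: split a cell once it gets too crowded. The capacity number is invented, and real engines track far more than player positions:

```python
CAPACITY = 50  # players a single cell (think: one game server) can own

class QuadNode:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size
        self.players = []
        self.children = None  # four sub-cells once split

    def insert(self, px, py):
        if self.children is not None:
            self._child_for(px, py).insert(px, py)
            return
        self.players.append((px, py))
        if len(self.players) > CAPACITY:
            self._split()

    def _split(self):
        # Cell is over capacity: hand each quadrant to its own node,
        # i.e. spin up authority over a smaller area.
        h = self.size / 2
        self.children = [QuadNode(self.x,     self.y,     h),
                         QuadNode(self.x + h, self.y,     h),
                         QuadNode(self.x,     self.y + h, h),
                         QuadNode(self.x + h, self.y + h, h)]
        for px, py in self.players:
            self._child_for(px, py).insert(px, py)
        self.players = []

    def _child_for(self, px, py):
        h = self.size / 2
        i = (1 if px >= self.x + h else 0) + (2 if py >= self.y + h else 0)
        return self.children[i]
```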
I do think most of these solutions are converging on the same design. SpatialOS seems to have been the de facto leader here, although it is odd that nobody released a game with it in the end :S On that note, I still find it odd that AoC didn't go for a server between clients and game servers. They now have to route data to neighbouring game servers so that the Interest Management system on each one can decide what to send to its clients. That's a lot of unnecessary data and load from other game servers. Even New World seems to have had a "Replication Layer" service between clients and servers to take that load in their static server meshing implementation. And Star Citizen's first internal version, tested in 2020, was also one where the game servers connect directly, but they stated that it wouldn't scale well for them, so they opted for the service in the middle. I don't have much hope for AoC's version, even though they proclaim otherwise, but we will see. I personally believe they were just unable to move Unreal's ReplicationGraph (their Interest Management solution) out of the game server and onto its own service (in time for Alpha 2). Still found it bad manners how they called out other solutions that do use a middleman service.
Anyway, I hope this was insightful. Let me know what you think.
Very insightful post; THANK YOU for taking the time to drop it here!! A lot of good thoughts, points, and questions. I'll respond to a few...
> Elite would categorize as peer-to-peer, correct? I am curious how do they prevent cheating then?
Correct. The devs play a cat and mouse game, and patch to hamstring cheaters as they're caught. It will always be a con of this solution. All solutions have cons.
> It seems WoW Layers within Shards are GW2 Shards within Megaservers. Would that be a fair assessment?
Yep.
> Also, in the SC Server Meshing Q&A they mention "Layers" as a contingency for highly populated areas/shards. I am not entirely sure what they meant by that though. Do they mean having multiple copies of a specific area
Yes, but my understanding on this bit is that the contingency is not due to an inability to scale the mesh, but rather the in-game zones. I find this fascinating, actually. It's not that they think they won't be able to handle ten thousand players in New Babbage, for example, but rather that 10,000 players would completely clog up the spaceport. There wouldn't be enough ASOP terminals, or hangars, or trains, etc. In other words, the space itself has limits of scale. It'd be like trying to fit all the people in Heathrow airport into your local small-town community airport. Imagine the security line!
So they'd have to physically redesign many areas to accommodate much larger populations. That will take time. And so the contingency plan is layering (another word for sharding). But since they call the top level fork a shard, they're calling a local fork a layer.
> Is that cool though? I always felt like thats cheating :D It's like playing in slow motion.
Cheating? Maybe :) I think it's a cool way of integrating a technical limitation into gameplay. Like you say, it works for EVE explicitly. It likely wouldn't work for any other MMO.
> I still find it odd that AoC didnt go for a server between clients and game servers.
This is a huge difference between AoC and SC! One I also find very interesting. I can't wait to see how it all plays out. This is why I tend to think of SC's approach as "more elegant". HOWEVER, I have seen many elegant solutions lose to less elegant ones in the real world. So it is no guarantee. Think of micro kernels vs monolithic kernels in operating systems. Most would argue micro kernels are more elegant, more secure, more maintainable, etc etc...
... and yet Linux won the internet.
I think Riot's work with LoL servers is interesting. They essentially built their own high-speed data network and use that to connect players. Like their own CDN for saltiness. I'm interested in how they're planning to leverage that server network for their fighting game 2XKO. Fighting games have their own cool network tech, rollback (aka GGPO), which the creators open-sourced.
Do you think rollback and bespoke server networks have a place in MMOs? For example, in Star Citizen, the ship accelerations are known quantities. Seems to me like rollback would help with latency in space battles.
Edit: in case anyone is not familiar, there's a Killer Instinct dev video on rollback that's really good.
I absolutely think Rollback netcode could benefit action combat MMOs. I don't think it would offer much to other types of combat resolution, such as the hybrid tab-target implementation that Ashes of Creation is going with. Input latency is far less of a concern with that type of a system. But with Star Citizen's action combat, where precision matters, it could have a positive impact on gameplay from my POV. Now keep in mind, that while I'm a seasoned architect, I am _not_ a game developer, and there could be good reasons it's not applicable to SC that I'm not aware of.
As for bespoke server networks... 100% I think the more you can tailor the hardware, the software, and the system design to a particular task, the better the system will be at that task. One place I would expect this to happen (eventually) is in the replication layer.
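For anyone curious what rollback actually does, here's the GGPO idea in miniature, assuming a deterministic simulation step and a fixed tick rate. This is a generic sketch, not CIG's (or anyone's) actual netcode:

```python
import copy

def step(state, inputs):
    # Deterministic simulation: same state + same inputs => same result.
    return {"x": state["x"] + sum(inputs)}

class Rollback:
    def __init__(self, initial_state):
        self.history = [copy.deepcopy(initial_state)]  # one snapshot per frame
        self.inputs = []                               # [local, remote] per frame

    def predict_frame(self, local_input, predicted_remote):
        # Run ahead immediately, guessing the remote player's input.
        self.inputs.append([local_input, predicted_remote])
        self.history.append(step(self.history[-1], self.inputs[-1]))

    def confirm_remote(self, frame, actual_remote):
        if self.inputs[frame][1] == actual_remote:
            return  # guess was right; nothing to redo
        # Misprediction: rewind to that frame and re-simulate forward.
        self.inputs[frame][1] = actual_remote
        for f in range(frame, len(self.inputs)):
            self.history[f + 1] = step(self.history[f], self.inputs[f])
```

The appeal for known-quantity motion like ship accelerations is that predictions are usually right, so the rewind path stays rare.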
Great explanation😊
Thank you!
very interesting!!!!!
Could you also discuss Planetside 2's server meshing tech? I think it's going to be most congruous with Star Citizen's goals. To my understanding, they had several megaservers that functioned similarly to WoW's (Briggs Lag Wizards represent!!), and, similar to WoW, those servers contained multiple servers for each location within them. The key was that the subservers were able to transmit data much faster, making firefights across server boundaries possible (if noticeably laggier).
Appreciate the vote! You're not the only one so I'll certainly see what I can do. Really appreciate the comment and view!
Star Citizen also had to solve the issue of 64-bit server meshing, which barely any of the others do. Star Citizen is also the only game where everything is a fully physical, persistent entity that stays forever rather than despawning at some point, so it has to do billions of entity and physics calculations each frame.
Absolutely. I didn't even get into the persistence goals of SC, I decided to focus this video primarily on population management. The persistent entity stuff is quite incredible entirely on its own. PES is critical to making this possible though so that it does NOT have to process billions of entities on each frame. The primary benefit of PES is that entities are only streamed in (and paid attention to) when there's players near them. When players vacate an area, the entities are "streamed out" and no longer impact the performance of the server. Hopefully it's clear in the context of this video why that's so important.
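A minimal sketch of that stream-in/stream-out idea, with the radius, data layout, and names all invented (this is the concept as I understand it, not CIG's code):

```python
STREAM_RADIUS = 500.0  # entities near any player get simulated

def tick(persistent_store, players, live_entities):
    """persistent_store: {entity_id: (x, y)}; players: [(x, y)]; live_entities: set"""
    for eid, (ex, ey) in persistent_store.items():
        near = any((ex - px) ** 2 + (ey - py) ** 2 < STREAM_RADIUS ** 2
                   for px, py in players)
        if near:
            live_entities.add(eid)        # stream in: gets physics/AI this frame
        else:
            live_entities.discard(eid)    # stream out: zero per-frame cost
    # Only live_entities get simulated this frame, not the whole database.
    return live_entities
```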
Appreciate the comment! Great one.
@@grolo-af I have been following CIG's implementation of their graph DB and all the fun it has been causing them. I am not sure how they can really test it, though, because (at least in my limited experience) every time I access the inventory system in game you can almost hear the request slapping their raw DBMS, with no caching.
I also wonder if that graph DB is going to do some of the forbidden multi-master data writes! I really do think it is an interesting system, and it seems familiar to the hierarchical DBMS I worked with. There are some very interesting minefields with these types of systems and how they index.
CIG really had to throttle back the persistence because of all this and now only a few high priority items seem to stay very long. On the good side though is they can keep optimizing it and adding more persistence over time.
@@grolo-af Yeah, this depth is what makes SC somewhat unique in its approach to server meshing. OCS/SOCS -> PES, which together enable the decoupling of game/world state from game logic/simulation, which in turn makes their entity-zone-based approach to server meshing possible. Without SOCS, their approach would be infeasibly heavy in load. Good high-level video on how these types of games have approached this and evolved, btw.
The servers won't have to calculate all entities (and same with NPCs) once they're not seen by players. They will 'vanish' into data points that only do anything once they reach certain checkpoints, like stations after a quantum travel, etc. Incidentally, this is exactly how Oblivion works, and all the other games on the Gamebryo engine (Skyrim, Starfield, Fallout 3/4/NV, etc.): things only load when a player can see them, and it's the same with all the entities in the universe. It works like that in the game now.
@@DeSinc That is only the case when no players are around, but with shards in the hundred-thousand range with dynamic server meshing, that is still billions of entities and NPCs, my friend.
You should definitely do a video on Star Citizen's cross-server interaction. I'm curious how Ashes of Creation is solving that, especially as that's what is actually novel.
Appreciate the vote!
The major difference is PROP persistence:
you drop an object in the world, it stays there;
you build something in the world, all will see it.
Other games do it at the theme-park level.
Absolutely. I didn't even get into entity persistence. I focused on population management. Very ambitious, and awesome.
In the example of Ashes or WoW or Pax Dei, why couldn't they use dynamic horizontal scaling (aka spinning up more servers) on the individual grids of the map to increase that grid's processing capacity, without the need of sharding/layering or sub-dividing that grid further?
In other words, if hundreds of players begin entering Barrens and now the individual server responsible for Barrens suddenly needs more processing power than the biggest individual server money can buy, can MMOs leverage horizontal scaling, load balancing and traffic distribution systems to solve this challenge?
I can imagine it would be complex to do it and also that parallelism, latency and synchronization would be challenges, but why specifically can't horizontal scaling be used instead of seemingly more complicated and innovative solutions?
Thanks
In this example, if you do not divide the Barrens up into smaller sections, then how do you designate which server a player is on? You could just do it by player name. So names starting with A-L are on server 1 and M-Z are on server 2. Okay, so now you have half the players on server 1 and the other half on server 2, all in the Barrens, spread all over the zone. How do they see one another? Each server can only see its own memory. So now you need a way for each server to tell the other server where every player is... and now you're back to the problem Ashes faces, except worse, because the area is larger.
Alternatively, you have the players on both servers NOT see each other. That's called sharding/phasing and is exactly what WoW does today. And what Pax Dei is doing (on a per tile basis), and etc.
@@grolo-af I'm probably just stupid, but why do you need to divide the Barrens into smaller sections? Why not have each server just handle the first X players who join the map, and so on and so forth? About memory: wouldn't it be possible to lower the number of players per server and reserve some memory to handle state sharing and updating? Although I'd bet doing that over the network isn't fast enough to be playable the way it would be if it were all on the same PCB; probably a huge bottleneck there. Thanks for the reply though!
@@baska- Well that's what I was getting at... if two servers have two groups of players, and they need to send all the state to each other... how is that different from just putting everything on one server? You end up with everything on both servers anyway. I know it _seems_ simple... but it's far from it.
Not stupid at all! Good questions! This stuff isn't obvious, especially if you don't live & breathe it, hehe
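To put some toy numbers on it: assume 1,000 players spread evenly over a 1km x 1km Barrens with 100m visibility. All figures are invented for illustration:

```python
PLAYERS = 1000
ZONE = 1000.0  # zone is ZONE x ZONE meters
VIS = 100.0    # players can see 100m around them

# Name-based split: nearby players land on random servers, so on average
# roughly every player's state must be mirrored to the other server anyway.
name_split_mirrored = PLAYERS

# Spatial split down the middle: only players within VIS of the seam
# need to be mirrored to the neighboring server.
seam_fraction = (2 * VIS) / ZONE
spatial_mirrored = int(PLAYERS * seam_fraction)

print(f"name split mirrors    ~{name_split_mirrored} players' state")  # ~1000
print(f"spatial split mirrors ~{spatial_mirrored} players' state")     # ~200
```

A spatial split only has to mirror the seam; a name split has to mirror more or less everything, which is the "you end up with everything on both servers" problem.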
The Star Citizen explanation is close, but it's not quite accurate; it's complicated, to say the least. The first implementation is actually semi-dynamic: there is more than one server running in Stanton and Pyro respectively. They also have some dynamic features for assigning servers in different ways to do some load balancing; Benoit explained it recently in an interview. There's a lot more to it, but it's quite crazy to see it working on the test servers today. The other thing about Star Citizen is that they are treating the servers as clients and utilizing a replication service to host everything, so if a server crashes, another one can spin up and replace it without a disconnect or loss of data.
Ya I didn't even get into failure modes and recovery. Also I wasn't trying to imply it was a Pyro server and a Stanton server at the launch of 4.0.... I have NO IDEA how they'll configure the static mesh. Twas just an example. Sorry for any confusion. And thanks a TON for dropping a comment about it! :highfive:
(On SC) What about the nodes, backend services, and the way they handle information? :D I've heard they're doing something similar to Facebook's way of handling databases and their data. Maybe I'll have the answer in the next videos? I'll find out soon :D
Glad to hear you're watching several! Happy to discuss more.
I can respond to "similar to Facebook's way of handling databases", as this hits very close to home for my own work. Facebook invented GraphQL, a technology used to represent arbitrary data sources via a graph, with a corresponding query language.
CIG has chosen to use a graph database for its entities. You'll hear it called the entity graph. This is likely why you've heard what you did.
I believe you got it very wrong about WW2OL (former player here; I studied game programming but went into normal software). WW2OL is actually more or less what you describe in Ashes of Creation (I'm not familiar with it). I believe WW2OL cells/sectors are first subdivided by terrain (I think they call these host cells), but then these subdivide according to player population & hardcoded equipment visibility, so it's dynamic. The limit of WW2OL's implementation in a very high population scenario (I think max player tracking is around ~128) is that when the max cell player count is reached, if you are, for example, an infantry player, you get priority for drawing/tracking tanks & infantry, while planes only draw within something like 150m. So in these scenarios planes will pop into existence in seconds, drop their bombs, and then disappear from your game. Infantry can pop up and disappear as close as ~30m in very extreme "1000"-player city battles. If you are a fighter or AA, then infantry get low priority while planes draw out to 3km or something like that.
Some of the issues this creates (created): if you were doing high-level bombing on a town, your priorities were AA, factories, and ships, but not infantry. Because the bombs were also cell-tracked and there was no infantry in your cell (you're a bomber), infantry could not be killed, nor could they hear or see the bombs you dropped, yet they would see the buildings blow up. This was like 3 years ago; it was fixed by making a custom server to track bombs and artillery shells. There were also issues a while back where, when handshaking cell to cell, sometimes a player would not be drawn but could still kill other players. The game also has an advanced ballistics and penetration model & uses a lot of client simulation to avoid race issues, but passes the calculation to the opponent (pseudo-p2p). So you could, for example, increase your packet loss by seeding; this would desync the predictor path in your game and make you levitate or do crazy rotations. You could then shoot at vulnerable spots on a tank, and that client-side shot simulation was passed to your opponent, so the tank blew up on both the server and his game, despite there being no direct path to such a spot.
WW2OL is probably the best case study for Star Citizen by far; I think Planetside also copied this. This was the most advanced FPS networking at the time, and I think it still is (provided SC succeeds). The guy that did this was contracted by Blizzard some years after & left the company.
And this brings me, finally, to one of the issues I don't see addressed by anyone with a software or networking background, which you briefly touched on. WW2OL's large, combined, realistic player battles are baked into the game's design. The infantry gameplay has some dragginess to it because it incorporates player position prediction heavily, I think 50ms to 100ms; this is present on other vehicles too, but because those have higher momentum it's not noticeable. In SC, as far as I know, they are just doing basic off-the-shelf interpolation, & I'm concerned the current fidelity might be impossible to achieve, if not performance-wise then maybe economically; running WW2OL isn't that cheap.
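For intuition, a toy version of that priority scheme: each client tracks at most ~128 remote entities (the cap is from my recollection above), and which ones win the slots depends on what you're playing. The weights are invented:

```python
TRACK_CAP = 128  # max remote entities one client tracks

# Per-viewer-class priority: infantry care about nearby infantry/tanks;
# fighters and AA care about planes at long range.
PRIORITY = {
    "infantry": {"infantry": 3, "tank": 3, "plane": 1},
    "fighter":  {"infantry": 1, "tank": 1, "plane": 3},
}

def tracked_set(viewer_class, candidates):
    """candidates: [(entity_class, distance_m)] -> the <=128 that get tracked."""
    def score(c):
        cls, dist = c
        # Higher priority class and shorter distance win the limited slots.
        return (-PRIORITY[viewer_class].get(cls, 0), dist)
    return sorted(candidates, key=score)[:TRACK_CAP]
```

Everything that loses the slots is exactly what "pops in and out of existence" in a packed battle.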
A lot of good insights here. You have studied WW2ol much more than I, and I appreciate you bringing the extra knowledge! You've given me some more things to dig into. Always easier to do so when you have some leads. It sounds like WW2ol has continued to add layers atop the original foundation to accommodate edge cases or new features as the game evolved? This can help pave the way for features more quickly (and with less risk!), but often at the cost of increased latency throughout the platform, when such things are layered on after-the-fact, rather than redesigning the original foundations.
I tagged some Devs to see if they add any comments to this video, unfortunately they remade the website & older articles got nuked.. 😅
But I think I played since 2006 on & off.
The basic load balancing architecture i think is mostly the same other than a server to track projectiles & bombs to detach from player visibility Cells/servers.
When it came to network packets: initially, they sent many smaller packets, and in my opinion that gave smoother infantry play, but you often had both players able to shoot each other. They eventually went with fewer but larger packets; in my view this made mutual kills rarer, but gave a bigger advantage to low-ping players & kinda screwed up a bit of the infantry predictor code.
From my understanding, SC more or less wanted to implement WW2OL, but instead of having the servers all in the same location, they were using the cloud to pick a server at a middle distance from all the connected players, so, like, West vs East Coast players would get assigned to a central US server.
But it was news to me when you said they are having one universe for the EU, another for Australia, etc. I wouldn't be surprised, though, given the current level of fidelity.
But my point is that you don't have to split the game by terrain & assign players; instead, split players by visibility and then assign terrain. This would mostly fix the missile/rocket analogy you used, other than the indirect-fire scenario I mentioned.
With all that CIG have talked about, all their dreams, by my calculation we should easily be able to have upwards of a million players in a single star system battling it out.
That's only 2,000-ish Bengals... so about 10-20 servers managing the ships (I would expect fewer, tbh), then about 7,000 managing all the players within... making for a million-player battle... with all 2,000 Bengals on screen...
What's your take on the max theoretical players in a single star system/battle?
I think it's going to take a _lot_ of work to get to those numbers. Is it possible? I believe so. But it would be a herculean development effort. The devil's in the details. A few things I'd list as challenges with reaching those numbers:
1. Clients - even if you manage to scale the backend to 1M players in a battle, you now have to figure out how to render it all on a single client machine. We've seen things like the Planetside 2 system drop how many things it would render, but even Planetside 2 only scaled to 2k players in a battle. You'll need some SERIOUSLY aggressive downsampling. Engines are getting better at this over time. I'm not sure how well StarEngine does with this today, but my guess is it would need a lot more work in this area.
2. Assigning more and more servers to an area of space will eventually lead to network congestion on the backend. I think the replication layer is a fantastic design; however, there will still be limits. Pumping all those state changes around, from millions of players and thousands of ships across thousands of game servers and hundreds/thousands of replication layer servers and millions of client PCs... you'll certainly be staring down some physics.
3. You'll have countless edge cases happening in abundance - what happens when the boundaries of 2 servers overlap (assuming two capital ships collide)? What about 5 overlapping all at once? How fast is the handoff between game servers? What happens when a player traverses like 15 game servers in 2 seconds because boundaries are getting so close?
To be clear; I think _all_ of this is solvable. I don't think it's solved yet. Will take a ton of work. But that's why they're funded. If players want to play in a universe "without limits" someone has to do some _very difficult_ work. There's no easy path to such a thing.
It sounds like Pax Dei is doing phasing like Black Desert Online and most of the Korean MMOs use?
Correct but on a per tile basis. So if you leave a busy tile and cross into a lower pop tile, everyone can see each other again.
So the question is: do you think CIG's goal of creating this new server meshing architecture with thousands of players is achievable?
I think it's possible and inevitable. There's no guarantee that CIG is the one to achieve it, but as of today, I believe they are in the best position to do so.
Talk about it more. Love to hear experts talk about this stuff in detail.
I appreciate the breakdown, but CIG has a long way to go with Star Citizen beyond the silver bullet of server meshing.
Aye; all three games in the future segment of the video do.
What about MetaGravity?
Indeed what about it!? : ]
Apparently your viewership skews young, or someone would have corrected you by now on Asheron's Call. Asheron's Call had dynamic load balancing across a single unified world. No shards at any level. Source: The book _Designing Virtual Worlds_ by Richard Bartle, page 38. The book was published in 2003, and is now available online under a Creative Commons license. It's easy to find with a quick Internet search. It's a good read, going into the history of MUDs all the way back to the '70s.
So there are two answers to the question of why has player capacity dropped. Fidelity is one answer. Inferior technology is the other answer. Dynamic load balancing is so hard that almost no games even attempt it. Asheron's Call is a very notable exception, especially since it was done so early in the history of MMOs.
Really appreciate you taking the time to correct me (although I can assure you my audience does not skew young :) ). I never played Asheron's Call, didn't do enough research, and made some assumptions based on the time period. I've done additional digging since your comment, and it's quite fascinating what they accomplished for their time. Will continue to educate myself. Also, I appreciate the book reference. I ordered a paper copy too!
This book is fantastic. I know what I'll be reading at night. Thanks again!!
Reading about this stuff is one thing... I'm curious, if you were someone that actually played the game... how did it perform in the real world? Did load balancing events disrupt play? Were they seamless? Were there situations in which players overwhelmed it?
What do you think kept AC from enjoying the success of EQ or UO? Marketing alone? The technology seemed superior.
Very interesting and informative. I wish the dude had put Planetside 2 in the list. It's probably relying a lot on the client side, but the tech for its time was like nothing else.
Planetside 2 did a lot of awesome stuff, but yes, it offloaded a lot of calculations (like did I hit?) to the client, and then verified a subset of those calculations on the server to catch cheating.
Continents were shards, so no cross-continent activity (which would not interfere w/ gameplay, so good choice). Supported like 1.5 to 2k players on a continent. A continent was further divided into hex zones; players were bucketed, and the server would process everything for that zone/bucket together. Things distant from that bucket would be ignored, lowering the load on the server by treating each region of conflict as distinct.
Some pretty novel approaches. By mixing this approach for minimizing the calcs it had to do per user and offloading some stuff to the client (not purely server authoritative) it scaled a server pretty far. Not infinitely so though. In an MMO, 2k players on a continent would not be a lot. But it seemingly served this game very well as I hear a lot of people talk about how great it was.
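A tiny sketch of that bucketing idea, using square cells for brevity where Planetside 2 used hexes (cell size and structure invented for illustration):

```python
from collections import defaultdict

CELL = 200.0  # meters per cell side

def bucket(players):
    """players: {player_id: (x, y)} -> {cell: [player_ids]}"""
    cells = defaultdict(list)
    for pid, (x, y) in players.items():
        cells[(int(x // CELL), int(y // CELL))].append(pid)
    return cells

def relevant_players(cells, cx, cy):
    # Everything outside the 3x3 neighborhood is ignored entirely, which is
    # what keeps per-player cost bounded even in a 2k-player continent.
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            out.extend(cells.get((cx + dx, cy + dy), []))
    return out
```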
If a game FEELS great, it IS GREAT. Perception is reality in these situations. Kudos to those devs!
@@grolo-af Hey, I appreciate that you took the time to answer; amazing video btw, watched it all. I vaguely remember playing Planetside 2, to be honest; it's been so long. I don't recall if we could shoot someone across the server/bucket boundary, which was mainly what I was wondering: it's a shooter, so if there were server-to-server shootings, it would maybe have been the first game to allow server-boundary interactions. But I honestly don't recall that. I do recall that some massive battles would basically stop rendering players and reduce the range of visibility, a bit like cities in FFXI back then, which made snipers useless since you could only hit what you could see. Funny thing is that grenades still worked on the invisible players, so intense battles became grenade spams.
You are wrong about Ashes of Creation taking the EVE Online route. Servers will be region-based, not anything like a megaserver. I don't know if you got your information from Pirate Software, but he was wrong. Also, on Star Citizen: they did say that a capital ship will need its own server spun up if it's in the open world, but at CitizenCon they showed something called instanced fleet battles, so they might instance some of the big guild battles. I'm not really sure, because then what's the point of dynamic server meshing if we're just gonna instance the biggest battles? By the way, amazing video and insight. A lot of people always say server meshing is old and fail to see how it's being modernized by Ashes and Star Citizen. I have a question about Star Citizen's server meshing: wouldn't the replication layer still create a bottleneck, since it has to manage data across all the servers and star systems across all the shards?
Instanced fleet battles are for large PVE raid-type encounters, is what I understood from the video. So we will have some super hard content that can't be cheesed by sheer numbers in the open universe.
You're right! I got that wrong... I understood it to be that way based off their own server meshing video where they went deep, but I obviously misinterpreted. Thanks for the call out! I'll pin a correction.
Yeah I raised an eyebrow at the instanced fleet battle thing... not sure what to make of it yet. Need to dig into it more.
As for your question on the replication layer... one way I've described this to people is... (and take this w/ a grain of salt because there's a lot of educated guessing here, I don't work for CIG and don't have access to the code)... but think of it like iCloud (or whatever the Android equivalent is) where you have this massive pool of data (aka state). The clients are talking to it and the servers are talking to it. They're _both_ signaling changes to it. It's a pretty beautiful design really.
Now, if it were one big blob truly tracking everything at once, you would be correct, but I doubt that's what's going on. I'm sure this state layer itself is sharded. Back to the iCloud analogy: when I make a request to iCloud, the system doesn't need to search ALL of iCloud for the data I need. It's very likely sharded by user ID, so it knows exactly which host/cluster to go to given my account. Likewise, when a user or game server fetches or signals a command against this layer, which will surely be sharded by game region, only a very small subset of the replication layer needs to be consulted.
That's how I'm seeing it based on what I've consumed and pieced together. Happy for any corrections to my understanding anyone may have!
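A tiny sketch of that routing idea; the shard count and region keys are invented, and this illustrates the analogy, not CIG's actual code:

```python
import hashlib

SHARDS = 64  # replication-layer partitions in this toy

def shard_for(region_key: str) -> int:
    # Stable hash: every server and client agrees where a region's state
    # lives without asking a central directory each time.
    digest = hashlib.sha256(region_key.encode()).digest()
    return digest[0] % SHARDS

def write_state(stores, region_key, entity_id, state):
    # Only one of the 64 shards is consulted for this update; the other 63
    # never see the traffic.
    stores[shard_for(region_key)][entity_id] = state

stores = [dict() for _ in range(SHARDS)]
write_state(stores, "stanton/microtech/new_babbage", "player_42", {"hp": 100})
```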
@@luminisunhinged that makes a ton of sense.
A lot of people confuse instancing and server meshing, I guess.
If you think about it, meshing is a bunch of instances with the doors blown open so they can all see inside one another : )
@@grolo-af Man, your ability to explain complex things like server meshing to a noob like me is amazing! I love metaphors!
o7
o7
I don't think he would cry if it works in another game. It's more that it works on he's game
His
Well sure. He also wouldn't cry if it was a well established technology that everyone was doing already.
@@SpaceDad42 Yes. Thanks