If anyone in dev communities looked out of the programming bubble for a second, they would learn that there is some serious shit going on right now. Memes like "end of free money" are real, boys; we are entering the age of true cost-cutting. Only the best prepared will survive.
Many engineers, both newcomers and veterans, need to read this article. I've been trying to keep it simple as of late; it saves tons of time and energy that you can spend building something useful.
Many need to go through the journey, the pain, and hopefully learn from it. The number of engineers I speak to who always want to do "the other thing Martin Fowler or Sam Newman said" is unreal. When I share my experiences, it's always answered with something like "well, we can do it better", ya know... because that's how "world-class" developers do things. Months go by and the solution presented is a system full of DB sharing, closely coupled services that must all be on the same version or they won't start, and changes that require cross-service code changes to add simple functionality. No metrics or logs, it wasn't in the spec. Oh, and no API contract or version strategy between systems. Nice! When you question some of these things you get "well, we control what's running so it should all be on the same version..." right... nice one 🙈
I worked with a client to review their modernisation strategy. The result was: stop breaking things up into services; there is no value in doing this, only more complexity. Put things where they belong and where they are needed most. Stop trying to break things up by functionality and start understanding your end-to-end processes, then break up the monolith around those processes. For this reason I like DDD, but even that is not an easy thing to get right.
One of the hardest things for me as a developer is applying the YAGNI principle. Almost every time I code things with the mindset "this may be useful in the future", I end up rolling it back because, well, most of the time that's not a fact xD. I guess I'm not the only one with that problem. At least, so far, I haven't made a company lose money implementing microservices (yet, yet...) just because, you know, they're fashionable and a silver bullet for every development problem.
We had a simple CRUD microservice. With DDD and clean architecture, where each layer had its own DTOs. And we also applied a CQRS pattern. Most of the code was moving data between the layers. It could have been solved with a simple MVC app.
I can confirm, as an Observability engineer at a company with many micro services. Trying to correctly observe and trace all the requests through the different services is very difficult. Every service has a different team and, for some reason, devops configures the web servers differently because they’re also on different teams!
Found your channel just recently. Watched around 20 videos. This was by far your best video. Honest and quick opinions. More real talk. Fewer pauses (maybe because you were the one reading instead of a random guy doing a 3-minute intro). This article is also fire. I am a tiny startup owner and I was in the middle of deciding whether I should start my planned project with a monolith or a distributed system.
[Author] Thank you! I've been writing and researching this for over a year. I am glad it is making an impact. This industry needed some RealTalk. I am unemployed right now so I don't have to deal with the consequences professionally - yet :)
There are problems with monoliths tho: - Builds take ages. Pipelines become very long and painful to work with. - Deployments... Teams cannot deploy their services independently, they often have to coordinate with other teams. - Bugs propagate much further. If one dev breaks something it might impact the rest of the company. Of course, microservices have a high cost. Is it worth it? Dunno...
There are incremental rebuilds. Why would you need to coordinate more than now? The only reason is if you keep merging shit code and have to make sure someone else's crap isn't going to break things. Why would a monolith be more prone to failure than microservices? How is try/catch around a function call different from awaiting an HTTP request? Bugs propagate either way; you get the same response from a function call and an HTTP request, assuming the function call matches the HTTP request. It only makes sense if you run each microservice on a different machine, which, from what I see, very very very few of them actually are.
1-2 instances of a monolith should be enough for most use-cases. If you feel the need to scale a particular part of the pipeline separately because it's really demanding, consider just parallelizing the work. If it can run in a microservice, then it can also run in a thread or in a separate process. If you need to make something DRY, consider solutions that don't require a network call, such as libraries, modules, or just have a separate program run on the same machine. The main advantage to microservices (that's not specific to massive corporations) is that you can change them extremely quickly. Make a commit, deploy, and just like that your microservice can be changed for all users within minutes. That is technically possible with libraries through CI/CD and remote configuration, but that's going to add a ton of complexity and come with its own headaches.
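A minimal sketch of the point above (names are illustrative, not from the comment): if a heavy step could run in a microservice, it can also run in a worker process inside the same deployment, with no network hop and no second service to operate.

```python
from concurrent.futures import ProcessPoolExecutor

def heavy_step(payload: bytes) -> int:
    # Stand-in for the expensive work you might otherwise split out.
    return sum(payload)

def handle_request(pool: ProcessPoolExecutor, payload: bytes) -> int:
    # Same isolation and parallelism a separate service would give you,
    # without the network hop.
    return pool.submit(heavy_step, payload).result()

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        print(handle_request(pool, b"some uploaded data"))
```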
Not only are we dealing with more microservices than servers, but they're now all contained in a Monorepo with a deployment strategy the complexity of which would put Napoleon to shame. It's like we almost came full circle but right at the last minute missed the beginning and now we're just on an endless spiral.
I'm at a small place. Most of our services are one-off things to solve small problems. So we use simple lambdas that kick off for everything, all named and documented. You want to know our processes? Read the list of lambdas, or the queues that are named exactly the same as the lambda they match. Microservices were mandated at my place, so I tried to make it as simple as possible.
I hate when they say "Erlang monolith", because it misses the point that Erlang is a distributed language. It comes out of the box with all the batteries included for distributed systems: the OTP framework with its supervisor trees over Erlang's own green processes, which have independent stacks and no shared heap precisely to avoid corrupting data; code organized into modules, where each module can be loaded in two versions; and an in-memory database called Mnesia for distributed tables. It has its own discovery protocol for other instances on the same network, and from the Erlang REPL you can list the instances, send processes to them, run them, and kill remote processes; for all of those processes it is transparent to send messages to each other even across different instances, though of course the response time will be longer. The only real issue with Erlang is the lack of types. It has a static analysis tool called Dialyzer that checks the code for errors, including type errors, but it is nowhere near as nice a type system as Haskell's.
I work at a startup that stores data in Elastic. A dev who has worked on the team for, I think, 2.5 years couldn't push data in the correct basic format. He pushed 10 timestamps and 10 values into a single document. Like, this is Elastic 101 stuff. Production is reliant on something in dev working, and our dev environment doesn't work, so we test on production. Since dev, staging and prod are 3 separate systems that all have unique versions, variables and settings, I have to test running Docker images on prod, because we have an error that only shows on prod. I am applying for new jobs.
I really like the microservice separation, but I hate deploying and maintaining it. Modularity is the way to go - keep the codebase somewhat separated within one service. Split into microservices if the need arises.
We went from 5 monolith-ish services with 30 devs ... To now 10 devs (layoffs) splitting things up into 10 microservices where 1 dev is building 1 service each. Let the fun begin!
I think the biggest issue I see is when people start with a microservice architecture without knowing what they are even building and who their users are. I'm not a programmer, but I work in infrastructure, and a lot of companies build a crazy microservice-driven app that mostly no one understands. If there is an outage, in Azure for example, it takes 4 to 8 hours to bring something back up because they hadn't accounted for something, and now everything is a mess, and microservices depend on each other, etc. etc. The best ones are the companies that use a monolith architecture with basic redundancy and some auto-scaling on the systems that run it. Usually, during a major cloud outage, they are back up and running quickly, and if they pay for redundancy, they may not even go down at all.
When I first worked on microservices and learned we would have separate DBs per service, it gave me the heebie-jeebies. All the ACID properties of an RDBMS went out the window. I still feel really uneasy about separate DBs for each service. Why not have a single DB instance connected to all the services? We can easily make multiple copies of the DB so if one goes down there would be another, like we used to do. We neither have to violate ACID nor manage hundreds of DB instances.
I literally said the same thing the other day about the git video with that dev giving Google sloppy toppy about all their "sci-fi" tech/workflows. Ex-FAANG devs getting startups to build the most expensive solution that doesn't solve any problem.
Microservice observability was really nice when we used AWS Xray to its fullest. Pass a single request uuid through the system and it tags your logs and request tracing and it all sorta correlates together. But like... If you don't add the wrapper to do that for you *properly in the way it wants you to* it is rough
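A hypothetical sketch of the underlying idea the comment describes, not the actual X-Ray SDK: mint one request ID at the edge, stamp it on every log line, and forward it on outgoing calls so the pieces can be correlated later. All names here are made up.

```python
import logging
import uuid
from contextvars import ContextVar
from typing import Optional

request_id: ContextVar[str] = ContextVar("request_id", default="-")

class RequestIdFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.request_id = request_id.get()  # attach the current ID to every record
        return True

logging.basicConfig(format="%(request_id)s %(levelname)s %(message)s", level=logging.INFO)
log = logging.getLogger("svc")
log.addFilter(RequestIdFilter())

def handle_request(payload: dict, incoming_id: Optional[str] = None) -> None:
    # Reuse the caller's ID if it sent one; otherwise start a new trace.
    request_id.set(incoming_id or uuid.uuid4().hex)
    log.info("processing %s", payload)
    # Downstream calls would forward it, e.g. headers={"X-Request-Id": request_id.get()}

handle_request({"user": 42})
```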
I have a character service that extends the bool service to find the right integer service. My security service writes its own security requisites each hour. You cannot hack what you don’t understand.
As someone who is trying to break into programming properly and learning how things work etc., this article speaks to me. Going from building simple web apps for a small number of users to trying to build a fully sellable SaaS and trying to "do things right" just leads me to feel that the whole developer space is complicated for the sake of being complicated, and a classic case of too much knowledge being a bad thing. It's a weird kind of gatekeeping.
As a friend of mine said: complexity is great because you can monetize it. And the more I work in modern IT, especially when you don't work at big FAANG companies, the more I notice that we are just stuck with vendors, subscription models, bundles, SaaS, SLA agreements, integration engineers... etc... Nothing is really engineered and crafted; it is just cobbled-together SaaS with some enterprise-contract veneer.
I feel sorry for the newcomers. Learning programming was so much fun before this. Now, the freshmen are taught that they MUST be able to write planet-scale software on day one. It is ABSURD.
This is a great article. There is a lot of unnecessary overcomplication, not just in architecture like microservices but also in the tooling and products we use to create software. And on TOP of all of that, the organization's policies, procedures, and thought processes add much more complexity to manage. "Keep it simple" works. Even at the large scale of things like distributed systems, "keep it simple" is the best approach. Think of it like an AK-47: a few large, reliable parts; never fails.
I have seen a project that was one big monolith in C# (BE/FE). The JS FE dev needed a machine with 64GB of RAM, because just running it (so you could develop) took ~50GB of memory.
There are many ways around that; you don't need the IDE to be actively analysing and caching the BE code if you're a FE developer, for example. IntelliJ IDEs let you pick and choose which parts of the project should be analysed and which should be ignored. Now, if the app itself takes 50GB of RAM to run, that's not a monolith problem at all, that's a bad-software problem - a massive memory leak they're not addressing for whatever reason. My company has monoliths with thousands of folders and millions of lines of code, and I can still run my React packages' serve cmd in seconds and everything works fine on my 8GB RAM machine.
Starting out with a server-side rendering framework is absolutely fine. The trick is to keep it clean, so you have a way out of it once the volume increases - either by splitting, distributing or whatever. Just so you are ABLE to. The worst-case scenario is when someone has just mashed shit into the ORM, never refactored it, and started fixing behaviour in the controller. Have fun scaling that shit out.
It seems we should invent, and apply to the development process, something similar to a rebalancing algorithm (as in a red-black tree), which would constantly survey the complexity level of a system and come up with reorganization schemes to keep the level low. For example, we started with a system based on 15 microservices, but in a year they grew to 30 - here the surveyor should propose turning 4 sub-groups of those 30 into 4 monolithic services.
When I hear somebody saying "microservices are cool" and I am in a good mood, I just feel like deploying everything into a JBoss WildFly container; when I am in a bad mood, I just want to deploy everything into WebLogic and quit the job right after.
2:23 I identify as a Young Golang Backend Blockchain Half-Stack Ops-Negative Frontend-Negative BareMetal-Specialized Software Engineer. Deal with it, and with the mistakes I'm gonna introduce on your frontend every single time you force me to see CSS.
I remember when Elon took over Twitter, one of the first things he said was that he was axing butt loads of useless microservices. Now I understand why, lol
After 30 years as a one-man-army developer, I can unequivocally say that I hate other developers who say “cloud” or “scale” to me. I would say that 80% of companies can serve all of their customers needs with 1-2 dedicated servers.
I've only worked for small start-ups so far, but trust me when I say, doesn't matter whether we do monoliths or microservices or anything in between - we're still gonna suck at it :p
The one thing that I will do with a monolithic architecture is pretend it's actually several microservices for ops reasons. For instance, for services that deal with endpoint management, I'll spin up copies of the same monolith and split traffic based on whether a human or a machine is making the request, so that if the machines go crazy, humans are less affected. But beyond that, this is mostly true.
The closest thing to a monolith architecture I know is the centerpiece of a microservices architecture I worked on, which actually became very monolithic itself.
Once my boss wanted me to draw a diagram of the infrastructure used in one project I was working on. He was so proud of how complicated it was. Not that it worked well, or that users were happy, but of how complicated the infrastructure it used was.
There are two types of systems: one that someone got fired for rolling out on the heels of weeks of burning their soul to improve in their free time, because the suits didn't want to commit to something the CEO's nephew - or should I say the DC supervisor - didn't understand; and yet after that brazen dev walked out, fingers in the air, ignoring the threats of legal action, it slowly became clear the new system is better. And the other is the absolute train wreck that system becomes after the nephew tries to improve upon perfection...
Like Prime I was burned long ago by the CQRS pattern. k8s? Literally witnessed the misery of those Google engineers first hand. KISS is the only thing that matters? Probably
k8s is and was a Google trojan horse to win cloud customers. Sadly, Google is still third for so many reasons, not least that they refuse to provide good customer service and great documentation.
How simple a solution can be will change over time, but starting with a "world-scale solution", to quote old Stevey, is a bit like me buying a bedpan now... who knows, I might need it one day, and wouldn't I look silly if I didn't have one...
On the topic of taking down production with a single misconfigured value, don't you love it when your templating engine coerces "false" to true because it's a truthy string thereby toggling a whole test feature in production?
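A tiny illustration of that footgun: any non-empty string is truthy in Python, so a templated "false" silently enables the feature unless the flag is parsed explicitly. The helper name here is made up.

```python
def parse_flag(value) -> bool:
    if isinstance(value, bool):
        return value
    # Parse explicitly instead of relying on truthiness.
    return str(value).strip().lower() in {"1", "true", "yes", "on"}

print(bool("false"))        # True  -- the bug
print(parse_flag("false"))  # False -- what was intended
```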
Long live the monolith. My policy is: I only detach a part of a server when a) it's something that I have to do in multiple 😂 systems, like authentication, or b) it's something that does a lot of heavy lifting and demands more resources than the main application (report generation, for example).
Yes. Generally, your "main" service should be completely independent from the others. Data analytics service is down? No users are affected. Notification service has hiccups? Ok, the emails will go out eventually. But for the love of God, do not make your critical paths like user registration call 5 essential services that never get tested as one.
I came into my current job a bit microservice happy. They have a few services, but a lot of things just revolve around a Django monolith and it works shockingly well. I think they have it right. Prior to that I was working with like 70 microservices in Go which also worked shockingly well but in hindsight 70 was way too many. Though, to be fair they had the amount of users and cash flow such that it was definitely sustainable at least.
1000 microservices is 1 milliservice
I’m sure the US would come up with an equally stupid imperial unit.
1milliservice is equivalent to 3.37 servums.
1 kiloservice
Our company has created the most complex architecture in the world! We are running 1 MILLION microservices all year round. We are proud to announce that we finally run... 1 service.
@@Telhias top tier
🤯
Maybe every function call doesn't need to traverse the network.
Next up, statically linking our micro-services into a single binary called an executable.
I hear RAM is pretty quick, would probably help your novel concept
Oh, yes, monoized-micro-services
Damn, why hasn't anyone ever thought of this before?
@@Kane0123 can you imagine a microservice architecture that communicates through memcached.. lmfao
@@Kane0123 you know, L1 is much quicker I heard. We definitely need a microservice-to-LLVM compiler and a bundler to link them up.
microservices is just a sneaky way for big devops to make us all into devops engineers
this. My take is: they wanna save on devops, so they wanna push to backend. Shit storm.
I think the whole microservice idea just overshot the target. It's useful not to have one big service, but is it really necessary to have that many?
@@gbb1983 do devops engineers get paid more? I fail to see how creating more devops work, no matter how it's distributed, saves money on devops
The "cosplaying that you're Google or Amazon" part is so spot on. Some years ago, my company was hired for a client that wanted to migrate to a microservices architecture. It seemed really unnecessary because their userbase was ~10,000 users and their performance (and costs) were fine. Still, discussions were held about how "microservices are the future" and "to scale, we will definitely need microservices". In the end, I helped them to move from a traditional monolith to a modular monolith, which worked much better for their company and use case and prevented the need of hiring additional people and changing their entire infra DX.
What good resources would you recommend so that I can understand modular monolith design better?
@@darekmistrz4364 The app was a .NET app and was built using the aspnetboilerplate project (open source on GitHub).
Regardless of the stack, we researched many articles on this architecture, and in the end it came down to the principle that you have a core monolithic architecture that handles features any app or module would be built on top of, and separately you have modules that depend on the core projects (or other modules). The core projects should not depend on modules, since modules should be interchangeable (and removable) without consequences. Using this architecture, if needed, you could still deploy feature-specific instances to balance the load of specific modules.
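A hypothetical, Python-shaped sketch of that dependency rule (the commenter's app was .NET; only the direction of the arrows matters here): core exposes hooks, modules plug into core, and core never imports a module. All names are illustrative.

```python
#   app/
#     core/          # shared features every module builds on
#     modules/
#       billing/     # depends on core; removable without touching core
#       reporting/
from typing import Callable, Dict, List

# core/events.py -- core exposes hooks; it never imports a module.
_subscribers: Dict[str, List[Callable[[dict], None]]] = {}

def subscribe(event: str, handler: Callable[[dict], None]) -> None:
    _subscribers.setdefault(event, []).append(handler)

def publish(event: str, payload: dict) -> None:
    for handler in _subscribers.get(event, []):
        handler(payload)

# modules/billing/__init__.py -- the module registers itself with core.
subscribe("user.created", lambda payload: print("billing: open account for", payload["id"]))

# Core code can now announce facts without knowing who listens.
publish("user.created", {"id": 1})
```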
I firmly believe there are a few subsystems that do well as their own services. Those are: Recommendations. Search. Login (if complicated). Maybe I forgot one.
What makes them good candidates is that in the first two cases they need completely different data structures working over the same data, which it makes no sense to share. It reduces complexity in the main app if you update your search database overnight or something. And in the login case its data is completely separate from the rest of the app which doesn't care how many 2FA codes or SMS numbers the user has set up.
For example Stack Overflow has a standard tiered setup: web tier, application tier, database tier. Each set scalable and redundant.
And then, off to the side, you have these search and tagging servers that are completely separate. Why? Because search and tagging are completely different from the rest of the application logic and they use a lot of resources by themselves.
I'm curious here. Haven't heard a lot about the modular monolith concept. Any good references to get started ?
@@darekmistrz4364 Django and NestJS help you write a modular monolith; just by reading their docs you should be able to understand how to build a good one.
We solved this problem 30 years ago, with the compiler and the linker. Individual C/C++ files get compiled into object files, that are like binary puzzle pieces, and the linker's job is to complete the puzzle, creating a complete program.
We threw decades of hardware efficiency and tooling away to come up with a shittier, slower linker, all because we couldn't be bothered to teach the young heads how a computer actually works.
How well I remember linking all my objs and shoehorning everything into extended vs. expanded memory (or the other way around!). Now it's assumed resources are infinite; performance is an afterthought/who cares.
Oh, I wish someone had come up with just one shittier, slower linker, but the truth is everyone made their own for this one.
More like not teaching engineers how to index and use databases effectively. Also that joins are evil at scale.
*all because micro services and API calls are monetizable and create more monetizable jargon
It is all about the Benjamins... Sad sad sad
When I was in uni they talked about premature optimization. These days, everyone should talk about premature microservicing. First build it, then optimize it, then distribute it.
wouldn't distribution be optimization?
And then, once you need to distribute it, you consider all the options available to you and only use microservices as a last resort. Databases can distribute on their own already; your service is probably handling HTTP requests, and there is a fair chance they can just be load balanced across multiple machines. You might be able to scale horizontally without changing a single line of code.
@@foobar8894 Database distribution is not a free lunch. Blindly sharding incurs consistency and durability consequences, and the "mathematically correct" ways to do it are computationally and network non-trivial (Google Spanner, CalvinDB, FoundationDB) - to say nothing of those who do it "wrong" and introduce a transaction-oracle bottleneck anyway (Google Percolator, Postgres-XL).
No no no, first do a low-code solution! And then realize your low-code solution has a total cost of ownership of 10x and is so ridiculously convoluted that you can't replace it and nobody dares to touch it. Oh, and also you can't unit test it or run it locally, so yeah, you're basically screwed and can't even afford microservices even if you wanted to. That's how we get rid of premature microservicing!
@@fulconandroadcone9488 Any distribution has a pretty significant overhead (serialization + deserialization + moving data through the entire network stack + encryption/decryption if it is over HTTPS + various negotiation and leader-election algos in truly distributed systems). You are trading performance for scalability or fault tolerance.
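A rough, assumption-laden sketch of that overhead: time an in-process call against the same work behind a loopback HTTP hop. There is no TLS and no real network here, so the gap is already a best case for the "distributed" side; the port and workload are arbitrary.

```python
import json
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

def work(x: int) -> int:
    return x * 2

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"result": work(21)}).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 8099), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

N = 200
t0 = time.perf_counter()
for _ in range(N):
    work(21)
local = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(N):
    json.loads(urlopen("http://127.0.0.1:8099/").read())
http = time.perf_counter() - t0

print(f"in-process: {local * 1e6 / N:.2f} us/call, loopback HTTP: {http * 1e6 / N:.2f} us/call")
server.shutdown()
```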
My real comment is stuck between the comment spellcheck service and the language determination service.
yt should build a comment unstacker microservice
And don’t you dare usar varios idiomas
"some startups have gone as far as create a service for each function" LOL you don't know how many startups does this. one time we had a service for uploading file to s3 bucket 😁
tracing a microservice pipeline is like putting together the fruits you put into a blender after blending it
95% of SAAS companies wouldn't need more than postgres and a thin layer of application code on the server.
Choosing the simplest and most standard technologies that are enough to solve the real problems we are facing right now is my decision-making algorithm when choosing technology stacks.
I agree. Dotnet all the way baby!
@@Kane0123 Blazor in dotnet 8 ❤
The more I lean toward that same approach, the more I think to myself "this is probably the most engineer-like thing to do anyway."
Microservices are great if they work. But they don’t work without logging, async, retry, failure queues, message busses, reprocessing and a team who understands it all.
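A bare-bones sketch of two of the pieces that comment lists - retry with backoff and a failure ("dead letter") queue - just to show how much machinery even the happy path needs. Names and the in-memory queue are made up for illustration.

```python
import time

failure_queue: list[dict] = []  # stand-in for a real dead-letter queue

def call_downstream(msg: dict) -> None:
    raise ConnectionError("downstream unavailable")  # simulate a flaky service

def process(msg: dict, retries: int = 3, base_delay: float = 0.1) -> None:
    for attempt in range(1, retries + 1):
        try:
            call_downstream(msg)
            return
        except ConnectionError as exc:
            if attempt == retries:
                # Park it for later reprocessing instead of losing it.
                failure_queue.append({"msg": msg, "error": str(exc)})
                return
            time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff

process({"order_id": 7})
print(failure_queue)
```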
Told my boss about this and I don't have a job next month.
No, even if they work, microservices aren't great. You point out the added complexity yourself; on top of that, passing data between servers over the network is insanely slow compared to sharing memory in the same process (not to mention CPU caches). It's a valid solution sometimes, but rarely, and you'd better have a really, really solid justification as to why it is worth it for your product.
@@foobar8894 You don't pass data between servers much. For example, with Slack you can have a microservice to process chat messages and another one to handle voice calls. They will barely ever exchange data.

In some cases data is exchanged. For example, with a shop, you might have a microservice for sellers to enter their product data and another one for handling the buyer process. So if a seller changes a product, it's enough to publish a message and the seller service is done. The buyer service receives the change eventually and updates its Elasticsearch and database entries. Sure, for a small shop that can live in the same database. But for a bigger shop it makes sense to separate it, because different teams work on the seller and buyer processes, and separating these concerns with clearly defined events helps them be more independent. So the team working on the seller side can, for example, add a file import for products and push them as events to the same message broker, without having to think about the buying process.

Yes, you can achieve the same separation in a monolith, but then the teams need to be much more disciplined, or else connections will creep in, a change on the seller side brings down the buying side, the developers get scared to change anything, the development process slows down, etc. Also, the seller side can deploy their changes any time without speaking with the buyer side.
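A loose sketch of the pattern described in that reply (made-up names, with an in-memory queue standing in for a real broker): the seller side publishes a fact and is done; the buyer side consumes it later and updates its own read model, accepting eventual consistency.

```python
import queue

broker: "queue.Queue[dict]" = queue.Queue()

# Seller service: publish and forget.
def update_product(product_id: int, price: float) -> None:
    broker.put({"type": "product.changed", "id": product_id, "price": price})

# Buyer service: its own read model (stand-in for an Elasticsearch index).
buyer_index: dict[int, float] = {}

def drain_events() -> None:
    while not broker.empty():
        event = broker.get()
        if event["type"] == "product.changed":
            buyer_index[event["id"]] = event["price"]

update_product(101, 9.99)
drain_events()
print(buyer_index)  # {101: 9.99}
```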
A couple of years ago I started working at a startup that was rewriting their monolith into microservices (just because of hype; they have like 200 active users).
First days in, I tried to understand how they structure their code. I asked something like "so how is the code in each microservice structured? You probably have a class for X, and then another class for Y, and a class for Z, right?" to which they replied "noooo, it's microservices dude, don't you know? Each class should be its own microservice! Microservices should be as small as possible, like a script". So right from the start they actually planned to split their monolith into hundreds of microservices. For a relatively simple CRUD-y API with some medium-complexity business logic.
Complete insanity. I tried to convince them that this is BS, but they just went to the manager saying that I criticize their work and that I'm "unfriendly", so I left the company after a week.
I did consultancy for multiple startups over the years and similar things happen everywhere.
if it pays well, just go with the flow
Yeah, I often give up and don't "revolutionize" too often, so as not to be seen as the "unfriendly" guy or to look less competent, because some guy in a slightly higher position sells this BS to management, and I know all of it could be done simpler, with less code, and faster. I do it their way, and maybe learn something new along the way. If they pay for my time and that's what they want, so be it.
Become the microservices whisperer, and invent an audit checklist of unreachable qualifications required to justify using microservices for the company's ROI.
Then sell companies on failing the audit; change it from "microservices should be as small as possible" to "we don't meet the audit's criteria for microservices."
Because the former conflates a rule of thumb into a circular rhetorical lie. It's like "build the wall and Mexico will pay for it": it's no longer about whether the thing should be done but how it should be done, by hiding the presumption that microservices must be used, therefore must be small, therefore if small, therefore must be used ➿. Sidestep that silliness.
@sikor02 been there, done that, don't recommend
Jesus Christ.
finally people speaking up about this
"It took that many people to copy Twitter's open code base?"
HAHAHHA
Yes, because they had to create over a thousand microservices from Twitter's open code base!
Probably had a hundred thousand repos. I hear it’s a real problem for most companies.
Haha, made me chuckle as well 😆
At the last company I worked for, they took serverless to the extreme. We ended up having a 50-person team to manage the mess for a shitty internal employee system which would have at most 5,000 users, but day-to-day traffic was less than 100. Like, shit could have been done on one server.
Mess has nothing to do with microservices; you can easily make a messy monolith, and I have seen plenty. It is about not making the mess, not about monolith vs. microservice.
A Raspberry Pi would be overkill for 100 requests a day to a database with some paint on it.
One server? A mobile phone could (probably) serve that.
Honestly take that bag 💰 learn and leave
@@amotriuc that is very true, I've also seen it on both sides.
Junior developer: Why can't I just make it a monolith?
Mid-level developer: Why can't I have 300 containers provisioned?
Senior developer: Why can't I just make it a monolith?
Microservices are all about turf wars, so each person/team keeps their job. No one has to talk to each other; we can just silo off and surf the web. It also avoids having to make the system hang together as a cohesive app.
That's a management problem. Managers shouldn't be made to feel that they are part of a coding team. They are management and should see all the other managers as their teammates. If a microservice ends, then the manager should be confident that their position will simply be moved to something new. A decent manager wouldn't be allowing employees to just search the web for no reason, as they should be managing the teams under them. You've just described a badly managed company, that's all...
I am in full recruiting-interview mode now, aaaand job descriptions are just full of lots of things to know, and I'm like "but are you using those things at max capacity?" (8 years of experience on the web doing back and front)
Been preaching this for the past year at the start-up I work for.
At one point we had more services than we had engineers. I've successfully de-brainwashed my team about micro services though.
I always thought it's funny how, the more powerful CPUs become, the more engineers make it seem like the opposite with all these microservices... (especially on the 99% of platforms that attempt a microservice arch right out of the gate). We have like 128+ cores in CPUs now lmao; it's not like 10+ years ago when we were limited to 4 or 8. The vast majority of monoliths have a LONGGGGGG way to go before ever truly needing to scale out nowadays. With diagonal scaling you get the best of both worlds, in my opinion.
Coming from a nerdy-student perspective, I find it much more fun to try and build your own pieces in a backend rather than put some weird puzzle pieces together on AWS where you accidentally find a $200 fee sitting there a month later because you didn't configure it properly.
I know that in a production/enterprise environment there are valid reasons to want to use AWS or the cloud or whatever, but as a C/C++ nerd I have been indoctrinated with the belief that you should stubbornly build what you need yourself
Yeah man, I was thinking of just building auto scaling and fault tolerance for my app using c# instead of using aws or something like kubernetes.
C# is obviously way better than K8 and AWS put together… don’t need serverless anything or auto scaled containers if I just deploy my .Net Framework solution to a single Windows Server 2019
10:49 We all know that misogyny is when happy marriage. At least that's what terminally online Twitter users think.
This was a fantastic article/video. It made me think about how I have been considering a pure microservices setup for the smallest things... and how what really makes sense is a monolith for the bits of calls/work that don't ever need much scale, and then microservices for the parts that do. Makes total sense.
One thing I have noticed with the move to "full-stack" engineers is that the database side takes a major hit. I worked at one company where the dev database server ran out of disk space and the solution was to double the disk capacity. I took a look at the database server, ran a shrink operation on about 3 or 4 of the databases, and reclaimed over a TB of space for the server. One of the issues was that the practice for adding new data to some of the tables was to completely delete all the data in the table and then re-insert it along with the new changes.
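A small illustration of the alternative to that "delete everything and re-insert" pattern: an upsert keyed on the primary key, so only changed rows are touched. SQLite is used here purely to keep it runnable (ON CONFLICT ... DO UPDATE needs SQLite 3.24+); the table and columns are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (sku TEXT PRIMARY KEY, price REAL)")

def upsert(rows):
    conn.executemany(
        """INSERT INTO prices (sku, price) VALUES (?, ?)
           ON CONFLICT(sku) DO UPDATE SET price = excluded.price""",
        rows,
    )

upsert([("A1", 10.0), ("B2", 12.5)])
upsert([("A1", 9.5)])  # only the changed row is touched
print(conn.execute("SELECT * FROM prices ORDER BY sku").fetchall())
```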
Yep. I've seen this. In the same company, team A writes a microservice and publishes an API for it. Their requirement is to take some data, process it, and pass it on to another service maintained by team B. Team B publishes an API to document the format in which they want their inputs, and also documents the outputs. Team A, being conscientious, decide to add some validation to their logic to ensure the data meets team B's API requirements before it is passed to team B. Team B decides that it has a "policy" of validating all inputs passed to their service before processing them and passing them to team C. So the data is validated twice - once on the way out of A, and once on the way into B. Now team C, which takes input from team B, decides to do the same thing because "it's good practice".

And this is one of the ways (just one) that performance sinks in microservice systems. It's more related to the psychology of humans than to the systems themselves. A team of programmers will invent complexity for the sake of it, just to show how leet they are and to give themselves something to do.
You're just mad your service only processes requests…
@@Kane0123😂
Not sure why there would be validation on the output, seems a bit odd.
But do agree it adds validation on every input that you'd not usually do in a monolith as the data would not have been at rest, but most of the time you'd still validate it in a monolith if you e.g. put it on/off a queue for resiliency, so doesn't really change anything there.
On one of the last bits: I'm actually inclined to break off authentication and registration, only because it can be a DDoS vector; even rate limiting and the use of heavy hashing on passwords can be a lot of overhead. Usually it's on a small server, but still separate, so as not to kill the main app or API servers.
Just my own take on this one.
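A sketch of why that split can make sense: deliberately expensive password hashing (scrypt here, via the standard library) burns real CPU and memory per login attempt, which you may not want competing with the main API process. The cost parameters are illustrative, not a recommendation.

```python
import hashlib
import os
import time

def hash_password(password: str, salt: bytes) -> bytes:
    # n/r/p chosen to be noticeably costly; tune for your own hardware.
    return hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)

salt = os.urandom(16)
t0 = time.perf_counter()
hash_password("correct horse battery staple", salt)
print(f"one login attempt cost {time.perf_counter() - t0:.3f}s of CPU")
```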
**Laughs in Integrated Haskell Platform**
The other problem the article missed is that NodeJS devs are so removed from (and afraid of) physical hardware, they fundamentally have no understanding of just how powerful a modern server is. A Rust Axum+PSQL monolith running on a machine with several TB of RAM and hundreds of cores is hilariously fast. It can easily outperform $100k+/mo of AWS microservices in NodeJS. And you will never match the latency, because: physics.
We have been using a trunk-and-branches approach and it has worked really well for machine learning classification tasks. A single Lambda function ran a BERT model and would easily scale to fit the load of incoming data. We ran it with ONNX on Python.
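A hypothetical shape of such a handler (paths, field names, and the classifier stub are made up; the ONNX Runtime calls are left as comments): load the model once at cold start so every warm invocation reuses it, and let the platform fan out instances as event volume grows.

```python
import json

# Heavy initialisation lives at module scope -- done once per container, e.g.:
# import onnxruntime as ort
# SESSION = ort.InferenceSession("/opt/model/bert_classifier.onnx")

def classify(text: str) -> str:
    # Stand-in for tokenising `text` and calling SESSION.run(...) on the model.
    return "positive" if "good" in text.lower() else "negative"

def handler(event, context=None):
    records = event.get("records", [])
    return {"results": [{"text": r, "label": classify(r)} for r in records]}

if __name__ == "__main__":
    print(json.dumps(handler({"records": ["good product", "terrible support"]})))
```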
"it must be scalable because we need high availability"
Target should be zero downtime, because downtime most of the time cost money.
Yet nobody esrimated how expensive zero downtime gets in overall costs.
One MODULAR monolith is so much sweeter than a bunch of microservices, but when that monolith becomes a ball of mud, holy shit it's annoying to work with.
I said in one of the streams like a month ago that sending JSON is faster than sending HTML, back when Prime was talking about HTMX. I've spent the last couple of days being mind-blown by how much easier everything is when you adopt HTMX. The amount of complexity we added over the years to handle relatively simple websites is honestly an embarrassment for the industry.
It's one of the reasons I moved away from frontend dev... I used to love it, but things got so unnecessarily complex I started hating it.
Well, it is a trade-off. For most web apps HTMX sounds good; for something that needs a lot of large updates, like a stock market feed, you might save quite a bit by sending less data.
@@fulconandroadcone9488 there are trade-offs in everything, of course, but I just want to say that I have a very strong feeling the industry as a whole has been making some bad trade-offs for quite some time now.
Isn't HTMX just loading more fragments into the page? Surely loading fragments works for some applications like blogs, but I imagine others need much more complexity. For example if you have a calendar to find a date reloading new fragments from the server when all the information could already be in the browser seems to be not that helpful.
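For reference, the whole "fragments" model the thread is debating fits in a few lines. This is a hypothetical Flask example (endpoint names and markup invented), not anything from the article: the button declares `hx-get`, the server returns a snippet of HTML, and htmx swaps it into the target element.

```python
# Minimal htmx-style fragment endpoint (hypothetical). Requires: pip install flask
from datetime import datetime
from flask import Flask

app = Flask(__name__)

@app.get("/")
def index():
    return """
      <script src="https://unpkg.com/htmx.org"></script>
      <button hx-get="/time" hx-target="#out">What time is it?</button>
      <div id="out"></div>
    """

@app.get("/time")
def time_fragment():
    # Returns an HTML fragment, not JSON; htmx swaps it into #out.
    return f"<p>Server time: {datetime.now():%H:%M:%S}</p>"

if __name__ == "__main__":
    app.run()
```

For something like the calendar example above, you would indeed either ship the data up front or accept a round trip per interaction, which is exactly the trade-off being discussed.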
I want to go back to a monolith and HTMX; it will be so nice... You can easily organise a monolith so that certain parts of it can be split off into branch services.
I'm working as a backend dev at a company right now. They have their own custom-built frontend + backend framework written in Perl that is at most 20 years old, and we've been reimplementing tons of stuff in Kotlin (currently running hybrid). The entire thing is a monolith with nearly a thousand SQL tables serving hundreds of thousands of users, and we are a team of 2 to 10 people. I was shocked when I first got into the company and saw this insanity, but it works. 😂
Opting into a service constellation feels like optimizing for something we don't know for sure in advance.
I learned perl from a book that advocated "use strict", "use warnings" and readable code, pretty close to what is normal in your average language. But then realized that the perl monks were actually proud of cryptic code, the attitude being if you don't understand it, you're not a perl monk. It's hard to imagine working on a legacy perl code base.
@@atomsRnot4717 man oh man, my language server works 50% of the time (could be misconfiguration, in which case that's on me); it's just pain through and through. How do you debug? A debugger? Ha, you must be kidding; we print the shit out of everything here, the old-fashioned way. Code navigation doesn't exist and intellisense is pretty much broken. I'm a junior, and my senior understands the pain... and this is why we're trying to escape to Kotlin land right now. JetBrains builds some state-of-the-art IDEs, loving it so far. At the moment I'm just using VS Code and IntelliJ to switch between the 2 languages, both with their own Vim emulators installed; not authentic Vim, but hey, better than nothing, and I ain't got time to configure everything, I just need a few shortcuts to start working. I burned out in the 3rd month after I joined lol, but yeah, nearly 1 year in, I'm managing it now. It's totally fine (place-burning-down image)
A modern machine with current-gen silicon could serve 100k+ requests/second. I don't know why these companies overcomplicate everything.
each user got one pod of that user registration service, must be blazing fast
"a clouder of Patagonia vests" is savage
I have worked with a big microservice product when it was cutting-edge. My first experience. I thought it was the best way to do software. Now I work in a big monolith and I changed my mind. Monolith is the perfect solution for the product. Actually, changing it to microservices would mean the end of the company. I am surprised how powerful "old good concepts" can be. I totally agree with the article.
most of us don't work at startups, and yet every advice article about this topic is directed towards what is best for unsuccessful startups.
If anyone in dev communities looked out of the programming bubble for a second, they would learn that there is some serious shit going on right now. Memes like "end of free money" are real, boys; we are entering the age of true cost cutting. Only the best prepared will survive.
grug brain developer fears complexity spirit demon
Many engineers, both newcomers and veterans, need to read this article.
I've been trying to keep it simple as of late, it saves tons of time and energy you could've spent building something useful.
Many need to go through the journey and the pain, and hopefully learn from it. The number of engineers I speak to who always want to do "the other thing Martin Fowler or Sam Newman said" is unreal. When I share my experiences, it's always answered with something like "well, we can do it better", ya know... because that's how "world-class" developers do things. Months go by and the solution presented is a system full of DB sharing, closely coupled services that must all be on the same version or they won't start, and changes that require cross-service code changes to add simple functionality. No metrics or logs; it wasn't in the spec. Oh, and no API contract or versioning strategy between systems.
Nice! When you question some of these things you get "well, we control what's running, so it should all be on the same version..." Right... nice one 🙈
I worked with a client to review their modernisation strategy. The result was to stop breaking things up into services: there is no value, only more complexity, in doing so. Put things where they belong and where they are needed most. Stop trying to break things up by functionality and start understanding your end-to-end processes, then break up the monolith around those processes.
For this reason I like DDD but even this is not an easy thing to get right.
One of the hardest things for me as a developer is applying the YAGNI principle. Almost every time I code things with the mindset "this may be useful in the future", I eventually realize it isn't and roll it back, because, well, most of the time that's just not true xD. I guess I'm not the only one with that problem. At least, so far, I haven't made a company lose money by implementing microservices (yet, yet...) just because, you know, they're fashionable and a silver bullet for every development problem.
We had a simple CRUD microservice. With DDD and clean architecture, where each layer had its own DTOs. And we also applied the CQRS pattern. Most of the code was moving data between the layers. It could have been solved with a simple MVC app.
Maybe reading needs to be a separate microservice for Prime.
I can confirm, as an Observability engineer at a company with many micro services. Trying to correctly observe and trace all the requests through the different services is very difficult. Every service has a different team and, for some reason, devops configures the web servers differently because they’re also on different teams!
At my company we have 70 million users and we still use a rails monolith
I found your channel just recently and have watched around 20 videos. This was by far your best one. Honest and quick opinions, more real talk, fewer pauses (maybe because you were the one reading instead of a random guy doing a 3-minute intro). The article is also fire. I am a tiny startup owner and I was in the middle of deciding whether I should start my planned project with a monolith or a distributed system.
[Author] Thank you! I've been writing and researching this for over a year. I am glad it is making an impact. This industry needed some RealTalk. I am unemployed right now so I don't have to deal with the consequences professionally - yet :)
I swear that the ease of developing distributed fault-tolerant systems peaked with Erlang systems. Hello, Mike; goodbye, Joe.
It's true, you have to have the right toolchain to use microservices correctly, and that can be a lot of work to put in place.
There are problems with monoliths tho:
- Builds take ages. Pipelines become very long and painful to work with.
- Deployments... Teams cannot deploy their services independently, they often have to coordinate with other teams.
- Bugs propagate much further. If one dev breaks something it might impact the rest of the company.
Of course, microservices have a high cost. Is it worth it? Dunno...
There are incremental rebuilds.
Why would you need to coordinate any more than you do now? The only reason is if you keep merging shit code and have to make sure someone else's crap that might not work isn't in there.
Why would a monolith be more prone to failure than microservices? How is try/catching a function call different from awaiting an HTTP request?
Bugs propagate either way; you get the same response from a function call as from an HTTP request, assuming the function call matches the HTTP request.
It only makes sense if you run each microservice on a different machine, which, from what I see, a very very very small number of them actually do.
15:22 Even at Amazon, there are microservices built to serve at most 5 customers per 5-minute interval… including internal users. Scale is a myth.
" it took that many people to copy twitter's opensource code base!" hillarious 🤣🤣🤣🤣
1-2 instances of a monolith should be enough for most use-cases. If you feel the need to scale a particular part of the pipeline separately because it's really demanding, consider just parallelizing the work. If it can run in a microservice, then it can also run in a thread or in a separate process. If you need to make something DRY, consider solutions that don't require a network call, such as libraries, modules, or just have a separate program run on the same machine.
The main advantage to microservices (that's not specific to massive corporations) is that you can change them extremely quickly. Make a commit, deploy, and just like that your microservice can be changed for all users within minutes. That is technically possible with libraries through CI/CD and remote configuration, but that's going to add a ton of complexity and come with its own headaches.
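A minimal sketch of the "run it in a thread or a separate process" option from a couple of comments up: fan the heavy step out to a local process pool inside the same deployable, with no network hop (the workload function is a stand-in, not anything from the comment).

```python
# Fan the CPU-heavy step out to local worker processes instead of a separate service.
from concurrent.futures import ProcessPoolExecutor

def expensive_step(n: int) -> int:
    # Stand-in for the demanding part of the pipeline.
    return sum(i * i for i in range(n))

def process_batch(items: list[int]) -> list[int]:
    with ProcessPoolExecutor() as pool:
        return list(pool.map(expensive_step, items))

if __name__ == "__main__":
    print(process_batch([10_000, 20_000, 30_000]))
```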
Ah yes, truly one of the honest videos of all time!
Not only are we dealing with more microservices than servers, but they're now all contained in a Monorepo with a deployment strategy the complexity of which would put Napoleon to shame. It's like we almost came full circle but right at the last minute missed the beginning and now we're just on an endless spiral.
13:15 "It took that many people to copy Twitter's open source codebase?"
LMFAO
I'm at a small place. Most of our services are one-off things that solve small problems, so we use simple Lambdas that kick off for everything, each one named and documented. You want to know our processes? Read the list of Lambdas, or the queues, which are named exactly the same as the Lambda they match.
Microservices were mandated at my place, so I tried to make it as simple as possible.
Wtf, I distinctly remember the presentation that pioneered the era of microservices, and it was a presentation from a developer at Netflix...
I hate when they say "Erlang monolith", because it misses the point that Erlang is a distributed language. It comes out of the box with all the batteries included for distributed systems: the OTP framework with its supervisor trees over Erlang's own green threads, which have independent stacks and no shared heap so they can't corrupt each other's data; code organized into modules, where each module can be loaded in two versions at once; and an in-memory database called *mnesia* for distributed tables. It has its own discovery protocol for other instances on the same network, and from the Erlang REPL you can list those instances, send processes to them, run and kill remote processes, and have processes message each other transparently across instances; of course the response time is longer across the network.
The one real issue with Erlang is the lack of types. It has a static analysis tool called Dialyzer that checks for errors in the code, including type errors, but it's nowhere near as nice a type system as Haskell's.
Wow. Erlang seems like a great tool built to do a specific job.
I work at a startup that stores data in Elastic. One of our devs, who has worked on the team for I think 2.5 years, couldn't push data in the correct basic format: he pushed 10 timestamps and 10 values into a single document. This is Elastic 101 stuff. Production is reliant on something in dev working, and our dev environment doesn't work, so we test on production. Since dev, staging and prod are 3 separate systems that all have unique versions, variables and settings, I have to test running Docker images on prod, because we have an error that only shows up on prod. I am applying for new jobs.
The litmus test for whether a domain should be partitioned into a new microservice is independent deployability.
I really like the microservice separation, but I hate deploying and maintaining it. Modularity is the way to go: keep parts of the codebase somewhat separate within one service, and split into microservices if the need arises.
We went from 5 monolith-ish services with 30 devs ...
To now 10 devs (layoffs) splitting things up into 10 microservices where 1 dev is building 1 service each.
Let the fun begin!
Microservices are like communism: sounds good on paper.
Works with Liberalism too
I think the biggest issue I see is when people start with a microservice architecture without knowing what they are even building or who their users are. I'm not a programmer, but I work in infrastructure, and a lot of companies build a crazy microservice-driven app that mostly no one understands. If there is an outage, in Azure for example, it takes 4 to 8 hours to bring things back up, because they hadn't accounted for something and now everything is a mess and the microservices depend on each other, etc, etc.
The best ones are the companies that use a monolith architecture with basic redundancy and some auto-scaling on the systems that run it. Usually, during a major cloud outage, they are back up and running quickly, and if they pay for redundancy, they don't go down at all.
When I first worked on microservices and learned we would have separate DBs per service, it gave me the heebie-jeebies. All the ACID principles of an RDBMS went out the window, and I still feel really uneasy about a separate DB for each service. Why not have a single DB instance connected to all the services? We can easily make multiple copies of the DB, so if one goes down there would be another, like we used to do. We'd neither have to violate ACID nor have to manage hundreds of DB instances.
I literally said the same thing the other day about the git video with that dev giving Google sloppy toppy about all their "sci-fi" tech/workflows: ex-FAANG devs getting startups to build the most expensive solution that doesn't solve any problem.
Microservice observability was really nice when we used AWS Xray to its fullest.
Pass a single request uuid through the system and it tags your logs and request tracing and it all sorta correlates together.
But like... If you don't add the wrapper to do that for you *properly in the way it wants you to* it is rough
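The underlying idea is simpler than the tooling makes it look. This is not the X-Ray SDK, just a generic sketch (the header name is a common convention, the URL is hypothetical) of minting one id, stamping it on every log line, and forwarding it downstream so the hops can be correlated later.

```python
# Generic correlation-id propagation sketch. Requires: pip install requests
import logging
import uuid
import requests

logging.basicConfig(
    format="%(levelname)s request_id=%(request_id)s %(message)s",
    level=logging.INFO,
)

def handle_request(downstream_url: str) -> None:
    request_id = str(uuid.uuid4())
    log = logging.LoggerAdapter(logging.getLogger(__name__), {"request_id": request_id})
    log.info("handling request")
    # Forward the same id so the next service tags its own logs/traces with it.
    requests.get(downstream_url, headers={"X-Request-ID": request_id}, timeout=5)
    log.info("downstream call finished")
```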
This is mainly why I use VMs and a few containers. I don't need to scale like Google, and neither do I need 100% of my infrastructure as code.
You could eat after the video you know... Just plain respect for your viewers.
LOL "There are start-ups with more servers than users" excellent !
I have a character service that extends the bool service to find the right integer service. My security service writes its own security requisites each hour. You cannot hack what you don’t understand.
As someone who is trying to break into programming properly and learn how things work, this article speaks to me. Going from building simple web apps for a small number of users to trying to build a fully sellable SaaS and "do things right" just leads me to feel that the whole developer space is complicated for the sake of being complicated, a classic case of too much knowledge being a bad thing. It's a weird kind of gatekeeping.
As a friend of mine said: complexity is great because you can monetize it.
And the more I work in modern IT, especially when you don't work at big FAANG companies, the more I notice that we are just stuck with vendors, subscription models, bundles, SaaS, SLA agreements, integration engineers... etc. Nothing is really engineered and crafted; it is just cobbled-together SaaS with some enterprise-contract veneer.
I feel sorry for the newcomers. Learning programming was so much fun before this. Now, the freshmen are taught that they MUST be able to write planet-scale software on day one. It is ABSURD.
This is a great article. There is an amount of unnecessary overcomplicating. Not just in architecture like microservices but also in the tooling and products we use to create software. And on TOP of all of that, the organization's policies, procedures, thought process adds much more complexity to manage. "Keep it simple" works. Even in the large scale of things like distributed systems, "Keep it simple" is the best approach. Think of it like an AK47, few large reliable parts, never fails.
I have seen a project which was one big monolith in C# (BE/FE). The JS FE dev needed a machine with 64GB of RAM, because just running it (so you could develop) took ~50GB of memory.
There are many ways around that; you don't need the IDE to be actively analysing and caching the BE code if you're a FE developer, for example. IntelliJ IDEs let you pick and choose which parts of the project should be analysed and which should be ignored.
And if their app itself takes 50GB of RAM to run, that's not a monolith problem at all, that's a bad-software problem: a massive memory leak they're not addressing for whatever reason. My company has monoliths with thousands of folders and millions of lines of code, and I can still run my React packages' serve command in seconds and everything works fine on my 8GB RAM machine.
Starting out with a server-side rendering framework is absolutely fine. The trick is to keep it clean, so you have a way out of it once the volume increases, whether by splitting, distributing, or whatever. Just so you are ABLE to. The worst-case scenario is when someone has just mashed shit into the ORM, never refactored it, and started fixing behaviour in the controller. Have fun scaling that shit out.
It seems we should invent, and apply to the development process, something similar to a rebalancing algorithm (as in a red-black tree), which would constantly survey the complexity level of a system and come up with reorganization schemes to keep that level low. For example, we started with a system based on 15 microservices, but within a year they grew to 30; here the surveyor would propose turning 4 sub-groups of those 30 into 4 monolithic services.
When I hear somebody say "microservices are cool" and I am in a good mood, I just feel like deploying everything into a JBoss WildFly container; when I am in a bad mood, I want to deploy everything into WebLogic and quit the job right after.
2:23 I identify as a Young Golang Backend Blockchain Half-Stack Ops-Negative Frontend-Negative BareMetal-Specialized Software Engineer.
Deal with it, and with the mistakes I'm gonna introduce on your frontend every single time you force me to see CSS.
I remember when Elon took over Twitter, one of the first things he said was that he was axing butt loads of useless microservices. Now I understand why, lol
After 30 years as a one-man-army developer, I can unequivocally say that I hate other developers who say “cloud” or “scale” to me. I would say that 80% of companies can serve all of their customers needs with 1-2 dedicated servers.
They are okay with main branches instead of master, so they should be okay with ParentBoard instead of MotherBoard.
I got it. MAIN BOARD.
Minute in: Bridges were burned.
I've only worked for small start-ups so far, but trust me when I say, doesn't matter whether we do monoliths or microservices or anything in between - we're still gonna suck at it :p
The one thing that I will do with a monolithic architecture is pretend it's actually several microservices, for ops reasons. For instance, for services that deal with endpoint management, I'll spin up copies of the same monolith and split traffic based on whether a human or a machine is making the request, so that if machines go crazy, humans are less affected. But beyond that, this is mostly true.
Love to hear him sounding like Michael Scott when he gets excited
The closest thing to a monolith architecture I know is the centerpiece of a microservices architecture that I worked on, which actually became very monolithic itself.
Once my boss wanted me to draw a diagram of the infrastructure used in one project I was working on. He was so proud of how complicated it was. Not that it worked well, or that users were happy, but of how complicated the infrastructure was.
There are two types of systems. One is the system that someone got fired for rolling out after weeks of burning their soul to improve it in their free time, because the suits didn't want to commit to something the CEO's nephew, or should I say the DC supervisor, didn't understand; and yet after that brazen dev walked out, fingers in the air, ignoring the threats of legal action, it slowly became clear the new system was better. The other is the absolute train wreck that system becomes after the nephew tries to improve upon perfection...
Like Prime I was burned long ago by the CQRS pattern. k8s? Literally witnessed the misery of those Google engineers first hand. KISS is the only thing that matters? Probably
k8s is and was a Google trojan horse to win cloud customers. Sadly, Google is still third for so many reasons, and they refuse to provide good customer service and great documentation.
How simple a solution can be will change over time, but starting with a "world-scale solution", to quote old Stevey, is a bit like me buying a bedpan now… who knows, I might need it one day, and wouldn't I look silly if I didn't have one…
On the topic of taking down production with a single misconfigured value, don't you love it when your templating engine coerces "false" to true because it's a truthy string thereby toggling a whole test feature in production?
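In Python terms, the footgun looks like this: any non-empty string is truthy, so the string "false" switches the feature on unless you parse it explicitly.

```python
raw = "false"  # what a templating/config layer handed you

if raw:
    print("naive check: feature ENABLED")  # runs, because the string is non-empty

def parse_bool(value: str) -> bool:
    v = value.strip().lower()
    if v in {"1", "true", "yes", "on"}:
        return True
    if v in {"0", "false", "no", "off"}:
        return False
    raise ValueError(f"not a boolean: {value!r}")

print("explicit check:", parse_bool(raw))  # False
```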
My mom walked into my room at 13:12 and I had to immediately turn off the computer.
Long live the monolith. My policy is: I only detach a part of a server when a) it's something that I have to do in multiple systems, like authentication, or b) it's something that does a lot of heavy lifting and demands more resources than the main application (report generation, for example).
They aren't independent from each other at all. There's literally 1 service required after another in the most basic functionality I've seen.
Yes. Generally, your "main" service should be completely independent from the others. Data analytics service is down? No users are affected. Notification service has hiccups? Ok, the emails will go out eventually. But for the love of God, do not make your critical paths like user registration call 5 essential services that never get tested as one.
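A sketch of that separation (the service names are hypothetical): the essential write stays on the critical path and is allowed to fail loudly, while the non-essential call is best-effort, so a flaky dependency cannot block registration.

```python
import logging

def create_user(db, notifier, email: str) -> int:
    user_id = db.insert_user(email)          # essential: let failures propagate
    try:
        notifier.send_welcome_email(email)   # non-essential: degrade gracefully
    except Exception:
        logging.exception("welcome email failed; will retry out of band")
    return user_id
```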
I came into my current job a bit microservice happy. They have a few services, but a lot of things just revolve around a Django monolith and it works shockingly well. I think they have it right.
Prior to that I was working with like 70 microservices in Go which also worked shockingly well but in hindsight 70 was way too many. Though, to be fair they had the amount of users and cash flow such that it was definitely sustainable at least.
1000+ Microservices is spaghetti coding reborn
I work on a 2-man team; we have 17 microservices... and that's just data delivery.
I'm loving this series of article reviews. It's very enjoyable. Where do you get these? Nice work.