"The key is to establish that common architectural backplane with well-understood integration and communication conventions, whatever you want or need it to be" -> *Solid foundation for the tech* "The key is to give the team the direction and goal, the autonomy to accomplish it, and the clarion call to get it done" -> *Solid foundation for the people*
Chat: "Oh no, he remembered to disable alerts this time. Lets repeatedly try to break his concentration with highlighted questions completely unrelated to the post he's reading!"
I enjoy the arguments about all these companies switching to microservices, Kafka, and Mongo as if those were the magic that improved their applications... when actually it was the fact that they took 10+ years of lessons learned and just rewrote the application. Switching stacks and patterns was an excuse. They would have had just as good performance improvements had they taken the same stack they were already on and used those lessons to get rid of all the old garbage.
Only because you mentioned OBS: this is better than the OBS overlay imo. White text overlaid on the often-white text of the blogs being read is basically impossible to keep up with.
My role as an architect some days feels like I'm Captain Hindsight, e.g. "yes, your app is behaving poorly because you put the wrong solution in, here's what you should have done... Oh, you won't/can't change it for various reasons? Oh well."
The best advantage of microservices is the ability to deploy changes without relying on another team, and also not needing to work in whatever language + frameworks another team chose for their service.
One detail I believe is often overlooked: microservices do not have to mean communicating over a network using HTTP. For latency-critical paths, a sidecar strategy using pipes could also be used if the code separation is worth the effort (rough sketch below).
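For what it's worth, a minimal Python sketch of that idea: the same request/response split you'd have with a microservice, but over an in-host pipe to a sidecar process instead of the network stack. The `sidecar` function and its trivial uppercase "protocol" are made up for illustration.

```python
# Hedged sketch: a "service" in its own process, reached via a pipe, no HTTP.
from multiprocessing import Pipe, Process


def sidecar(conn) -> None:
    """Runs in its own process, like a sidecar next to the main app."""
    while True:
        request = conn.recv()
        if request is None:  # shutdown sentinel
            break
        conn.send(request.upper())  # stand-in for real business logic


if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    proc = Process(target=sidecar, args=(child_conn,))
    proc.start()
    parent_conn.send("hello from the monolith")
    print(parent_conn.recv())  # HELLO FROM THE MONOLITH
    parent_conn.send(None)     # tell the sidecar to exit
    proc.join()
```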
At my job the management has us just kicking the can down the road, with sufficient force to ensure it won't be our can anymore by the time we get back to it.
In the film Gaslight, he was upstairs rooting around in a boarded-up attic, using the gas lights upstairs, which caused the lights downstairs to dim. When his wife brought this up, he convinced her that she was going mad. He also hid things and accused her of hiding them, in pursuit of getting her locked up in an asylum and getting the house.
27:52 Just here to clarify that n orders of magnitude is 10^n, so "five to seven orders of magnitude" is 100,000x to 10,000,000x, not merely 10x (which is ONE order of magnitude).
It sounds like Netflix had a time-format problem then, and needs a standard library for each language used that all devs then use, so timestamps are consistent. I guess there would need to be a company-wide developer update on it, but it would be worth it in the long run.
What's funny is this is all possible with monolithic apps too, especially webserver monoliths. You only pay for processing... an unused HTTP endpoint on the same server costs you nothing additional. And it will have a faster spin-up time for the rare instance it is actually required.
It's the same problem as thinking only the vocal revolutionaries should be heard: we have monoliths that have been silently working great for decades, but because they aren't exciting we don't hear about them.
I think half the problem is people don't know where a microservice's purpose should start/end. I like modules, and pulling one out when you see that one part needs a different scale.
I am remaking a real estate CRM I have from scratch. I think I am only going to have the images on another service. The backend code and database are going to be in the same place; I don't want the frontend to wait for the backend to wait for an external call to the service where the DB is, no way.
Let's say I want modules: I want them to be memory-isolated for security (because open source), so they need to be in a separate process. I then need them to communicate; I could use a raw data stream, but then I have to define protocols. Let's say I grab something off the shelf to help provide structure to the interface. Oh wait, now I have an RPC framework. Congrats, you now have microservices.
When microservices were in vogue, I attended a microservices-focused conference. Even with the people presenting microservice solutions, you could feel they knew they were selling horsesh&t, given all the hacks you had to do just to make simple things work. Do people really still think this is a good idea?
People hate large monoliths because they take forever to build, test, and deploy. Often, if one person messes up, it blocks everyone. Modules offer a better authoring experience than microservices, but microservices have great CI/CD. I think the ideal setup is a monorepo with shared tooling + packages, and granular build/test/deploy.
The saying you're thinking of is something like "creating the disease to match the cure", I think. "We discovered a cure for the disease we created," or something?
Doing insights, monitoring, alerting and tracing on pub-sub structures that are very large can become a daunting task. It is all fun and games until you have to filter through 1_000_000_000+ events.
what is that crap name... that's the whole purpose of DDD... with DDD u can have a modular monolith for ages before migrating to microservices... i'm sick and tired of stupid new trends...
Time's the worst, try data engineering. Multiple client databases all in different time zones on instances that can have their own time zone. Time and nulls are always the worst.
Strawman, but also, people are looking for a silver bullet. If there's a problem with something it's obviously not a silver bullet, so time to use something else.
I agree with the article in general, but once you have different tech stacks interacting you need microservices. Example: a mobile app with Flutter or whatever, a web app with Laravel or whatever -> how do they share data? -> through APIs, fine, no problem, just let them call each other. But what if there are more apps? You want all these apps to call each other directly through APIs? Good luck having ANY overview once you reach more than 3 or 4 apps. Microservices solve this problem. Unfortunately, microservices are introduced way too early, including at the company that I work at.
Stateless modular monoliths are essentially the sweet spot: large refactorings are easy, and modules allow you to manage it like microservices without the crippling cost of redefining a logical boundary. Microservices in a monorepo can make sense in some cases if you need that level of autonomy. The "for real" polyglot, bring-your-own pipeline, infra, monitoring, etc. microservices architecture is rarely the correct answer in my experience.
Leading the witness is what I'm thinking you mean. But I think strawmanning is also appropriate, because the opposite is steelmanning, which would be representing something in the best of cases. And as we've said, there are great monolithic repos that have been expertly designed, the same way that there are exquisitely crafted non-SPA websites that are better than SPAs in every way. You can strawman non-SPAs as being bad at things that they don't have to be bad at, but commonly are. Same goes for monoliths or monolithic repos.
Stonehenge is a distributed monolith, everyone loves it. But I build a distributed monolith, I get fired. It's not fair.
How well did yours scale vertically? That's an important feature of Stonehenge.
Hahahaha 😂
And stonehenge requires zero maintenance
It's definitely microservices tho lol.
@@Brunoenribeiro Must be why there are parts of it that have collapsed and haven't been fixed for quite some time.
Here is the problem Primeagen:
> Netflix has a specific issue
> Netflix comes with a solution for their issue
> 0.25% of the world's companies benefit from the solution the same way, cause only a handful of companies in the world are at the scale where they can face the issue.
> blogposts and vlogs on youtube start flying around about it cause INFLUENSOOOOOOOORS have to cREatE cONTent to live.
> Mediocre companies adopt the Netflix solution pitching something resembling this at an abstract level: "hei boi, we use what netflix is using, come claim clout for your CV with us."
> 99.75% of the companies worldwide adopt the same complex solution for their CRUD app cause they need to aTTrAcT tALenT
> ?????
> rekt
LOL
That's weirdly accurate
Currently working on fixing another team's code where it goes through 4 or 5 different microservices just to... download a file from Azure Blob Storage. Sometimes it takes over 30 seconds just to download a 3KB file. Mind you, the complexity and number of microservices are only one of the several (dozen) issues with the code.
@@r1konTheAutomator that's not accurate, that's *THE TRUTH*
Exactly, and to solve the issues they bring in more complexity like CQRS and DDD, and the thing becomes an absolute monstrosity which no one wants to touch with a 10-foot pole.
Every microservice adds a layer of monitoring, alerting, logging, retry processes, synchronization, idempotency requirements, authorization, deployment, and dead-letter queues. Worth it sometimes, but go in with eyes open.
Microservices also demand tracing; without it, good luck stitching multiple logs together to figure out how a request flowed through the system and where it went wrong (see the sketch after this thread).
U can write code in dockerless languages like Erlang...
@@MrEnsiferum77 how does writing things without docker remove these requirements? Having the same wrapper around all this could help but I’ve yet to see any important production service that doesn’t need this. Have been burnt by trying to skip any of them.
@@dandogamer A unique transaction or trace ID is definitely needed, good point.
@@thegrumpydeveloper It simplifies the process of building distributed apps... languages like Erlang have a kind of 'VM' baked inside the running processes... u don't need k8s or similar crap...
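To make the thread's point concrete, a minimal Python sketch of the baggage every service ends up carrying: a propagated trace ID for stitching logs together, an idempotency check for redeliveries, bounded retries, and a dead-letter queue. Everything here (`Message`, `handle`, the in-memory queues) is hypothetical; it is a sketch of the pattern, not any particular library.

```python
import logging
import uuid
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO,
                    format="%(levelname)s trace=%(trace_id)s %(message)s")
log = logging.getLogger("orders")


@dataclass
class Message:
    body: str
    # Minted once at the edge, then propagated on every hop, so logs from
    # different services can be stitched back together.
    trace_id: str = field(default_factory=lambda: uuid.uuid4().hex[:8])
    attempts: int = 0


MAX_ATTEMPTS = 3
# Idempotency: a real system would key on a dedicated message ID rather
# than reusing the trace ID; an in-memory set stands in for that store.
seen_ids: set[str] = set()
dead_letters: list[Message] = []  # poison messages get parked here


def handle(msg: Message) -> None:
    """Stand-in business logic; raises on a 'bad' payload."""
    if "bad" in msg.body:
        raise ValueError("downstream call failed")


def consume(msg: Message) -> None:
    extra = {"trace_id": msg.trace_id}
    if msg.trace_id in seen_ids:
        log.info("duplicate delivery, skipping", extra=extra)
        return
    while msg.attempts < MAX_ATTEMPTS:
        msg.attempts += 1
        try:
            handle(msg)
            seen_ids.add(msg.trace_id)
            log.info("processed on attempt %d", msg.attempts, extra=extra)
            return
        except ValueError:
            log.warning("attempt %d failed, retrying", msg.attempts, extra=extra)
    dead_letters.append(msg)  # retries exhausted: park for a human to inspect


consume(Message("ok order"))
consume(Message("bad order"))
print(f"dead letters: {len(dead_letters)}")
```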
Holy shit, that time format rant turned from a joke into a full-blown PTSD episode real quick.
yeah... i am a programmer, it happens
It's funny, I've heard the same time format rant from a coworker of mine (I'm not really a programmer, I just like learning about it).
I felt your pain
Hegelian dialectic
Never talk to
- a boomer about ‘nam
- a New Yorker about 9/11
- East & South East Asians about the Japanese
- Developers about dates and times
I've been thinking lately, that literally every approach is shitty, and there is no holy grail. Your role as an architect is to pick the least shitty option, not the best one.
100%. Prime has said the same on his videos multiple times.
Yeah, writing software sucks. If it wasn't my only source of income, then I'd be happy if AI took over.
I'm gonna add, to be more "precise": the least shitty option depends on the context you have at hand and what you're expected to provide.
You gotta keep the median shittiness below a threshold
I've wondered the same thing. That, or people can't accept that a problem is X hard, and make it 2X hard believing the added complexity made it X/2 hard.
Thanks for reading all these articles and putting them on YouTube. Your personal take, stemming from the experience of an engineer many times more experienced than I am, is truly the value I am looking for in YouTube tech content.
Just remember, nothing is quite so permanent as a temporary solution.
At the start of the project, just ask "what if we try a modular monolith for the first couple of weeks?" 🤫
Just like government policies
What about a temporary government program
Didn't realise Milton Friedman was a software developer?
I converted a microservices system to a monolith; it resolved about 90% of their issues. The company went on to be acquired for $500 mil.
The whole industry is driven by trends. Everybody is doing it, so it cannot be wrong. This ends up in those waves of going in one direction and then in the other. We should focus on the use case, take responsibility, and find the best approach for our solution.
Breaking a system into distributed systems is arguably the best possible way to make it complex and hard to maintain.
i like this sudden switch just like american politics
I turned my ass into a microservice so that I can replicate it when I need to take a big shit, I call it sharting
@@user-qr4jf4tv2x It's only sudden because it takes time to develop a good opinion on software design. People tried it and just realized that it wasn't what they were looking for.
@@user-ge2vc3rl1n Tbf, it's only bad because it's overused and applied where it's not suitable, because devs, architects, and even managers get sold on the concept having heard about it in seminars, as opposed to it being a product of careful research and consideration. It's like the shiny new JS framework of software architecting.
Indeed, makes sense for very very big software projects - but that's like what, 0.01% of actual projects?
"There could have been an Amazon competitor back in the day that,
instead of building a monolith and solving customers' problems,
built a distributed system (a microservice architecture)...
That might be the reason you have never heard of them..."
Personally, I love it when my boss forces me to spend 5 days developing something using microservices, when I could do it in 5 hours in Django, and the app only has 1,000 internal users.
The main reason I have found some form of "services" valuable (how "micro" they should be is a question very specific to every particular project and company) is the independent release cycle. If you have a modularized monolith, you are still tied to a common release cycle, and it tends to be quite heavy, reducing your ability to release as often as you like.
Dealing with release cycles if you're in the same process as 20-30 other teams often ends in a situation where your release cycle is "maybe we'll get something on prod in a week or so, depending on the volume of issues".
Now, this assumes that you can release as often as you like. If you have external limitations to your release frequency (certifications, security/compliance audits, if you're tied to particular firmware patch releases etc.) that math changes a lot.
In my team we have more microservices than developers, and nobody outside our team uses them.
bruh same.
I've been working for quite some time in the banking industry, and I remember a peculiar example of a project some colleagues of mine worked on: a big bank commissioned a multi-million-dollar architecture based on microservices and events, and at the end of a 3-year development period they decided to throw away the whole thing and go back to a monolithic, non-event architecture. Maybe it was just badly designed, but I think the problem is the very nature of event-driven stuff. I say that because I worked on another big bank's repository a couple of years ago with a similar kind of architecture: it was event spaghetti, each request caused an untraceable amount of data retrievals, and it was ugly to work on and difficult to maintain. Hundreds of people worked on that repository over a 20-year period and it was a huge pile of crap.
Microservices don't help u create loosely coupled software, DDD does... Without proper DDD u end up with monolithic microservices...
@@MrEnsiferum77 Actually the latter was domain-driven and still sucked.
@@FredoCorleone Not properly implemented...
@@MrEnsiferum77 Just like communism, agile, and everything else that is great in theory!
@@philsburydoboy Agile is for liberal developers who think they're building something valuable, but they rewrite the same crap apps on the web with a new framework, while actually using waterfall.
You need a microservice to get time
Will it support ISO timestamps? We're currently blocked on showing our users their birthday.
@@ricardoamendoeira3800 talk to PO
better off distributing an "offline microservice" aka a library
also, this comment is funny. i've seen folks suggest this. smh. makes me question wth we're even doing in this industry.
@@ricardoamendoeira3800 We're blocked okay? We're blocked, you sad product manager
Build modules until you need microservices for scaling; then a module can be jettisoned as a microservice with minimal code changes as necessary (see the sketch after this thread).
And while you're running the code as modules, add an artificial sleep(2ms) to every call into the module, to be prepared for the extra latency you'll suffer when you convert that module to a microservice.
If you feel that your system is getting too slow, remove that artificial sleep call, but understand that you can never make that part a microservice either, because the latency would be too bad.
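A minimal Python sketch of both ideas in this thread, with entirely made-up names (`Billing`, `BillingModule`): call the module through a narrow seam so it can later be jettisoned as a service, and, per the reply, optionally tax every call with ~2 ms of fake latency while it is still in-process.

```python
import time
from typing import Protocol


class Billing(Protocol):
    """The seam: callers only know this interface, never the implementation."""
    def charge(self, user_id: int, cents: int) -> bool: ...


class BillingModule:
    """In-process implementation; a future RemoteBilling could speak HTTP."""
    def charge(self, user_id: int, cents: int) -> bool:
        return cents > 0


def with_fake_latency(impl: Billing, delay_s: float = 0.002) -> Billing:
    """Wrap a module so every call rehearses the cost of a future network hop."""
    class _Slowed:
        def charge(self, user_id: int, cents: int) -> bool:
            time.sleep(delay_s)
            return impl.charge(user_id, cents)
    return _Slowed()


billing: Billing = with_fake_latency(BillingModule())

start = time.perf_counter()
billing.charge(42, 1999)
print(f"one 'module' call took {(time.perf_counter() - start) * 1000:.1f} ms")
```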
The biggest problem for me with microservices is the persistence/data layer: there are a lot of queues, a lot of streams, a lot of replication, a lot of inconsistent data, and so on. And that's apart from a whole tooling structure that has to be built to sustain production. I don't know, sometimes I just believe that a well-written, well-modularized monolith works damn well.
this
sadly, microservices can also indicate a path around institutional bureaucracy.
This is one of the emerging benefits of the wasm/wasi model, as that layer is owned by the host and independent of the business logic in the wasm.
@@manualautomaton 100% agree. The sell is that smaller applications can be iterated on very quickly; the reality is that, within a large ecosystem, services like this require the same amount of effort or more. It's a poorly constructed confederated monolith.
90% of microservices' energy is lost in out-of-process calls and serialization.
Many years ago, I worked on a system that gracefully hot-swapped DLL libraries from disk when they were updated with new versions, without restarting the application. Now that I think about it, that was pretty damn close to microservices, only faster, due to being in-memory communication, not via the network stack!
Except microservices are exactly about the "via the network stack". The fact that you solved zero-downtime upgrades on a single machine is not very related to services that are independently scalable across many separate machines.
For us the biggest issue by far has been network latency. Even the most basic read operations take hundreds of milliseconds to complete due to having to fetch data (often sequentially) from different microservices. Also 90% of our alerts are now request timeouts between microservices.
My stance today would be that microservices only make sense for fire-and-forget applications. Anything that needs to respond to a user request should be a monolith.
I mostly agree. Another option is to give every microservice a hard deadline that still results in a good enough user experience. I would say that would be about 10 ms, because you often need to combine multiple microservices for a single user-visible response, and the user experience starts to get worse when total response time goes above 100 ms.
When you write a microservice with a 10 ms deadline, you practically have to write it in a systems programming language such as C, C++, or Rust. Do you have a team for it? Does that team agree that splitting everything into small microservices is the best way forward?
If you use Java, Go, or another managed language, every time the garbage collector runs a stop-the-world step, all the requests for that microservice instantly miss their deadlines. So obviously you cannot use any of those languages without an insane amount of hacks to avoid GC at all times (rough sketch of the deadline idea below).
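A rough Python sketch of the hard-deadline idea, under the assumption that a thread-pool timeout is an acceptable stand-in for a real per-call budget; `slow_service` is hypothetical.

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

DEADLINE_S = 0.010  # the 10 ms per-call budget from the comment above
pool = ThreadPoolExecutor(max_workers=4)


def slow_service(pause_s: float) -> str:
    time.sleep(pause_s)  # stand-in for a GC pause or a slow dependency
    return "ok"


def call_with_deadline(pause_s: float) -> str:
    """Fail fast instead of queueing behind a slow (or GC-paused) service."""
    future = pool.submit(slow_service, pause_s)
    try:
        return future.result(timeout=DEADLINE_S)
    except TimeoutError:
        future.cancel()  # best effort; an already-running task still finishes
        return "deadline exceeded"


print(call_with_deadline(0.001))  # well under budget -> "ok"
print(call_with_deadline(0.050))  # a 50 ms pause blows the budget
```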
I read Sam Newman's Monolith to Microservices. He actually recommends starting with a monolith with logical services that can be broken out if they need to scale independently.
I recently attended a workshop with Sam in Copenhagen about this, and it was great as he is a very engaging and funny bloke. It also confirmed that I was on the correct path with my own system. I started with a monolith (start-up company 15 years ago), and am now starting to identify parts to break out as microservices as the company has grown immensely.
This is the common sense approach. Which is why it is disregarded.
I think about how to modularize my app code often, especially for my hobbyist projects as a game dev. Unity uses (a mangled) C#, and I can achieve isolation by declaring assembly defs. However, I often discover I'm bad at setting boundaries. I get into circular-dependency arguments with my compiler.
It’s fun stuff to consider.
Modularisation is HARD. I have nearly 20 years of experience, and it is still HARD for me.
My hot take is that people want the flexibility to compose a system from smaller components, so they have a separate repo per component. It seems easier to make each component a service and compose those services to avoid cross-repo dependencies.
Monorepos -- the /new/ (relative to microservices) buzzword for many of us non-Googlers -- make composing components into a single service much easier. As more orgs adopt them, people are less incentivized to ignore the downsides of making every component a service, and are therefore presented with the decision to compose subsets of components into a service or a distributed system on a case-by-case basis.
Don't write eventually consistent distributed monorepos
Monolithrepo
I have always advocated for modular architecture over premature microservices. One thing I always maintain is strong cohesion between a module's components and weak coupling to other parts of the app. With this, I can always copy the folder that houses the module code to any base project, link one or two files, and start a new microservice when the module becomes too big and the time is ripe.
Cargo workspaces for Rust are a godsend for this. Keep your data layer (module) as an agreed-upon ORM or raw-SQL standard for how to call your database. After that, each workspace member imports the shared domain. Then, if you want scalability, just put each set of routes in its own Docker container. For example, you have 3 workspace members for a Twitter clone: auth, status, and timeline. Each one has the domain workspace and can just call what it needs from the database. This allows for code reuse, scalability, and low latency.
There are some things I suggest clients split out as microservices. Auth systems are usually a good idea, and even email/SMS message relay servers. 9 times out of 10 though, I don't recommend it. I have found no reason for it, and keeping teams in separate bubbles is just as easy anyway. As long as you group each set of routes correctly, you shouldn't have confusion about which team "owns" which data. I do this by using status_media or status_comment in the domain tables. If you need a change, ask them to update the layer so their code still works, or you can add a new function to the layer and send it over for approval.
I only just compiled ffmpeg from source for the first time last week. I felt so proud I went for a walk just to strut and smirk in public lol
I would argue that microservices are more about optimizing for development-process speed (by promoting more loosely coupled, high-cohesion modules) at the cost of the less optimal technology choices imposed by the distributed system that follows from it. A short time to market is more important today than optimal technology choices; we trade one upside for another downside here. And regardless of the current regime, nothing can replace strong engineering practices in the organization. Without them, you will eventually end up with a big ball of mud, whichever architectural regime you follow. The return of the monolith will hence solve nothing on its own. It is just another technology choice. Strong engineering practices are key for any architecture to work and evolve over time in response to changes in requirements.
What about API gateways to manage interactions between various versions? Conceptually this allows multiple parallel versions to operate simultaneously and allows other service teams to iterate at their own pace.
Which one of your teams manages the API Gateway?
I love how the chat was gaslighting the shit out of you while trying to explain gaslighting hahahaha
That was priceless. 🤣
I work with hundreds of git submodules. You have to write automation scripts to make it manageable. When sharing libraries across projects, it's faster to develop with submodules than republishing npm packages, so they have a place.
With serverless, your codebase functions like a monolith for the developer working on it; the deployment, however, chunks your defined functions out into individual microservices. In your regression test harness you can capture the behavior of each service post-deployment. IaC in a single system is the best of both worlds.
Serverless + microfrontends...
@@MrEnsiferum77 heck yes!
Except you have to wait a million years for cold starts
@@HonestCode This should be taken into consideration when designing the data flows for various events. Check the Medium article "AWS Lambda battle 2021: performance comparison for all languages (cold and warm start)" for the cold-start comparisons across runtimes. Ultimately, you probably want certain operations to be faster than others, so you prioritize your architecture to be "close to the data" when returning responses. Fan out what needs to be fanned out. It's one paradigm, among many (such as cost optimization), that public cloud has to deal with. Most cloud apps are CRUD with minimal processing, anyway.
Probably not in Python, or anything that manages dependencies like Python does :))
At one of my jobs, I saw how folks made modules with C# instead of microservices. I liked it. They had projects organized as multiple libraries (some of them were kinda old, even on older versions of C#), and they could manage every library's dependencies separately. They could even have "libraryA 1.0" in one place and "libraryA 1.1" in another (I am NOT saying they did it on purpose, I am just saying that it caused no trouble at all).
Correct me if I am wrong, but when I think about the same approach for Python, I think it would be dependency hell, because when the project becomes large, one module obligates all the other modules to be in the same context and fit into its dependencies.
That isUTCDate function looks a lot like the function I used at work to determine whether the material number for a pick order was provided as: an SAP material number; an *outdated* SAP material number that needed to be translated to the newest version (cause they were too dumb to use the versioning system in SAP); the document number that actually appears on blueprints; the customer document number; some terribly formatted version of any of the above, where periods were changed to underscores or someone forgot to change their Excel files so they don't drop trailing zeros; or just some made-up-ass number that didn't exist in any of our warehousing systems at all. Glad to know that despite the fact that I am very much an amateur programmer, I can hang with the best of them in the pro world ;)
7:14 Change management is always hard; it's one of the hardest things in software. Software ate the world because it can change and adapt, but on the other hand that's also its biggest disadvantage: sometimes it's too easy to change, and then things break.
The Unison language nailed this by rebuilding the way dependency/module injection is achieved through abilities (algebraic effects). All your needed implementations become libraries or modules. And the deployment process they are building is immaculate.
I asked ChatGPT about the fallacy and it said it was the "hasty generalisation" fallacy, which is also known as the "unrepresentative sample" fallacy. The wiki entry for it appears to be "Faulty_generalization".
Enforcing module boundaries in a monolith is incredibly difficult imo, so I do like some type of "microservice" structure to enforce them. But that doesn't mean you need to separate things in such a way that you incur huge dependency chains; you can still have big modules (one way to enforce boundaries in-process is sketched below).
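One way to get that enforcement without the network hop, sketched in Python with a made-up module layout (`billing`, `orders`, `shared`): a check that parses every file with the stdlib `ast` module and fails when a module imports something outside its declared dependencies. The rule table and `SRC_ROOT` are assumptions you would adapt to your own codebase.

```python
import ast
from pathlib import Path

SRC_ROOT = Path("src")  # assumes one top-level directory per module
# module -> modules it is allowed to import from (its declared dependencies)
ALLOWED = {
    "billing": {"shared"},
    "orders": {"billing", "shared"},
    "shared": set(),
}


def boundary_violations(root: Path) -> list[str]:
    errors = []
    for py in root.rglob("*.py"):
        owner = py.relative_to(root).parts[0]  # module that owns this file
        if owner not in ALLOWED:
            continue  # unowned scripts are exempt from the rules
        for node in ast.walk(ast.parse(py.read_text())):
            if isinstance(node, ast.Import):
                targets = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                targets = [node.module]  # relative imports stay in-module
            else:
                continue
            for target in targets:
                top = target.split(".")[0]
                if top in ALLOWED and top != owner and top not in ALLOWED[owner]:
                    errors.append(f"{py}: {owner} may not import {target}")
    return errors


if __name__ == "__main__":
    for err in boundary_violations(SRC_ROOT):
        print(err)
```

Run as a test in CI and the boundary is enforced on every commit, much like a service boundary, minus the latency.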
You know that image of Homer looking great, but all the fat flaps were tied up behind his back? That's UML for me.
It's all about granularity, depending on what is actually needed and practical. That spectrum ranges from a combination of monoliths to microservices.
"My tool should not exist in a year"... there's nothing more permanent than a temporary solution.
one hundred percent
A few weeks ago I spoke to a guy building a startup who had been building microservices for months. I told him that he should not be doing it this way.
He wanted to keep the code "modern".
What he should actually be doing is building the core functionality and shipping the code fast. Writing a modular monolith is usually a better idea for startups.
A big mistake startups make is thinking that they know the requirements.
You should have a handful of independent modules/monoliths/microservices and a dozen satellite microservices that do highly specific, technical tasks. The general architecture should implement a "one-hop" rule, meaning that every user request must be finished with a single call to one backend service (sketch below).
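A minimal Python sketch of the one-hop rule; all the `fetch_*` module functions are invented stand-ins. The client makes exactly one call, and that single backend endpoint composes whatever modules or satellite services it needs behind the scenes.

```python
def fetch_profile(user_id: int) -> dict:
    return {"name": "Ada"}          # stand-in for the profile module


def fetch_orders(user_id: int) -> list[dict]:
    return [{"id": 1, "total_cents": 1999}]  # stand-in for the orders module


def fetch_recommendations(user_id: int) -> list[str]:
    return ["keyboard", "trackball"]  # stand-in for a satellite service


def dashboard(user_id: int) -> dict:
    """The single backend call a user request terminates in; the fan-out
    happens here, server-side, not in the client."""
    return {
        "profile": fetch_profile(user_id),
        "orders": fetch_orders(user_id),
        "recommendations": fetch_recommendations(user_id),
    }


print(dashboard(42))
```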
I’d subscribe to more insightful tech talk discussions. I understand the funny guy live takes but I’m genuinely interested in these topics but it’s hard to process the randomness and sidebars.
Scalability is a performance optimization. Premature optimization is usually a big mistake. Effective performance optimization starts with performance data. So if you are building out a microservice infrastructure for scalability and you don't have real world performance data, then you are doing a premature performance optimization.
If you want to organize your code for development purposes, you should do so in a way that doesn't have significant performance implications.
If you want scalability, you should start with real performance data and use that to inform your design decisions (tiny measurement sketch below).
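In that spirit, a tiny Python sketch of collecting real timings before deciding to split anything out; the `search` function and its workload are made up.

```python
import statistics
import time
from collections import defaultdict
from functools import wraps

timings: dict[str, list[float]] = defaultdict(list)


def measured(fn):
    """Record wall-clock duration of every call, keyed by function name."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            timings[fn.__name__].append(time.perf_counter() - start)
    return wrapper


@measured
def search(query: str) -> list[str]:
    time.sleep(0.003)  # stand-in for real work
    return [query]


for _ in range(50):
    search("monolith")

cuts = statistics.quantiles(timings["search"], n=20)  # 19 cut points
print(f"search p50={cuts[9] * 1000:.1f}ms p95={cuts[18] * 1000:.1f}ms")
```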
33:33 In my experience working in many organizations, software engineering is as much a creative process as it is a technical one (at least thus far), especially for "senior engineers". Just as artists rarely enjoy going back to adapt their old work, and generally deeply dislike modifying someone else's work, most developers love to "start new things" but tend to dislike maintaining existing software, and particularly dislike working on someone else's code. We're at our happiest when we "write our own from scratch". Just sharing an observation.
The article thesis is defended and correct. One repo, one tested/compiled/coupled unit. Incompatible interfaces must fail to compile. Modules can be separately scaled and deployed as microservices, but maintained, verified, compiled, and tested together.
Google changing their microservices all the time without caring about the impact 🤡
I see a few reasons for microservices:
1. Legal constraints: if you are working with an external software development company to integrate their software into your system.
2. If the service has already been developed because it is a common problem, such as authorization, reporting, user management, etc. (everything except business logic).
You could even go so far as to count backup and monitoring as a "microservice".
Go has been getting daily love and I'm here for it😂
I FUCK with Go
@@sprinklehomie5811 LFG
You know, there are microservices we all sort of agree upon: data storage on a database, files on something like S3/MinIO, payment systems (not sure if that can be considered "micro", and we often outsource it, but still), and maybe even an identity provider as a separate service, since authentication can be a cause for security concern that you might just want up and ready as a service.
Actchually, they are pretty hard to maintain if you are dropped into a new project. Deploying the app is actually deploying 50 different apps which can all fail separately.
"You know how annoying it is to try to figure out what time you're in?"
Yes. We all do.
5:36 bro's face is so symmetrical
Microservices would be a great solution to many problems if communication weren't limited by the speed of light and physical interconnects. When you physically separate all the computation, you force processing to happen in serialized form, and the overall latency for a given request gets worse.
I think the following is more reasonable than microservices, even for larger corps:
1. one team, one module
2. a team can own dedicated data storage if they have to deal with write-heavy traffic.
More specifically, imagine we are building YouTube.
There would be a 'chat team' developing the backend system for chat.
It's a very bad idea to let them use an RDB for the chat functionality, since RDBs are not scalable for write-heavy tasks.
Instead, they should use a scalable NoSQL or NewSQL store like DynamoDB or Cloud Spanner for the chat data.
Using their own data storage doesn't mean they have to own a dedicated codebase (repository) or application server.
They can just develop a module, client library, or API endpoints which use that data storage, while sharing code and application servers with other teams (rough sketch after this thread).
> Isolated data storage + server sharing
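A rough Go sketch of that shape, with hypothetical names throughout: the chat team owns its store behind an interface (DynamoDB, Cloud Spanner, or the in-memory stand-in here), but its routes mount on the shared application server.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// ChatStore is owned by the chat team; only they know what's behind it.
type ChatStore interface {
	Append(channel, msg string) error
}

type memStore struct{ msgs map[string][]string }

func (s *memStore) Append(channel, msg string) error {
	s.msgs[channel] = append(s.msgs[channel], msg)
	return nil
}

// RegisterChatRoutes is the chat module's whole public surface: it
// mounts onto whatever mux the shared application server hands it.
func RegisterChatRoutes(mux *http.ServeMux, store ChatStore) {
	mux.HandleFunc("/chat/send", func(w http.ResponseWriter, r *http.Request) {
		if err := store.Append(r.FormValue("channel"), r.FormValue("msg")); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		fmt.Fprintln(w, "ok")
	})
}

func main() {
	mux := http.NewServeMux()
	RegisterChatRoutes(mux, &memStore{msgs: map[string][]string{}})
	// ...other teams register their routes on the same mux...
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```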
I like that this is the opposite of what many "microservices" implementations end up being, where several services share the same database
~ 30:30 about the git submodules… I feel your pain 😂
We have a Yocto build setup using submodules, and it sounded like a fantastic idea when I was reading up on it getting started, but in practice it’s definitely had some pain points…
I present to you the monolith: Postfix. If only it were modular and almost never broke.
I'm convinced microservices exist (partially) because JavaScript has no concept of "visibility" or access modifiers. One of the things microservices offer is a well-defined boundary between "modules", which is obviously useful for large projects, making up for a shortcoming of the language. What should be a very lightweight function call becomes a very expensive remote/network call.
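For contrast, a small sketch of the boundary a language can hand you for free -- Go's export rules here, since the point above is that JS lacks an equivalent; the `payments` package and its functions are invented:

```go
package payments

import "errors"

// Charge is the only thing other modules can call.
func Charge(userID string, cents int) error {
	if !validAmount(cents) {
		return errors.New("invalid amount")
	}
	return debitLedger(userID, cents)
}

// validAmount and debitLedger are unexported: code outside the
// payments package cannot reach them, enforced at compile time.
// Same well-defined boundary, no network hop.
func validAmount(cents int) bool { return cents > 0 }

func debitLedger(userID string, cents int) error { return nil }
```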
Definitely think whether you go monolith or microservice depends on the project. Recently I was involved, in an ancillary capacity, in a project where I strongly advocated for a microservice system.
My major reasoning was that certain components in the project would have to be replaced on a one-to-two-year horizon. My advice was to put an API in front of these components, with a clear plan to swap in a newer component when that replacement horizon arrives. Having a well-planned API means the component can be replaced without impacting the rest of the architecture.
Also, my argument was that this way I could scale the system between on-prem (no internet connectivity needed), hybrid, or cloud.
Really it just comes down to how easily you can make changes.
A real monolith is simple, but can be difficult to deploy when lots of people are working on it and they don't always guarantee that it's in a deployable state, and that can lead to long lived feature branches, painful merges, slow release cycles, etc.
Microservices solve an organisational problem, not a technical problem. They allow pieces of the software to be deployed quickly and independently, and allow teams to focus on their own areas without having to be an expert in the entire system, but that comes at the cost of many network calls that may fail. You accept that complexity in order to scale the number of people that can work on the system.
A distributed monolith with shared modules is the worst of both worlds. You make a change in a module and then have to deploy multiple services that use it.
9:39 Linkerd is an excellent, open-source service mesh that gives observability and control across the entire call path. It essentially runs an in-pod, transparent proxy for k8s service traffic.
So in other words we just reinvented the CORBA architecture, which has been around since 1991.
Microservices are always the wrong choice because they're a technical consideration, not a product consideration. Getting a high-quality product out to your customers is more important than following the next big tech fad, and microservices do not guarantee you a strong, stable, or flexible base to work from.
Microservices solve a very specific scalability problem that you (statistically) probably do not have. By picking Microservices as a starting point, you've compromised the product for your own self-satisfaction.
"The key is to establish that common architectural backplane with well-understood integration and communication conventions, whatever you want or need it to be" -> *Solid foundation for the tech*
"The key is to give the team the direction and goal, the autonomy to accomplish it, and the clarion call to get it done" -> *Solid foundation for the people*
Chat: "Oh no, he remembered to disable alerts this time. Lets repeatedly try to break his concentration with highlighted questions completely unrelated to the post he's reading!"
A few years later it'll be "Microservices > Modules". The only everlasting answer is "it depends"!
When I created microservices in 2000, I started thinking exactly like that in 2002.
Me being ahead of the curve and writing my code modular for over 3 years
Or 15 years behind the curve. The pendulum swings back and forth constantly ;-)
As a dev and architect I will argue for a distributed monolith or microservices as the best way, because the targeted scaling you can do is great.
I enjoy the arguments about all these companies switching to microservices, kafka, and mongo as magic on improving their applications... when it was actually the fact they took 10+ years of lessons learned to just rewrite the application. Switching stacks and patterns was an excuse. They would have had just as good performance improvements had they taken the same stack they were already on and used lessons learned to get rid of all the old garbage.
Only because you mentioned OBS: this is better than the OBS overlay, imo. White text overlaid on the often-white text of the blogs being read is basically impossible to keep up with.
My role as an architect some days feels like I'm Captain Hindsight, e.g. "Yes, your app is behaving poorly because you put the wrong solution in; here's what you should have done... Oh, you won't/can't change it for various reasons? Oh well."
Best advantage of microservices is the ability to deploy changes without relying on another team and also not needing to work in whatever service language + frameworks another team chose.
One detail I also believe is often overlooked: microservices do not have to communicate over a network using HTTP. For latency-critical paths, a sidecar strategy using pipes could also be used, if the code separation is worth the effort.
pipes are still significantly more expensive than a call to a function
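A crude, machine-dependent way to see that gap for yourself: this little Go program just round-trips the same payload in-process and through an os.Pipe (two syscalls and a kernel copy per message, before any serialization is even added).

```go
package main

import (
	"fmt"
	"os"
	"time"
)

var sink byte // keeps the compiler from eliding the in-process loop

func echo(b []byte) []byte { return b }

func main() {
	const n = 100_000
	payload := []byte("ping")
	buf := make([]byte, len(payload))

	// In-process: a plain function call per message.
	start := time.Now()
	for i := 0; i < n; i++ {
		sink = echo(payload)[0]
	}
	fmt.Println("function calls: ", time.Since(start))

	// Same round trip through a pipe.
	r, w, err := os.Pipe()
	if err != nil {
		panic(err)
	}
	start = time.Now()
	for i := 0; i < n; i++ {
		w.Write(payload) // syscall + copy into the kernel
		r.Read(buf)      // syscall + copy back out
	}
	fmt.Println("pipe round trips:", time.Since(start))
}
```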
At my job the management has us just kicking the can down the road, with sufficient force to ensure it won't be our can anymore by the time we get back to it.
In the film Gaslight, he was upstairs rooting around in a boarded-up attic, using the gas lights up there, which caused the lights downstairs to dim. When his wife brought this up, he convinced her that she was going mad. He also hid things and accused her of hiding them, in pursuit of getting her locked up in an asylum and getting the house.
The company I work at attempted using submodules... and later abandoned them for an NPM-modules approach.
27:52 Just here to clarify that n orders of magnitude means 10^n, so "five to seven orders of magnitude" is 100,000x to 10,000,000x, not merely 10x (which is ONE order of magnitude).
It sounds like Netflix had a time-assignment problem then, and needs a standard timestamp library for each language used, which is then used by all devs so timestamps are consistent. I guess rolling it out would take a company-wide developer push, but it would be worth it in the long run.
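Presumably something like this hypothetical shared helper, one per language, all agreeing on UTC and RFC 3339 (timestd, Stamp, and ParseStamp are invented names):

```go
package timestd

import "time"

// Stamp renders any instant in the single company-wide wire format:
// always UTC, always RFC 3339.
func Stamp(t time.Time) string {
	return t.UTC().Format(time.RFC3339Nano)
}

// ParseStamp is the only parser services should use, so a timestamp
// written by one team's service round-trips through another's.
func ParseStamp(s string) (time.Time, error) {
	return time.Parse(time.RFC3339Nano, s)
}
```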
What's funny is this is all possible with monolithic apps too, especially web-server monoliths. You only pay for processing, and an unused HTTP endpoint on the same server costs you nothing additional. And it will have a faster spin-up time for the rare instance it is actually required.
7:28 Go to your LB and add a redirect; keep doing that until your LB dies or you lose your sanity.
And today I learned about numeric separators in JavaScript - 2023_0000 - neat!
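(Go, for what it's worth, has had the same underscore grouping in numeric literals since 1.13 -- also purely cosmetic:)

```go
package main

import "fmt"

func main() {
	const views = 1_000_000_000 // identical to 1000000000
	fmt.Println(views)
}
```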
It's the same problem as thinking only the vocal revolutionaries deserve to be heard: we have monoliths that have silently worked great for decades, but because they aren't exciting, we don't hear about them.
Thesis, Antithesis, Synthesis. That's what you were looking for at 23:30.
I think half the problem is people don't know where a microservice's purpose should start and end.
I like modules, and pulling a part out when you see it needs a different scale.
I am remaking a real-estate CRM of mine from scratch. I think the only thing I'm going to put on another service is the images. The backend code and database are going to live in the same place; I don't want the frontend to wait for the backend to wait for an external call to the service where the DB is. No way.
Let's say I want modules:
I want them memory-isolated for security (because open source), so they need to be in a separate process.
I then need them to communicate. I can use a raw data stream, but then I have to define protocols. Let's say I grab something off the shelf to help give the interface some structure. Oh wait, now I have an RPC framework.
Congrats, you now have microservices.
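A sketch of where that progression lands, using Go's stdlib net/rpc (the Greeter service is invented): what used to be a function call is now gob serialization plus a socket round trip.

```go
package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

type Greeter struct{}

// Hello is the module's remote interface now.
func (Greeter) Hello(name string, reply *string) error {
	*reply = "hello, " + name
	return nil
}

func main() {
	if err := rpc.Register(Greeter{}); err != nil {
		log.Fatal(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:4444")
	if err != nil {
		log.Fatal(err)
	}
	go rpc.Accept(ln) // the "module", one RPC framework later

	client, err := rpc.Dial("tcp", "127.0.0.1:4444")
	if err != nil {
		log.Fatal(err)
	}
	var reply string
	if err := client.Call("Greeter.Hello", "modules", &reply); err != nil {
		log.Fatal(err)
	}
	fmt.Println(reply) // hello, modules
}
```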
In five years we'll be circling back to microservices again.
When microservices were in vogue, I attended a microservices-focused conference. Even from the people presenting microservice solutions, you could feel they knew they were selling horsesh&t, with all the hacks you had to do just to make simple things work. Do people really still think this is a good idea?
People hate large monoliths because they take forever to build, test, and deploy. Often if one person messes up, it blocks everyone. Modules offer a better authoring experience than microservices, but microservices have great CI/CD. I think the ideal setup is a monorepo with shared tooling and packages, and granular build/test/deploy.
Gosh, hitting up 100 services to deliver one request?! That's some fine technical debt! Chop chop...
10:22
The saying you're thinking of is something like "creating the disease to match the cure", I think.
"We discivered a cure for the disease we created, " or something?
Timezones are the black hole of programming. Nothing escapes that level of complexity.
Hey, sorry about the question: do you have the list of plugins you use for your Vim somewhere (autocompletion and so on)? Thanks, and great video!
Doing insights, monitoring, alerting, and tracing on very large pub-sub structures can become a daunting task. It is all fun and games until you have to filter through 1_000_000_000+ events.
I suppose the modular monolith is becoming the "new trendy thing".
What is that crap name... that's the whole purpose of DDD... with DDD you can have a modular monolith for ages before migrating to microservices... I'm sick and tired of stupid new trends...
you can call DDD a "stupid new trend" as well
Very new trend from 2003... Fresh ;).
Time's the worst; try data engineering. Multiple client databases, all in different time zones, on instances that can have their own time zone. Time and nulls are always the worst.
Can confirm, I'm a data engineer. I hate dealing with time in different time zones.
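The usual survival tactic, sketched in Go with made-up inputs: parse at the edge in the source's zone, store UTC, and convert back only for display.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Zone of one hypothetical client instance.
	chicago, err := time.LoadLocation("America/Chicago")
	if err != nil {
		panic(err)
	}
	// A wall-clock timestamp as stored by that client, no zone attached.
	raw := "2023-03-12 01:30:00"
	local, err := time.ParseInLocation("2006-01-02 15:04:05", raw, chicago)
	if err != nil {
		panic(err)
	}
	fmt.Println(local.UTC()) // store this; it's unambiguous
}
```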
Strawman, but also, people are looking for a silver bullet. If there's a problem with something it's obviously not a silver bullet, so time to use something else.
I agree with the article in general, but once you have different tech stacks interacting you need microservices. Example: a mobile app with Flutter or whatever, a web app with Laravel or whatever -> how do they share data? -> through APIs. Fine, no problem, just let them call each other. But what if there are more apps? You want all these apps calling each other directly through APIs? Good luck having ANY overview once you reach more than 3 or 4 apps. Microservices solve this problem. Unfortunately, microservices are introduced way too early, including at the company I work at.
Stateless modular monoliths are essentially the sweet spot: easy to do large refactorings, and modules let you manage it like microservices without the crippling cost of redefining a logical boundary. Microservices in a monorepo can make sense in some cases if you need that level of autonomy. The "for real" polyglot, bring-your-own-pipeline/infra/monitoring microservices architecture is rarely the correct answer, in my experience.
Awesome video man, can you do a full video about your editor setup?
Leading the witness is what I think you mean. But strawmanning is also appropriate, because the opposite is steelmanning, which would be representing something in its best case.
And as we've said, there are great monolithic repos that have been expertly designed, the same way there are exquisitely crafted non-SPA websites that are better than SPAs in every way. You can strawman non-SPAs as being bad at things they don't have to be bad at, but commonly are. Same goes for monoliths or monolithic repos.