My personal preference is complete isolation. By that I mean: use the message bus. I push events or commands when I need stuff to happen. The REST endpoints I do expose are purely for the end user to interact with my systems. When I build services I make every service completely isolated, and it only communicates by accepting messages from the bus. Every service in the project is completely unaware of any other service anywhere. There are occasions when I have to make calls out to 3rd party services, and I keep these as minimal as possible and use REST. I'd avoid gRPC at all costs, to be fair. As soon as you chain REST or gRPC calls, you introduce resilience problems that just go against the whole point of microservices for me.
I think there's a ton of nuance to this that can get glossed over. Decoupling like this, as with all things, should be done with care. When you implement something like this, you're just shifting the complexity of resolving the final result of the computation to the consumer. The "big" thing to remember when you're "simplifying" an architecture is "you're just shifting complexity from A to B". In-process complexity is a whole lot easier to manage and troubleshoot than infrastructure complexity. Direct service-to-service communication (built in a way that enforces location transparency), IMO, should be the preferred method. From there, move to an async pattern when it's warranted. If your communication enforces location transparency, then shifting to an async (or half-async) request/response workflow can be accomplished in a reasonable amount of time. Enterprise Integration Patterns and Reactive Design Patterns are both excellent tomes and wellsprings of knowledge here.
Appreciate the comment. I don't think you're moving complexity; it's about managing coupling. Direct service to service (RPC) keeps the same degree of coupling, just adding the network to the mix, which is IMO worse than if it were just in-process. In other words, if you can't control coupling in-process, you'll be worse off moving it out of process via RPC. Can you get into a shitstorm with messaging? Absolutely. Ultimately, if you can't define service boundaries based on functional cohesion, you're going to end up in a world of hurt regardless because of coupling.
@@CodeOpinion I don't get something. In the example in the video for S2S, where the request originated from a client, it needs to be synchronous. The client is waiting for data. How different is that from Query Composition?
@@igor9silva I have the same question as Igor. I don't see how replacing direct communication with an intermediary queue system solves the issue of having to deal with distributed transactions in case of a backend failure to process the request. Sure, it will minimize cases in which a request ends up in a temporarily faulty instance of the backend, but if the entire service is not able to respond, the problem remains, and you still need to put distributed rollbacks in place, except now you rely on timeouts for that.
I don't really follow. gRPC calls are inherently synchronous; gRPC uses HTTP/2 under the hood and is not meant to solve any of the problems message queues solve. If an event-based architecture doesn't make sense for your system, then debating gRPC's merits vs messaging is like assessing a fork vs a spoon for eating soup.
The value of gRPC is in bidirectional streaming and multi-language support via generated thin clients using Protobuf RPC & message definitions, which can be a huge benefit for a variety of reasons, none of which have to do with async communication.
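A minimal sketch of the "generated thin client" point above, assuming a hypothetical orders.proto contract and the modules grpc_tools generates from it (none of this is from the video):

```python
# Hypothetical contract, e.g. orders.proto:
#
#   syntax = "proto3";
#   service Orders {
#     rpc GetOrder (GetOrderRequest) returns (OrderReply);
#   }
#   message GetOrderRequest { string order_id = 1; }
#   message OrderReply { string order_id = 1; double total = 2; }
#
# Generate the thin client once per language, e.g. for Python:
#   python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. orders.proto

import grpc
import orders_pb2        # generated from the assumed orders.proto
import orders_pb2_grpc

def get_order(order_id: str):
    # Channel to the (assumed) internal Orders service.
    with grpc.insecure_channel("orders:50051") as channel:
        stub = orders_pb2_grpc.OrdersStub(channel)
        try:
            # Deadline so the caller isn't stuck waiting on a slow dependency.
            return stub.GetOrder(
                orders_pb2.GetOrderRequest(order_id=order_id), timeout=2.0
            )
        except grpc.RpcError as err:
            # Temporal coupling shows up here: the callee must be up right now.
            print(f"GetOrder failed: {err.code()}")
            return None
```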
That's the point of the video! Don't use something synchronous where you don't need to, and don't use asynchronous messaging for something that's inherently synchronous. gRPC is just the means to illustrate it.
The title of the video is misleading
@@CodeOpinion Could you relate the second and third sentences to the video? Is the "something synchronous where you don't need to" implementing gRPC to make a string of service-to-service calls? What is the topic in the video where you are using "something async that is inherently synchronous"? Are you saying don't use messaging for creating views?
@@CodeOpinion this guy has no idea what he is talking about
I'm using gRPC on my current project more or less the same way you described in the 'Query Composition' section of the video and I'm more than happy with it so far! The UI is calling a GraphQL gateway (HTTP) but the gateway talks to other services internally via gRPC. The performance is quite good and the gateway can generate the clients during a build, which makes the development workflow streamlined. Besides that, only some maintenance cron jobs call services directly (using gRPC as well), where a failure is not a big deal. Other than that I went with the 'no direct calls' principle, utilizing commands and events for communicating between boundaries. Anyway, good talk as always!
Glad it's working out for you! Thanks for the comment. It's nice to hear what folks are doing and the result.
My current workplace has a number of the issues described here. The pain is real.
That's unfortunate that you're dealing with it, but glad you commented to at least show this video is accurate 😂
It really comes down to preference. You can achieve the same result with all 3. With gRPC-Web there is basically no difference from your regular JSON API, except it uses protobuf and there is official tooling that generates code, so your services can be easily consumed from basically any language with no additional effort. I would say for web apps it's a very solid choice, maybe even better than REST and GraphQL in developer experience. The only downside is the lack of built-in decimal support.
One important difference with JSON/REST is that (versioned) schemas are completely optional, and usually not implemented or enforced properly between teams. But thanks to protobuf, gRPC "forces" teams to think about and implement interface schemas from the get-go. This is overhead inside one single team, but it can be very helpful in the long run across multiple teams / different parts of an organization.
pactflow.io/blog/what-is-contract-testing/
In the good old days we just made a direct call, and if it failed we showed a message that the server is currently unable to handle the request, please try again later.
In the case of asynchronous queries, you have to deal with so many things. For example, the user can do other actions, and you somehow have to check that the action is allowed and that it's not interfering with pending actions. This is really crazy stuff
It sounds like a bloated version of MIDI over TCP
gRPC endpoints make sense where you today have rest endpoints. It has much better performance, allowing you to handle more with less, and subsequently even more with more. Use it on the edges of your service where performance matters. Offer it as integration protocol to third parties etc.
Ya, I wanted to mention that but totally forgot: as the inbound integration for 3rd parties.
I use gRPC for populating new services: when I add a new service with a blank database, I call the source of truth and ask for the data that the new service needs. This way I'm not flooding the message bus with requests, and it's not as slow as HTTP.
The issues of scale and observability are not addressed. The idea of a request and response queue fails hard when we introduce message validation at scale.
I have an implementation example. Say I have a fleet of cars. I need to provide an endpoint over an API to allow 3rd parties to request that I start the car, and the response needs to be either 200 or 4XX. Internally there is a service that handles the communication directly with the car, one that abstracts the interaction between different brands of cars (vehicle service), and one that tracks trips. I need the ability to query the state of the car or vehicles, but I also need to be listening for the events without having 100% coupling. Would you still use an event queue? If the user's REST client terminates or if the car goes offline, that message to start can't end up in a dead letter queue; the trip/rental needs to be canceled.
I think you are discussing messaging-based implementations here. What about a REST API where the services communicate with each other?
Even with a message queue, if a required service is offline the client still won't get back the data it requested, which for the end user still feels like a failed call.
This is where RabbitMQ RPC or fire-and-forget comes in.
This is what ClientRMQ and ServerRMQ are built on in NestJS.
By using RxJS observables you can incorporate retries and timeouts on any message transmission.
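For readers unfamiliar with the pattern being referenced: request/reply over RabbitMQ uses a reply queue plus a correlation id, and the caller enforces its own timeout. Below is a rough sketch of that pattern with Python and pika rather than NestJS; the queue name, payload, and 5-second timeout are illustrative assumptions, not anyone's actual setup.

```python
# Request/reply over RabbitMQ (the pattern ClientRMQ/ServerRMQ wrap), sketched with pika.
import json
import uuid
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Exclusive, auto-named queue to receive the reply on.
reply_queue = channel.queue_declare(queue="", exclusive=True).method.queue
corr_id = str(uuid.uuid4())
response = None

def on_reply(ch, method, props, body):
    global response
    if props.correlation_id == corr_id:  # match the reply to this request
        response = json.loads(body)

channel.basic_consume(queue=reply_queue, on_message_callback=on_reply, auto_ack=True)

channel.basic_publish(
    exchange="",
    routing_key="orders.get",  # the "server" side consumes this queue (made-up name)
    properties=pika.BasicProperties(reply_to=reply_queue, correlation_id=corr_id),
    body=json.dumps({"order_id": "42"}),
)

# Wait for the reply, giving up after ~5 seconds (the timeout the comment mentions).
connection.process_data_events(time_limit=5)
print(response if response is not None else "timed out: retry or fail the request")
connection.close()
```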
Good explanation.
I have a question regarding where to use it:
1. If we use gRPC, are we not repeating model information? Once in our .NET or Java application and the same model again in the protobuf file?
2. If I use it in an Authentication API, is the flow like below?
Ocelot -> Authentication Microservice -> gRPC client code -> gRPC server
The main point here, I think: if you cut your domain wrong, it doesn't matter what you use. It will be hard in any case 😀
What about between a regular service and a proxy?
Sure. I'd think of that as infrastructure.
How would messaging solve 'on the fly complex calculations' on data which belongs within the originating service boundary, where that calculation logic also naturally belongs inside that originating service?
Putting plain data on the bus is easy, but I don't know about calculations based on dynamic input from downstream services.
Would you duplicate and move this logic to the downstream services? Publish a NuGet package with that logic from the originating service, to be used in downstream services on top of the data that was published on the bus?
Isn't an API much easier in that case?
Of course there's the trade-off that there is again temporal coupling.
Yes, there are many situations where request/response is required. Not everything can be async or fits that model. In those places, I generally always have a fallback in case the service being called is unavailable.
In the situation where he talks about messaging being preferred... is it clear that gRPC can be used for messaging? a la Pub/Sub?
Hi Derek, what tool do you use to create diagrams?
For my video slides? Just powerpoint.
In the case of message queues, when one service calls another, and then the second calls another, and the 3rd or 4th service fails: what's the difference in failure handling between that and gRPC? Isn't state going to be inconsistent for the services that called each other using message queues as well?
No, because you're designing the workflow/process to be async. If the 4th service fails, you can orchestrate a timeout (delayed delivery) so that if the entire workflow hasn't completed within X period of time, you start issuing compensating actions. How you handle failures and resiliency is drastically different than if you're using something like gRPC.
@@CodeOpinion Understood Thanks 👍
@@CodeOpinion You can also orchestrate a timeout when you call a gRPC service. So let's say you call service A, then service A calls service B, and so on; then it gets to service D and it takes too much time. Service A can cancel the operation without any problem because there is a timeout parameter you can set when you call a gRPC method. So I'm not sure why you say it's worse to call a gRPC service from another gRPC service; it's exactly the same thing as if you would use a message queue, because you can set a timeout parameter... ?
You also say that you're designing the workflow/process to be async, but when you call gRPC service B from gRPC service A, it is still an async process (you use await), and in the meantime, if gRPC service A receives other requests, it can still complete them in parallel. So how is it different?
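To make the "workflow timeout plus compensating actions" idea from the reply above concrete, here is a rough, self-contained sketch. The message types, step names, and fake bus are all hypothetical; a real system would schedule the WorkflowTimeout via the broker's delayed delivery rather than calling the handler directly.

```python
# Sketch of async-workflow failure handling: an orchestrator tracks the workflow
# and a delayed "timeout" message triggers compensation instead of a caller
# blocking on a chain of RPCs.
from dataclasses import dataclass, field

@dataclass
class OrderWorkflow:
    order_id: str
    completed_steps: list = field(default_factory=list)
    finished: bool = False

    def handle(self, message: dict, bus) -> None:
        kind = message["type"]
        if kind == "PaymentCaptured":
            self.completed_steps.append("payment")
            bus.send({"type": "ReserveStock", "order_id": self.order_id})
        elif kind == "StockReserved":
            self.completed_steps.append("stock")
            self.finished = True
        elif kind == "WorkflowTimeout" and not self.finished:
            # The whole workflow didn't complete in time: undo what did happen.
            for step in reversed(self.completed_steps):
                bus.send({"type": f"Compensate_{step}", "order_id": self.order_id})

class FakeBus:
    def __init__(self):
        self.sent = []
    def send(self, msg):
        self.sent.append(msg)

bus = FakeBus()
wf = OrderWorkflow("42")
wf.handle({"type": "PaymentCaptured"}, bus)   # step 1 succeeded
wf.handle({"type": "WorkflowTimeout"}, bus)   # stock never got reserved in time
print(bus.sent)  # ends with a Compensate_payment command
```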
To decouple, you just need to define some rules and types of services. A "regular" service can't call other gRPC services. A "meta" service can only call "regular" gRPC services. Maybe there's a "super meta" service type which can only call "meta" services, but not regular or other super meta services.
Examples:
Regular - a user service
Meta - messaging service
Super meta - a debugging tool
Would love to hear feedback about this idea that just popped onto the top of my head
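One way to picture the tiering idea proposed in the comment above: encode which service types may call which, and fail fast on violations (for example in a CI check or a client wrapper). The tiers and service names below are just the commenter's examples; this is an illustration of the idea, not an endorsement of it.

```python
# Allowed call directions per tier, per the comment's proposal.
ALLOWED_CALLS = {
    "regular": set(),            # a regular service calls no other gRPC services
    "meta": {"regular"},         # a meta service may only call regular services
    "super_meta": {"meta"},      # a super-meta service may only call meta services
}

SERVICE_TIERS = {
    "user-service": "regular",
    "messaging-service": "meta",
    "debug-tool": "super_meta",
}

def check_call(caller: str, callee: str) -> None:
    caller_tier = SERVICE_TIERS[caller]
    callee_tier = SERVICE_TIERS[callee]
    if callee_tier not in ALLOWED_CALLS[caller_tier]:
        raise RuntimeError(
            f"{caller} ({caller_tier}) may not call {callee} ({callee_tier})"
        )

check_call("messaging-service", "user-service")   # ok
check_call("user-service", "messaging-service")   # raises: regular calls nothing
```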
Very well said about distributed monolith, makes perfect sense. Thank you for taking a deeper look into things, it's always refreshing to see this level of critical thinking and analysis, very much needed in the field.
IMHO: We have to think about design decisions before separating services to a certain granularity. It's a critical skill to draw boundaries, and these concepts fall into place on a more case-by-case basis
It would be helpful if you posted a part 2. gRPC is great for internal use. REST is great for external use.
gRPC has superior performance.
gRPC supports streaming.
gRPC has superior support for message definition and service endpoints.
gRPC has superior interoperability.
gRPC has superior support for TLS...so much so that it is encouraged.
Like anything else one would need to conduct a proof of concept within an organization to determine best fit.
If it's for external consumers, sure, gRPC. Internally, it's as I mentioned in the video: UI composition and infrastructure, not service to service (where applicable).
You can also set up your gRPC services to provide REST. One problem with gRPC is passing a big amount of data; it has to be solved with something custom.
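A common shape for that "custom" large-payload handling is a client-streaming RPC that sends the data in chunks rather than one huge message. This is a hedged sketch: the files.proto contract and the generated files_pb2* modules are assumptions for illustration.

```python
# Assumed contract (files.proto):
#   service Files {
#     rpc Upload (stream Chunk) returns (UploadReply);
#   }
#   message Chunk { bytes data = 1; }

import grpc
import files_pb2        # generated from the assumed files.proto
import files_pb2_grpc

CHUNK_SIZE = 64 * 1024  # keep each message well under gRPC's default 4 MB limit

def chunk_file(path: str):
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            yield files_pb2.Chunk(data=chunk)

def upload(path: str):
    with grpc.insecure_channel("files:50051") as channel:
        stub = files_pb2_grpc.FilesStub(channel)
        # Client-streaming call: pass an iterator of request messages.
        return stub.Upload(chunk_file(path), timeout=30.0)
```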
Too simple conclusions. Superior to what? Only REST? But REST helps with other things. If you want superior performance you would use UDP or similar. But then again, that also depends.
Question: gRPC is a binary format, so is it strict serialization compared to JSON serialization? Will a minor change in the contract change the binary sequence with the client?
Yes, versioning requires due diligence. But that's the case with any backward compatibility.
Hello guys! I'm interested in the pros and cons of monolith/gRPC/GraphQL/messaging services, especially when implementing AI/LLM/RAG.
Please don't hesitate to recommend fresh IT architecture books/cookbooks with examples. Thanks!
Why are you using gRPC over REST in the examples of API gateway to service, or message broker to service? Is it because gRPC is faster?
How about gRPC vs RESTful?
gRPC and REST can go hand in hand. For the same endpoints where you have REST, you can add gRPC for added performance. gRPC struggles with tooling; much better tools are available for REST at this point in time.
It seems like something you'd use when you don't want the bloat of HTTP but also don't want to implement a crazy non-standard protocol over raw TCP sockets. But it seems like such a pain in the ass due to the protocol buffer ordeal that I'm inclined to think it's better to just have that slow HTTP and call it a day.
What are your thoughts on RSocket?
None really. Haven't used it. Have you used it?
So, using gRPC from a service to an external service (third-party API) would be cool, right?
Yes, it's the best use case
Yes, likely a good candidate.
thx for the reply to both!
Would be nice to have an overview of gRPC behind the scenes. Then we can understand some of the +/-.
The video is ... trash. gRPC is a remote procedure call, yes. But, with proper implementation, you can think of it simply as "does everything REST does, but better in every single way". I won't spend too much time on this, just a quick list:
- Encodes messages in binary and compresses them. Think of JSON, but you don't need to send the key-value pair of name and value. You send the name once and shove in as many values as you want. Repeating values can be referenced to save bandwidth
- HTTP 2.0 streaming (tl;dr - even faster, loads of binary compressed data)
- Decodes the message on the other end
- Generates objects for your given language
- Loads the data into those objects
- Handles instantiation and communication
- offers hooks for you to modify behaviour
It's not like I expose a method in my code and just let you push buttons, no, I do exactly what you do when you build a REST client. Mine's just faster, more reliable*, less bandwidth.
* I'm actually not sure it's more reliable. I'm too lazy to google right now. So disclaimer.
DDD: I would use gRPC mainly for client calls or calls that depend on speed. Domain events should not be that dependent on speed. If you still want speed for domain events, gRPC should be good as well. I do not agree with the assumption that gRPC is not a good choice for clients due to failure. You can handle those failures.
Can gRPC not be used in an Event Bus?
I think the main concern here is that when you communicate between services using gRPC, you don't really know how many extra gRPC calls will be made behind the initial gRPC call you started, because one can hardly know the implementation details of all those gRPC calls beyond the microservice one developed oneself.
Starting from 5:34, you mentioned how the situation would be different when the process originates from an event, because you can always retry before acking that message.
However, I would still avoid using gRPC to communicate between services even if it originates from an event, because the same problem of latency piling up still exists.
I would prefer to use gRPC within the boundary of the service, but communicating across different processes; in the case of k8s, that would be different pods.
As for the query composition part, how do you feel about using event sourcing to compose the read model into projections, so that no bunch of gRPC calls to different services is needed for every query to the read model?
Regarding whether it's originating from a message: absolutely, I'd still avoid it to other INTERNAL services for the exact reasons outlined. External services are a different story, as you don't usually have much of a choice. Where this is applicable internally is sometimes when you're doing callbacks to a service to fetch data, e.g., instead of doing event-carried state transfer.
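A minimal sketch of the event-carried state transfer / projection idea raised in this exchange: instead of calling back into another service on every query, the consuming service keeps a local read model that it updates from published events. The event shapes and data here are made up for illustration.

```python
# Read model owned by *this* service, kept in sync from the bus.
local_customers = {}

def on_event(event: dict) -> None:
    if event["type"] == "CustomerRegistered":
        local_customers[event["customer_id"]] = {"name": event["name"], "active": True}
    elif event["type"] == "CustomerDeactivated":
        local_customers[event["customer_id"]]["active"] = False

# As events arrive from the bus, the projection stays (eventually) consistent...
on_event({"type": "CustomerRegistered", "customer_id": "c1", "name": "Ada"})
on_event({"type": "CustomerDeactivated", "customer_id": "c1"})

# ...so queries are answered locally, with no gRPC fan-out and no temporal coupling.
print(local_customers["c1"])  # {'name': 'Ada', 'active': False}
```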
I think gRPC is great when you need data immediately to compose a UI or to grab data instantly and if the situation does not allow delivery through a back channel like websockets.
Of course there are different situations for that. It's a no-no if the gRPC request would span multiple instances, but on the other hand, trying to squeeze every situation into a message broker, or into request/reply through a message broker, also feels like searching for nails when you have a hammer. It has its use cases, but I would agree that most of the time, event-based approaches are superior.
I assume another good distinction for when to use it and when not to would be whether the response would still make sense if a service broke down. If you have a query the customer wanted right away, would your service have a feasible way to redeliver it with an event that might have gotten stuck in between?
Yes, when you immediately need a response, gRPC. The "need" part is the tricky part.
So in general, using a sync protocol (RPC, HTTP) for communication between services within the same perimeter should be avoided; it's an antipattern, isn't it?
Not necessarily; it's the simplest form of integration. It's the gathering of scattered data across perimeters that you want to avoid. Ideally the data you require for your service is close to your service and you don't make calls all over the place. This holds true for all protocols: SOAP, REST, gRPC.
Bingo!
Yes. My rule of thumb is to use messaging wherever possible for mutating data across services (for all the reasons mentioned in the video). For querying data, I try to keep any data I need close to the service via replication. For data that's less important (perhaps nice to have, but the service can partially function if it's not available), I'll make the cross-domain query.
Of course this is a rule of thumb, and sometimes the requirement is that you need to make a synchronous call, for instance if you cannot tolerate any stale data (and if you run into this situation often, you probably separated domains that shouldn't have been separated).
good talk derek!
Thanks!
Would love a video on your take on viewmodel composition when sourcing data from various services
@@orialmog Yup it's on my list of topics.
I have a problem with the title of this video. The problem presented isn’t with gRPC but with synchronous workflows. If you are thinking about gRPC you’re likely looking to replace for example a Restful API. This video isn’t that helpful in comparing the two.
Side note with message queuing, it was mentioned that with a distributed workflow it is difficult to maintain atomicity without a distributed transaction coordinator. I don’t see how a message queue helps this unless your strategy is to force rolling forward the operation.
My experience is that asynchronous workflows add a tremendous amount of complexity in cases where if there was a problem, sometimes the solution is simply to retry the operation.
Yes, absolutely this is about synchronous workflows. I called out gRPC because of other video/blogs that reference it as the "standard" for service to service communication.
Instead of comparing gRPC, REST and GraphQL you're actually comparing different communication architectures. If the broker would accept requests via gRPC it would be a very valid combination of a broker architecture and gRPC.
Correct. Temporal Coupling ultimately.
i love this content!
Hey, I remember this from when I started programming in the 80's! Looks just like Sun-RPC, CORBA, DCE, but different. I guess when you need to feel like you've done something it doesn't hurt to reinvent the wheel. 😁
I'm only going to mention that it's irrelevant whether it's gRPC or any other P2P comm, however I'd like to comment on some conceptual statements you've made in this video.
The stressing on where the communication originates from is misleading, in my opinion. It does not matter what initiates a process: RPC always brings in temporal and spatial coupling (which is bad). Your point of "if Service A handles a message then it's OK for it to RPC Service B" (apart from temporal and spatial coupling concerns) leads to a question: how do you deal with it if Service A or Service B emits messages, makes calls to other services, or makes changes to datastores as part of this request handling? Say Service B calls its datastore, increments an "access count" value in there and then publishes a message, but Service A fails and the whole chain of changes needs to be rolled back? Well, there is no way you can do that, period. I can think of many other similar problems here. To me, what you're saying here is akin to "Distributed transactions are OK if you use them for use case A, but not for use case B". No, distributed transactions are never OK, you will never conquer them, you will never be 100% free of side effects, and sooner or later they will bite you bad.
What you are *not* clearly explaining in this video, I believe, is the notion of autonomy and boundaries. All of this comes back to Domain-Driven Design, Bounded Context and, most of all, the Aggregate concept. When aggregates are properly designed, all change happens within aggregates, with no external communication to other aggregates. This is how you get atomicity for your transaction. Then you can announce the changes you made to your state to everyone else (hello EDA) and let other aggregates worry about themselves. For complex processes requiring multiple aggregates there is the concept of a Saga (which I see you've got a video on), but that again encapsulates any change performed by a single aggregate within its internal logic.
If I may suggest - please do not simplify concepts that should not be simplified. Those who lead should be able to understand these intricacies because it's a matter of success vs failure for large distributed systems. Well, there are many other ways to fail at that, but not being able to understand autonomy is a big one.
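A tiny sketch of the aggregate idea described in the comment above: the state change and its invariants live inside one boundary, and only a fact (an event) leaves it afterwards. The names and event shape are illustrative, not taken from the video or the comment.

```python
class ShoppingCart:
    """A minimal aggregate: invariants enforced locally, events announced after."""

    def __init__(self, cart_id: str):
        self.cart_id = cart_id
        self.items: list[str] = []
        self.checked_out = False
        self.pending_events: list[dict] = []

    def check_out(self) -> None:
        # Invariants are enforced here, atomically, with no external calls.
        if self.checked_out:
            raise ValueError("cart already checked out")
        if not self.items:
            raise ValueError("cannot check out an empty cart")
        self.checked_out = True
        # Announce the change; other aggregates/services react on their own terms.
        self.pending_events.append({"type": "CartCheckedOut", "cart_id": self.cart_id})

cart = ShoppingCart("c-1")
cart.items.append("sku-42")
cart.check_out()
print(cart.pending_events)  # events to publish after the local transaction commits
```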
Thanks for the (long) comment. I could reply, but I have a number of videos that I think would do a better job. A good number of them talk about logical boundaries, event choreography, orchestration, coupling, cohesion; the list goes on.
@@CodeOpinion Yes, I looked at some of the other videos just to see what your wider views are. What would be helpful to those who learn from your videos is for you to have references to particular details explained elsewhere - eg. "for this watch that video" sort of comments. If I were to watch only this video then I'd be getting a wrong impression of what's what with this subject. That's why I've added my comment here - simply to highlight the inconsistencies I've noticed while watching this video and, hopefully, make others continue looking in order to form a broader understanding of the subject.
To me it seems like they choose gRPC because it is supposed to be a "more efficient" alternative to HTTP/JSON. So they pick it before asynchronous messaging, mainly because they don't get event-based architectures as a way of reasoning about a system. They are still in the realm of synchronous calls.
I would rather do RPC via broker, using MassTransit, just to keep everything consistent. Why add another protocol into the mix?
Yes. I'm sometimes lazy, sometimes I prefer Web APIs for simple fetches instead of adding a new consumer. But my views are evolving.
I think some of it is just thinking in a procedural way as to why synchronous calls are common.
@@CodeOpinion True. That is how programming is being taught: procedural. It is another model that takes some thought and time to get into. And programmers are lazy, choosing what is familiar or, in the case of new stuff, what's popular and tested by others, in particular by big firms. That is why so many chose React for UI.
We are using gRPC clients from all services to one gRPC server service that is responsible for messaging.
Interesting. So a sidecar in some sense but that's used for all services.
@@CodeOpinion Our use case is 2-way sync with external data (Salesforce). I suppose the gRPC server host could be considered a sidecar, but it's really the heavy lifter in our system. The problem we are trying to solve with gRPC is the communication of formula results as data is passed between Salesforce (where a ton of business logic and internal controls exist) and the services that do different things (decision engines that can't live in Salesforce). We can't exactly poll Salesforce from the decision engines AND keep data synced. So that gRPC channel stays open until the server closes it. Make sense?
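For readers picturing the "channel stays open until the server closes it" part: that maps onto a server-streaming RPC the client simply iterates until the server completes the stream. The sync.proto contract and generated sync_pb2* modules below are assumptions for illustration, not the commenter's actual code.

```python
# Assumed contract (sync.proto):
#   service Sync {
#     rpc Subscribe (SubscribeRequest) returns (stream FormulaResult);
#   }

import grpc
import sync_pb2        # generated from the assumed sync.proto
import sync_pb2_grpc

def listen_for_results():
    with grpc.insecure_channel("sync-host:50051") as channel:
        stub = sync_pb2_grpc.SyncStub(channel)
        try:
            # The iterator blocks between results and ends when the server
            # completes (closes) the stream from its side.
            for result in stub.Subscribe(sync_pb2.SubscribeRequest(topic="formulas")):
                print("received formula result:", result)
        except grpc.RpcError as err:
            # e.g. the stream was cut; the caller decides whether to resubscribe.
            print("stream ended with error:", err.code())
```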
Better performance aside, gRPC has one thing I feel a lot of people gloss over, and that is easily shared data models. Instead of having to represent models in different languages (if your architecture is structured that way), you can have a single representation of your models and have each type easily compiled into each language. Represent the data once, then let the proto gen cross-"compile" it into the language you are using.
OpenAPI does the same for REST APIs
Now I'm more confused than 10 minutes ago.
Sorry?
Worked at a company where we moved all our internal endpoints to gRPC, because the staff engineer that proposed it wanted it as our backend communication protocol since it is language agnostic, while our whole backend stack was written in Java or in the process of migrating to Java.
The engineer that proposed it got a promotion, while the migration to gRPC provided no business or engineering value other than costing the company millions in lost engineering hours.
amazing thanks!
Thanks for watching it
Nice!
Downvoted. Your distributed turd pile is not a reason to not use gRPC; it's simply a reason to not write tightly coupled services. This opinion is conflating technologies with architectures, and literally everything you've described here can be accomplished with or without gRPC, with or without the queues. And as an aside, let's just ignore the fact that request/reply with async messaging is infinitely more complex than simply "request and reply queues".
Thanks for the comment. Indeed, I'm not actually talking about gRPC at all. The point was "service to service" RPC; that could be anything besides gRPC. Absolutely, messaging has its complexities and is not intended to be used where sync request/response is more appropriate. However, I still stand by the point that service-to-service communication (where a service is the authority of a business capability) is not generally appropriate because of the tight coupling.
I don't mind the down vote as you entirely got the gist of the video.
@@CodeOpinion might wanna rethink the title and the comments about "when not to use gRPC" then
Possibly. But I'm addressing direct questions and comments I get from viewers specifically about gRPC. While in a vacuum I understand your point, in the context of all of the videos on my channel and their comments, where I talk more abstractly on purpose, you might see why I'm calling it out specifically.
Honestly, I do appreciate the feedback and your time commenting. It is difficult to have a video that conveys my intent. If you're someone who watches a bunch of my videos, I think you'll get my point. Unfortunately I also need to think of new viewers and how they might interpret it. It's a balancing act and I'm trying. Appreciate the feedback
Great video. Totally agree.
HELP
Help?
@CodeOpinion Author, you have a mess in your head. You're comparing apples and oranges.
jRPC anyone?
Not a very good explanation; no comparison to REST.