We were saving on infra cost by reducing the number of instances in the blue bucket once green appeared stable.
From the backward compatibility POV, if your applications are creating entries in any queue or event bus, then you must make sure those events are also backward compatible. I have seen a huge mess because of this.
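To make "backward compatible events" concrete, here is a minimal Go sketch assuming JSON events on the bus; the OrderCreated type and its fields are hypothetical, not from the video:

```go
// Hedged sketch: one way to keep queue events backward compatible during a
// blue-green rollout. Changes must be additive only while both fleets are
// consuming: never rename/remove fields or change types mid-rollout.
package main

import (
	"encoding/json"
	"fmt"
)

// OrderCreated is consumed by both the blue (old) and green (new) fleets
// while the flip is in progress.
type OrderCreated struct {
	SchemaVersion int    `json:"schema_version"`
	OrderID       string `json:"order_id"`
	AmountCents   int64  `json:"amount_cents"`
	// CouponCode was added by the green version. It is optional, so old
	// consumers simply ignore it and old producers simply omit it.
	CouponCode string `json:"coupon_code,omitempty"`
}

func main() {
	// An event produced by the old (blue) fleet: no coupon_code.
	oldPayload := []byte(`{"schema_version":1,"order_id":"o-42","amount_cents":999}`)

	var evt OrderCreated
	if err := json.Unmarshal(oldPayload, &evt); err != nil {
		panic(err)
	}
	// The new consumer must tolerate the missing field (zero value here).
	fmt.Printf("order=%s coupon=%q\n", evt.OrderID, evt.CouponCode)
}
```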
Great in-depth video, sir. Just one request: can you please provide the notes for every video in a zip file? It would be helpful for us.
Thanks for the video; I was looking for something on deployment.
As usual, great explanation. Thank you.
Thanks, Arpit, for this in-depth explanation. If you don’t mind, can you briefly explain the “handling of shared services across blue-green”? What exactly is it?
Very detailed explanation
Hi Arpit, loved the depth of this video. Can you please talk about A/B deployment as well?
Awesome 😍
Hi Arpit. Thanks for the valuable information. How can we handle Kafka consumption in the case of blue-green deployment?
When we provision/shift to green infra, is the DB also replicated generally?
Or is it the same for both instances, considering there can be schema/migration changes?
Edit: had commented midway; turns out the DB is covered in limitations :) Thanks!
How are you syncing DB data, in case it's Postgres with schema changes?
Will Liquibase help here if we have schema changes?
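For the two DB questions above: a common approach (whether hand-rolled or run through a tool like Liquibase) is the expand/contract pattern, where only additive changes ship while both fleets are live. A minimal sketch, assuming Postgres and a hypothetical users table:

```go
// Hedged sketch of the expand/contract pattern for a Postgres schema change
// that both blue and green can live with. Table and column names are made up.
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Expand (before green goes live): additive only, so blue keeps working.
	// Note: nullable column, no NOT NULL, no renames, no drops.
	if _, err := db.Exec(`ALTER TABLE users ADD COLUMN IF NOT EXISTS full_name text`); err != nil {
		log.Fatal(err)
	}

	// Both versions now run against the same schema during the flip:
	// green writes full_name, blue simply ignores it.

	// Contract (only after blue is fully drained and terminated):
	//   ALTER TABLE users DROP COLUMN first_name;
}
```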
What happens to requests that are being served by the blue fleet when we flip the switch to the green fleet?
Graceful termination. We never abruptly terminate the connections. We flip, new connections go to the new cluster, and we wait for the old ones to drain.
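For a concrete picture of that drain, here is a minimal sketch using Go's net/http graceful shutdown; the SIGTERM trigger and 30-second drain window are illustrative assumptions:

```go
// Hedged sketch of "drain, then terminate" on an old (blue) instance.
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080", Handler: http.DefaultServeMux}

	go func() {
		if err := srv.ListenAndServe(); err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	// Wait for the deploy tooling to signal that the proxy has flipped.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM)
	<-stop

	// Stop accepting new connections, but let in-flight requests finish.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("drain timed out: %v", err)
	}
	// Only now is it safe to terminate this (old) instance.
}
```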
Hey Arpit, I have one question. What happens to the existing requests (the ones sent to the old green) when the reverse proxy switches the servers and blue becomes the new green? Do they fail?
Gracefully handled. When we say 100 percent of requests moving to the new setup, it implies that 100 percent of the new requests move to the new fleet.
The old ones continue to be served by the old one. Once they are completed and we see 0 requests from the proxy to the old fleet, that is when we know we no longer have any dependency on the old fleet and it can be terminated.
@AsliEngineering Thank you! Also, how do we know whether a request is complete? Is that the job of the reverse proxy? And when exactly do we send the signal to terminate the old cluster?
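One way a setup like this could work (a sketch, not necessarily what the video describes): the proxy flips an atomic flag so all new requests go to green, and keeps an in-flight counter for blue; when the counter hits zero, the tooling knows blue is drained and can be terminated. Backend URLs here are hypothetical:

```go
// Hedged sketch of an atomic blue-to-green flip at a reverse proxy, with an
// in-flight counter so we know when the old fleet has fully drained.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

var (
	useGreen     atomic.Bool  // flipped once green looks healthy
	inFlightBlue atomic.Int64 // requests still being served by blue
)

func main() {
	blueURL, _ := url.Parse("http://blue.internal:8080")
	greenURL, _ := url.Parse("http://green.internal:8080")
	blue := httputil.NewSingleHostReverseProxy(blueURL)
	green := httputil.NewSingleHostReverseProxy(greenURL)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if useGreen.Load() {
			green.ServeHTTP(w, r) // all *new* requests go to green
			return
		}
		inFlightBlue.Add(1)
		defer inFlightBlue.Add(-1)
		blue.ServeHTTP(w, r)
	})

	// Elsewhere, the deploy tooling flips and then polls the drain:
	//   useGreen.Store(true)
	//   for inFlightBlue.Load() > 0 { /* sleep and re-check */ }
	//   -> now signal the blue fleet to terminate.
	log.Fatal(http.ListenAndServe(":80", nil))
}
```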
Hi Arpit,
I have a few questions.
1. Does blue-green deployment make sense if the product is going live for the first time? I mean, there will be no old blue or green server.
2. Let's say an application is running with image version v2.0 and it's the blue server. A deployment is scheduled with the latest image version, v2.1, and now the green server is up with this latest image. In case I need to roll back to the blue server, do I really need backward compatibility? The old image is running on the blue server anyway.
1. No. You do not need a fancy setup on day 0.
2. Your code always needs to be backward compatible, irrespective of the deployment strategy used.
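To make point 2 concrete for the rollback scenario in the question: while green (v2.1) takes traffic, it may persist data in its newer shape, and after rolling back, blue (v2.0) still has to read it. A minimal Go sketch, with a made-up Profile payload:

```go
// Hedged sketch of why rollback still needs compatibility: data written by
// the new version must remain readable by the old version after the flip.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

type Profile struct {
	Name string `json:"name"`
}

func main() {
	// Written by v2.1 while green was taking traffic:
	written := []byte(`{"name":"asha","pronouns":"she/her"}`)

	// A strict v2.0 reader breaks on the unknown field...
	strict := json.NewDecoder(bytes.NewReader(written))
	strict.DisallowUnknownFields()
	var p Profile
	fmt.Println("strict v2.0 read:", strict.Decode(&p)) // returns an error

	// ...a tolerant v2.0 reader (the compatible one) does not.
	fmt.Println("tolerant v2.0 read:", json.Unmarshal(written, &p), p.Name)
}
```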
8:51 - "Prepare a fleet of servers that is warm" - Didn't understand the meaning of 'warm' here. Can you please elaborate?
Warm means the servers don't have to start or boot up when needed, as that can take time; that is called a cold start. A warm start means the servers are already running and ready to accept traffic as soon as the application is deployed.
@komalthecoolk thanks for the clarification!
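A minimal sketch of what keeping the fleet warm looks like in practice: boot the green servers ahead of time and poll a readiness endpoint, flipping traffic only once they report healthy. The /healthz URL and retry policy here are assumptions, not from the video:

```go
// Hedged sketch: verify the green fleet is warm (booted and ready) before
// any traffic is flipped, so there is no cold start at cutover.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitUntilWarm(readyURL string, attempts int) bool {
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(readyURL)
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			return true // server is up and ready to accept traffic
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(2 * time.Second)
	}
	return false
}

func main() {
	// Only flip the proxy once every green instance reports ready.
	if waitUntilWarm("http://green.internal:8080/healthz", 30) {
		fmt.Println("green fleet is warm; safe to flip traffic")
	}
}
```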
I didn't see any difference between BG and canary, except that in canary we do A/B testing. Is that correct, Arpit?
In canary you use a few servers from your existing server infra. In BG you run an entire duplicate infra for the new and old versions.
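A toy sketch of that routing difference, with made-up backend names and an illustrative 5% canary weight:

```go
// Hedged sketch: canary sends a small slice of traffic to the new version on
// (part of) the same fleet; blue-green flips 100% between two full fleets.
package main

import (
	"fmt"
	"math/rand"
)

func pickCanary() string {
	if rand.Intn(100) < 5 { // ~5% of requests try the new version
		return "new-version-pod"
	}
	return "old-version-pod"
}

func pickBlueGreen(greenLive bool) string {
	if greenLive { // single switch: all traffic or none
		return "green-fleet"
	}
	return "blue-fleet"
}

func main() {
	fmt.Println("canary routes to:", pickCanary())
	fmt.Println("blue-green routes to:", pickBlueGreen(true))
}
```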