Good job! It helped me very much in the configuration of my setup! Thank you!
This video was very helpful, and I was able to implement this myself in a dev environment. I have a few key questions for issues I ran into:
1) With multiple ARR nodes, it seems that you are unable to use the “monitoring and management” features (make server unavailable gracefully, make server unavailable immediately, mark server as unhealthy, etc.) because these settings are not replicated with the shared config. I believe they are all handled in memory as part of the w3wp.exe process.
2) We have a lot of websites (30-50). It seems like in order to take advantage of the health check/failover features for each site, you need to set up a server farm per site. (This is not practical or usable for 30-50 sites, though.) I’ve set up a single farm with 2 servers, but I won’t be able to fail over a single site if only that site is failing. If a single site fails, that whole server will be marked as unhealthy and all 30-50 sites will fail over to the other server.
Is this your experience with both of these questions? What are your thoughts? Thanks!
Thinking about this a little more, maybe you are not running into issue #1 because you are using these as Active/Passive nodes? Whereas I am using them as Active/Active. Is this the case?
I was also running active/active and your observations are correct.
1 - When you mark a server in Monitoring and Management, the change is not replicated to the other host. I assume this is by design so you can have more granular control of which nodes are in rotation on which load balancers.
2 - If you want to health check each site, each one will need to have its own "farm", rewrite rules, and designated servers; the bindings can still all be added under one site. Otherwise you have exactly what you said: one site check controlling servers hosting multiple sites, and when the health check fails, all of the sites go down as the node is pulled out of rotation. While this is more of a pain to configure, it's worthwhile. I would also look to leverage PowerShell/appcmd to help simplify the deployments and ensure consistency.
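To give an idea of what scripting the per-site farms could look like, here is a rough sketch that loops appcmd over a list of sites. The site names, server addresses, and host names are all placeholders, and the exact appcmd collection syntax may need adjusting for your IIS/ARR version - treat this as a starting point, not a finished deployment script.

```powershell
# Sketch only: site names, server addresses, and host names are placeholders.
# Run from an elevated prompt; with shared config, /commit:apphost replicates to all nodes.
$appcmd = "$env:windir\system32\inetsrv\appcmd.exe"
$sites  = @("SiteA", "SiteB")      # one farm per site
$nodes  = @("web01", "web02")      # the two back-end web servers

foreach ($site in $sites) {
    # Create the farm and add both web servers to it
    & $appcmd set config -section:webFarms /+"[name='$site-farm']" /commit:apphost
    foreach ($node in $nodes) {
        & $appcmd set config -section:webFarms /+"[name='$site-farm'].[address='$node']" /commit:apphost
    }

    # Add a global rewrite rule that routes the site's host header to its farm
    & $appcmd set config -section:system.webServer/rewrite/globalRules /+"[name='Route-$site',patternSyntax='Wildcard',stopProcessing='True']" /commit:apphost
    & $appcmd set config -section:system.webServer/rewrite/globalRules "/[name='Route-$site'].match.url:*" /commit:apphost
    & $appcmd set config -section:system.webServer/rewrite/globalRules "/+[name='Route-$site'].conditions.[input='{HTTP_HOST}',pattern='$site.example.com']" /commit:apphost
    & $appcmd set config -section:system.webServer/rewrite/globalRules "/[name='Route-$site'].action.type:Rewrite" "/[name='Route-$site'].action.url:http://$site-farm/{R:0}" /commit:apphost
}
```

Driving everything from one site list like this also addresses the maintenance concern: when a site is added or removed, you change the list, not fifty hand-edited rules.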
Rob Willis thank you for your reply!
1) I suppose it’s granular, but it makes managing manual failovers a pain and error prone. If I wanted to make a web server unavailable gracefully, it needs to be done on both ARR servers; if one was forgotten or done later, web requests would still be going to the web server through the other ARR server. It would be nice to use the “make server unavailable gracefully” option to slowly drain requests out, instead of setting the web server offline at the server farm level, which would immediately terminate requests and any in-session state. How did you manage this with active/active?
2) Would you recommend the separate farms even for 50+ sites? It just seems like way too much complexity, routing rules, and maintenance. If all the sites only use the 2 web servers in active/active and I want to take 1 web server offline for maintenance, I would have to do that one by one on every web farm. I suppose a PowerShell script with appcmd could help, but even those scripts may need to be maintained, updated, and tested as sites are removed, created, or changed. Did you do this for many sites? Or just not really run into the issue?
Are you aware of any more recent on-premises solutions for web server high availability? It seems Microsoft has moved away from offering any solution for quite a few years now. Web Farm Framework is dead, ARR is not being updated anymore, and IIS seems to be becoming more limited and less of a focus. ¯\_(ツ)_/¯
Sorry to bother you; I’m just curious, as I’ve been working on this solution for a few weeks now. I’m still unsure, and I want it to be usable and approachable for other sysadmins who are less familiar with it.
It's one of those things: with added complexity in the environment comes added administrative overhead. When this happens, you can continue to do things manually (not scalable) or you can automate them. Anytime I run into issues like this, I go right to PowerShell.
If you want to drain the requests gracefully on both nodes without manually using the GUI on each node, then PowerShell is the way to go. Anything that you want to be done consistently and is repeated, should be automated or scripted.
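As a sketch of that idea: the "unavailable gracefully" action is a runtime call, so it has to be executed on each ARR node separately, which a remoting loop can do in one shot. The farm name, server address, node names, and the SetState input value here are all assumptions - the value 1 is commonly reported as the drain state, but verify it against your ARR version in a lab before trusting it.

```powershell
# Sketch only: farm/server/node names are placeholders; test in a lab first.
# The graceful-drain state lives in memory on each ARR node, so we apply it
# to every node in the loop rather than relying on shared config.
$farmName  = "MyFarm"       # placeholder farm name
$webServer = "web01"        # back-end server to drain
$arrNodes  = @("ARR-01", "ARR-02")

foreach ($node in $arrNodes) {
    Invoke-Command -ComputerName $node -ScriptBlock {
        param($farmName, $webServer)
        [void][System.Reflection.Assembly]::LoadFrom("$env:windir\system32\inetsrv\Microsoft.Web.Administration.dll")
        $mgr    = New-Object Microsoft.Web.Administration.ServerManager
        $conf   = $mgr.GetApplicationHostConfiguration()
        $farms  = $conf.GetSection("webFarms").GetCollection()
        $farm   = $farms | Where-Object { $_.GetAttributeValue("name") -eq $farmName }
        $server = $farm.GetCollection() | Where-Object { $_.GetAttributeValue("address") -eq $webServer }
        $arr    = $server.GetChildElement("applicationRequestRouting")

        # SetState is the runtime method behind the GUI's monitoring and
        # management actions; 1 is commonly documented as "drain"
        # (unavailable gracefully) - confirm for your ARR version.
        $call = $arr.Methods["SetState"].CreateInstance()
        $call.Input.Attributes[0].Value = 1
        $call.Execute()
    } -ArgumentList $farmName, $webServer
}
```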
I've never pushed ARR to 50+ web farms, but I've had around 20+ farms configured without issue. I'm not sure what the limit is. But if you want the health check for each one, that is the only way I know to do it.
These types of issues aren't exclusive to ARR; if you were using Nginx or even a physical load balancer, you would still see the same issues crop up - multiple sites, multiple pools, and multiple health checks.
When it comes to web farms if you are looking for an "easy" button, you are not going to have a good time. :)
Thank you.
Could you clarify whether this is a setup similar to what sites such as Facebook use?
Thanks for the video, it was very interesting and very helpful.
Thanks man! This is a very basic and simplified web farm that would be suited to a small-to-medium-sized site. A large site like Facebook would still follow the same basic principles - load balancers, web/app servers, and databases - but the configuration is a lot more complex and broken out. Also, you will tend to see more open source/Linux used in those types of environments. There is a heavy reliance on automation at that point as well.
Thanks, Rob.
I have always wanted to know this information and finally thought of the right way to type it into Google, which brought up your video, and others.
I have been watching them and trying to get everything together, to run some test servers on my Intel Server.
If I can get down what you showed here, then once I get my servers in this summer, I should be good. I hope.
If I have any questions during the process of setup, would it be OK to contact you off YT?
Let me know, and continue doing what you do.
Love the videos.
Wayne
Quick question.
Everything seems to be going well, and with that, I have hit a trouble spot.
You know how you have both servers pages up, and they answer to the single domain?
Mine will only show the main IIS server page, but not the other IIS2 server page, when loading it from the network; it works like it's supposed to when loading the page on the ARR server itself.
What would cause it to only work on the local ARR Server?
But, not work on the local network?
Nice! First thing to check would be the local firewall to make sure it is not blocking the traffic. Next, I would check the binding in IIS to make sure it is configured correctly.
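For anyone hitting the same symptom, those two checks can be run quickly from PowerShell. The IP, port, and site name below are placeholders taken from this thread's example setup:

```powershell
# 1. Can another machine on the LAN reach the ARR server on port 80?
#    (Run this from a different machine; 192.168.2.22 is a placeholder IP.)
Test-NetConnection -ComputerName 192.168.2.22 -Port 80

# 2. Are the inbound HTTP firewall rules enabled on the ARR box?
Get-NetFirewallRule -DisplayGroup "World Wide Web Services (HTTP)" |
    Select-Object DisplayName, Enabled, Direction

# 3. Do the IIS bindings listen on all addresses (or at least the LAN IP)?
Import-Module WebAdministration
Get-WebBinding | Select-Object protocol, bindingInformation
```

A binding pinned to 127.0.0.1 would produce exactly this behavior: the page works locally on the ARR server but not from the network.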
very good job !
Great video
Rob, hopefully, a quick question for you.
In the video, you have two ARR Servers with NLB and a single IP.
I have installed two ARR servers and installed NLB on both of them.
I have set up the ARR-01 server with the single IP 192.168.2.22.
And the ARR-02 server also shares the same single IP, 192.168.2.22.
My question is this.
#1: Did I do the right thing by installing NLB on both ARR servers, with both utilizing the same .22 IP address?
#2: When routing to these ARR Servers, how do you send traffic to them?
Do I need two routers in order for the dual ARR setup to work?
(If the answer is yes, then I will have to get rid of the second one, as I only have a single modem and router in-line at the moment.)
Wayne
Each server should have its own IP and then an additional IP is needed for the NLB VIP. If you run an ipconfig on each server, you will see 2 IPs - one for the host and then one for the VIP. Forward the traffic from the modem/router to the VIP and NLB handles the rest.
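A rough PowerShell sketch of that layout, using this thread's .22 address as the VIP - the interface name, host IPs, cluster name, and operation mode are placeholders you would adjust for your network:

```powershell
# Sketch only: interface name, IPs, and cluster name are placeholders.
Import-Module NetworkLoadBalancingClusters

# On ARR-01 (which keeps its own host IP, e.g. 192.168.2.20),
# create the cluster; the cluster primary IP becomes the shared VIP:
New-NlbCluster -InterfaceName "Ethernet" -ClusterName "ARR-NLB" `
    -ClusterPrimaryIP 192.168.2.22 -SubnetMask 255.255.255.0 `
    -OperationMode Unicast

# Still on ARR-01, join the second node
# (ARR-02 keeps its own host IP, e.g. 192.168.2.21):
Get-NlbCluster | Add-NlbClusterNode -NewNodeName "ARR-02" -NewNodeInterface "Ethernet"
```

After this, `ipconfig` on each node should show the host IP plus the 192.168.2.22 VIP, and the router only needs to forward traffic to the VIP.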
If you make changes on Web1, will your content be replicated to Web2? If so, how do you set it up?
+Rory McManus Yes, and there are a few ways to handle that. In this video I used DFS-R, but you can also use Web Deploy.
Would you mind making a tutorial with Web Deploy or sending a link to one?
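In the meantime, a minimal one-way Web Deploy sync can be sketched like this. The install path, site name, content path, and target computer name are all placeholders:

```powershell
# Sketch only: paths, site name, and target server are placeholders.
$msdeploy = "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"

# Sync just a site's content folder from this server (Web1) to Web2:
& $msdeploy -verb:sync `
    -source:contentPath="C:\inetpub\wwwroot\MySite" `
    -dest:contentPath="C:\inetpub\wwwroot\MySite",computerName="Web2"

# Or sync the whole IIS site definition (bindings, app settings) instead:
& $msdeploy -verb:sync `
    -source:appHostConfig="MySite" `
    -dest:appHostConfig="MySite",computerName="Web2"
```

Unlike DFS-R, this is a push-on-demand model, so it fits naturally at the end of a deployment script.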
Like man
like !