Tuesday, October 11, 2011

Max instances = a really big number (Update)

I am really glad I wrote the Max instances = a really big number blog post back in February. One great thing about it is that it started some really good discussions on this topic. Some friends called me crazy, but when we sat down and really talked through different scenarios with max instances set really high, my recommendations held up. However, those discussions also revealed a couple of recommendations that I would like to change (a.k.a. I was wrong). The two changes I would like to make are:
  1. Set idle timeout much lower than 86400
  2. Avoid using capacity on SOC machines
First, let's discuss why I changed my mind on the idle timeout (the maximum time an idle instance can be kept running). If you need a refresher on this setting, the Tuning and configuring services topic in the online help is a good place to start. I said that I set my idle timeout to 86,400 seconds, or one day. I did this so I could set the min instances to 0 but still avoid slow responses when people hit the service first thing in the morning. The trouble is that an idle timeout of 86,400 ends up wasting a lot of RAM on the server because unused instances are not cleaned up fast enough. So I have changed my thinking, and now I believe setting min instances to 1 is the best way to avoid those early morning slow responses. Then you can just take the default for the idle timeout. If you have a service where you want to set the min instances to 0 (maybe it is used very infrequently), set the idle timeout higher for that service, for example to 4 hours (14,400 seconds). I wouldn't go over 8 hours (28,800 seconds), because beyond that unused instances just don't get cleaned up fast enough.
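To make that concrete, here is a minimal sketch of the two profiles I have in mind, plus the rough arithmetic behind the RAM argument. The per-instance memory figure (150 MB) and the instance counts are made-up examples for illustration, not measured values.

    SECONDS_PER_HOUR = 3600

    # Frequently used service: keep one instance warm and take the default idle timeout.
    busy_service = {
        "min_instances": 1,          # avoids the slow first request of the morning
        "idle_timeout_sec": None,    # None here just means "leave the product default alone"
    }

    # Infrequently used service: let the pool drain to zero, but keep instances around longer.
    rare_service = {
        "min_instances": 0,
        "idle_timeout_sec": 4 * SECONDS_PER_HOUR,   # 14,400; I would not go past 28,800
    }

    def ram_held_by_idle_instances(idle_instances, mb_per_instance=150):
        """Rough RAM tied up while leftover instances wait out the idle timeout."""
        return idle_instances * mb_per_instance

    # Twenty instances left over from a busy period hold roughly 3 GB until the timeout
    # expires; with an 86,400-second timeout that memory can sit wasted for up to a day.
    print(ram_held_by_idle_instances(20), "MB held by leftover instances")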
SOC capacity is the other recommendation I would change. Again, if you need a refresher on this setting, the Limiting the load on the server with the Capacity property topic in the online help is a good place to start. In my original post I said that you have to set your SOC capacity when setting your max instances to a large number. Now I say, leave your SOC capacity set to unlimited unless you have SOC machines with different levels of capacity. So if one SOC machine has half the CPU and RAM of another, I would set the capacity on the smaller machine to avoid outstripping its resources. In most circumstances your SOC machines should be identical anyway; this is another benefit of virtualization, where SOC VMs can be cloned, making them identical.
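If you are stuck with a smaller SOC machine, one way to pick its capacity number is to work back from its RAM, since that is usually the resource it runs out of first. This is just a back-of-the-envelope sketch; the overhead and per-instance figures are placeholders you would replace with what you actually measure on your own services.

    def soc_capacity(total_ram_mb, os_and_overhead_mb=2048, mb_per_instance=150):
        """Rough count of SOC instances that comfortably fit in this machine's RAM.

        mb_per_instance is a placeholder; measure the working set of your own services.
        """
        usable_mb = total_ram_mb - os_and_overhead_mb
        return max(1, usable_mb // mb_per_instance)

    # The undersized SOC VM gets a capacity; the full-sized machines stay unlimited.
    print(soc_capacity(total_ram_mb=8 * 1024))   # roughly 40 instances for an 8 GB machine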
When your SOCs have the same hardware capacity, you should not set the capacity property. When capacity is set and then reached, pool shrinking kicks in, and pool shrinking puts a significant load on the system because instances must be destroyed and re-created to move them around. So the act of pool shrinking itself reduces the throughput of your server. This is the point I missed in my first post: I didn't account for the overhead of pool shrinking.
Setting capacity is still a necessary evil in the unbalanced SOC case I mentioned above, and in the case where you don't have enough RAM on your SOC machines. But if the real problem is RAM, I say "BUY MORE RAM!" Memory is incredibly cheap, and it just doesn't make sense to work around a problem that is so easily fixed with an inexpensive hardware change.
I stick by my recommendation to make the max instances a really big number (like 10,000,000), and I have even more reasons why this is good. Take an example where you have a two-tier system: web and SOM on tier 1 and SOC on tier 2. This is usually the best configuration for production systems because it allows you to scale out your SOC tier without taking the system down. In this case, if you set your max instances based on the number of cores on the system (the most popular calculations are 1 x cores for file geodatabase data and 2.5 x cores for enterprise geodatabase data), then every time you add a SOC machine the core count changes and you have to update every service. So in this scenario you have to take an outage just to update the max instances.
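Here is a small sketch of why the core-based formulas turn into a maintenance chore in that two-tier setup. The service names and core counts are invented for the example.

    def max_instances_by_cores(total_soc_cores, enterprise_gdb):
        """The popular rules of thumb: 1 x cores for file GDB data, 2.5 x cores for enterprise GDB data."""
        return int(total_soc_cores * (2.5 if enterprise_gdb else 1.0))

    services = {"Parcels": True, "Basemap": False, "Hydrants": True}  # made-up services

    for cores in (16, 32):   # add a SOC machine and the tier's core count doubles
        print(f"SOC tier cores = {cores}")
        for name, uses_enterprise_gdb in services.items():
            print(f"  {name}: max instances = {max_instances_by_cores(cores, uses_enterprise_gdb)}")

    # Every number above changes when the SOC tier scales out, so every service has to be
    # edited (an outage just to update max instances). Setting max instances to a really
    # big number like 10,000,000 sidesteps the whole exercise.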
If you can't scale the system for cost reasons, you could set max instances for a service to keep the system from reaching capacity. But in this case you are sidestepping the real problem and passing poor performance along to your users. I could see using max instances this way as a stopgap measure until you have a chance to scale your system, but once you do, make sure to set your max instances back to a big number.
It is important to understand that there are rarely any absolutes when it comes to optimizing your system. You always have to consider the landscape to make a good choice. I just hope this gives you some more information to make the best choice.
