Wednesday, February 2, 2011

Max instances = a really big number

First off, sorry for the long delays between posts. This year I plan to be more active.
So recently I got asked about max instances in ArcGIS Server. What should I set it to? How does it relate to capacity?
Max instances is the total number of map worker processes that can respond to a map request, so if a map service has more of these, it has more capacity to respond to requests. I use the term "processes" loosely here: they are only separate processes (ArcSOC.exe) when your service is set to high isolation. In low isolation, the instances are worker threads inside a single ArcSOC.exe process. It is rarely a good idea to use low isolation, so let's just stick with the default, high isolation.

So back to the question: what should I set max instances to for a service? My recommendation is 1,000,000 or 10,000,000 or something ridiculously big like that. However, if you do this, you MUST also set the capacity on each SOC machine. Capacity is the total number of instances that a SOC machine will support. If you don't have a good idea of what to set this to, guess (maybe 50), then monitor your system. It's really difficult to get this number right out of the gate, so system monitoring is really important. If you find yourself at max instances but the machine still has headroom, raise the number accordingly. It is probably better to guess on the low side and adjust up than the other way around. 25 or 50 is a good place to start.
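The "guess low, monitor, adjust up" loop can be written down as a simple rule. This is my own sketch, not anything from ArcGIS Server itself; the function name, the 80% CPU threshold, and the +25 step are all illustrative assumptions:

```python
def adjust_capacity(capacity, instances_in_use, cpu_utilization):
    """Illustrative tuning rule (a sketch, not an Esri API):
    if every instance is in use but the machine still has CPU
    headroom, raise capacity; otherwise leave it alone."""
    if instances_in_use >= capacity and cpu_utilization < 0.8:
        return capacity + 25  # adjust up in the same step you started with
    return capacity

# Started with a low guess of 50; monitoring shows all 50 busy at 55% CPU.
print(adjust_capacity(50, 50, 0.55))  # -> 75
```

If the machine is already saturated (high CPU), the rule leaves capacity alone, which matches the advice to let monitoring, not wishful thinking, drive the number upward.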
Now back to that ridiculously big max instances number. The reason I like to set this number so high is that it lets ArcGIS Server shift instances to the popular services as it sees fit.

Let's say the capacity on your system is 10 and you have two map services, A and B.
Scenario 1: Both A and B have max instances set to 5. Let's say service A becomes wildly popular but nobody is interested in service B. The best you can do is use half the capacity of your system (5 instances).
Scenario 2: Both A and B have max instances set to 1,000,000. Again, service A becomes wildly popular but nobody is interested in service B. Now the system can focus all of its capacity where it is needed, on service A. The total number of instances used will only ever equal your capacity.
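The two scenarios boil down to a min() over demand, the service's max instances, and the machine's capacity. A quick sketch, with numbers mirroring the example above:

```python
def instances_started(demand, max_instances, capacity):
    """Instances the server can devote to one busy service: capped
    by both the service's max instances and machine capacity."""
    return min(demand, max_instances, capacity)

capacity = 10
demand_for_a = 25  # service A is wildly popular; B is idle

# Scenario 1: max instances = 5 -> only half the machine gets used.
print(instances_started(demand_for_a, 5, capacity))          # -> 5

# Scenario 2: max instances = 1,000,000 -> all capacity goes to A.
print(instances_started(demand_for_a, 1_000_000, capacity))  # -> 10
```

With a huge max instances, capacity is the only real limit, which is exactly why capacity must be set deliberately on each SOC machine.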

So I generally recommend this ridiculously high maximum approach, mostly because it is extremely difficult to figure out up front what the overall usage of a service is going to be. This will increase the amount of instance creation and destruction (which is costly), but I think it is worth it to fully utilize your system. You can also set your idle timeout to a longer interval to reduce the amount of instance turnover. I like to set my min instances to 0, my max instances to 1,000,000, and my idle timeout to about a day (86,400 seconds). That way you won't dip to 0 instances for a service unless it has gone unused for a day. This could lead to a groggy-server problem on Monday morning, but that can be mitigated with scripts, or, if you notice certain services are always hit first thing Monday morning, by setting their minimums to 1.
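For reference, those recommended settings look like this written out as plain values. The key names here are illustrative only, not ArcGIS Server's actual property names:

```python
# My recommended starting point, as plain values.
# Key names are illustrative, not ArcGIS Server's actual ones.
recommended = {
    "min_instances": 0,
    "max_instances": 1_000_000,
    "idle_timeout_seconds": 24 * 60 * 60,  # one day = 86,400 seconds
}
print(recommended["idle_timeout_seconds"])  # -> 86400
```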


  1. Usually you want the best throughput, which you get with about (number of available cores) + 1 or 2.
    With a higher number of SOCs you get more requests computed in parallel, but they will all take longer.

  2. Anonymous: the recommendation of # of cores + 1 or 2 applies to cache creation, not general use. For cache creation, # of cores + 1 is a great place to start when building your test cache. You should monitor the system while the test cache is being built to determine the correct number of instances, but # of cores + 1 is usually within 2 or 3 instances of the right number. For general use, I stand by my recommendation above: to get the best throughput, many more than # of cores + 1 should be used. And if your system becomes saturated, simply adjust the capacity for the server, not the number of instances for a service.
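For what it's worth, the cache-creation starting point in the comment above is trivial to compute. A Python sketch, using os.cpu_count() as a stand-in for however you count cores on your SOC machine:

```python
import os

# Starting point for cache-creation instances: number of cores + 1.
cores = os.cpu_count() or 1  # cpu_count() can return None on some systems
cache_instances = cores + 1
print(cache_instances)
```

This is only the opening guess for the test cache; as the comment says, monitor the build and expect to land within 2 or 3 instances of this number.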