Newsletter

Maximum utilisation before deployment

02 February 2012

We invite your comment ...

A technical question buried in the Techniques from the road article on ring dimensioning in our July 2011 newsletter has gone largely unnoticed, and a subsequent plan to air the same issue at the STEM User Group Meeting in October 2011 was foiled when the corresponding presentation was side-lined to make way for a number of guest presentations.

So we are now going to pose the question directly, with the intention of prompting comment and suggestions from readers who are sufficiently familiar with the Maximum Utilisation and Deployment mechanisms for inflating the installation of resources in a STEM model.

1. A question of the correct sequence

Consider a resource which you do not wish to load at more than 80% utilisation. Setting the Maximum Utilisation input to 0.8 will cause STEM to treat it as if its capacity were only 80% of its nominal value and install proportionately more units for a given demand.
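As a minimal numeric sketch of that arithmetic (assuming, for illustration only, that installation is a simple ceiling over the derated capacity):

    import math

    def units_required(demand, nominal_capacity, max_utilisation=1.0):
        # The resource behaves as if its capacity were derated by the
        # Maximum Utilisation factor, so more units are installed
        return math.ceil(demand / (nominal_capacity * max_utilisation))

    print(units_required(100, 10))       # 10 units, at 100% utilisation
    print(units_required(100, 10, 0.8))  # 13 units, i.e. 76.9% utilisation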

However, if you also tell STEM that the demand for this resource is distributed over n separate locations using the Deployment mechanism, then the resource will more than likely end up at less than 80% utilisation before the Maximum Utilisation constraint is even considered, with the effect that the utilisation constraint has no additional impact.

This is fine if you simply want the average utilisation to be less than 80%, but not if your intention is that the utilisation at any individual site should not exceed 80%, which would be the same result as if the resource's capacity were indeed 80% of its nominal value.

Our conjecture is that the latter interpretation is most likely the one you would be looking for and would expect. Moreover, we suggest that it may be hard to frame a problem where the current behaviour is actually desirable. The Deployment mechanism provides a much more holistic approach to estimating the overhead of slack capacity required for a geographical distribution, whereas the Maximum Utilisation concept is really more suited to the assessment of a single resource.

In the ring model mentioned above, the intention was to cap the load on an individual 10G circuit at 6G. However, because of the issue explained above, the only options were either to set the resource capacity directly to 6 Gbit/s, or to use an extra resource to capture the Maximum Utilisation effect first before passing it on to a separate resource with the Deployment constraint.
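To make the sequencing issue concrete, the following toy calculation in Python (a deliberately simple stand-in for STEM's actual Monte Carlo deployment) uses the 10G/6G figures above with an uneven spread of demand across three ring segments:

    import math

    CAPACITY = 10.0   # Gbit/s per circuit (nominal)
    MAX_UTIL = 0.6    # i.e., cap each circuit at 6 Gbit/s

    def deploy(demand_per_site, capacity):
        # Toy deployment step: each site rounds its own demand up to
        # whole circuits of the given capacity
        return math.ceil(demand_per_site / capacity)

    site_demands = [8.0, 5.0, 3.0]   # uneven load across three segments

    # (a) Deploy first, then test the cap against the average utilisation:
    units = [deploy(d, CAPACITY) for d in site_demands]            # [1, 1, 1]
    average = sum(site_demands) / (sum(units) * CAPACITY)          # 16/30 = 53%
    # 53% < 60%, so the Maximum Utilisation constraint never bites, even
    # though the first segment is carrying 8G on a 10G circuit (80%)

    # (b) Derate the capacity first, then deploy:
    units = [deploy(d, CAPACITY * MAX_UTIL) for d in site_demands] # [2, 1, 1]
    # now no individual circuit carries more than 6G (40%, 50% and 30%)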

Therefore we invite comment as to whether we should (a) add an option to choose the desired behaviour (i.e., sequence of calculations), or (b) simply alter it outright (at the risk of impacting any existing model results where both features have been used together).

Now that we have got you thinking about this topic, here are three further, loosely related questions which often pop up during training and modelling projects.

2. Deployment when the demand is an integral quantity

If demand is an integral quantity such as connections or subscribers, then the Monte Carlo distribution for resource deployment (which by design returns an output of less than one unit per site for very small demands) can produce silly results when the demand is less than one subscriber per site. In the absence of a flag to say that the demand is an integer quantity, it may well require equipment at more than one site when the number of subscribers is just one!
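As a deliberately naive illustration (a sketch only; STEM's actual distribution is more refined, and the one-subscriber-per-unit capacity here is our assumption), compare a continuous treatment of one subscriber over ten sites with an integer-aware simulation:

    import math, random

    def continuous_units(demand, sites, capacity):
        # Continuous treatment: every site carries demand/sites and must
        # still round its (tiny) requirement up to a whole unit
        per_site = demand / sites
        return sites * math.ceil(per_site / capacity)

    def integer_aware_units(demand, sites, capacity, trials=100000):
        # Integral treatment: each subscriber lands at one specific site,
        # and only sites with subscribers need any equipment at all
        total = 0
        for _ in range(trials):
            counts = [0] * sites
            for _ in range(demand):
                counts[random.randrange(sites)] += 1
            total += sum(math.ceil(k / capacity) for k in counts)
        return total / trials

    print(continuous_units(1, 10, 1))     # 10 units: one at every site!
    print(integer_aware_units(1, 10, 1))  # 1.0: one unit at the one busy site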

You could argue that such a boundary case is not really a big deal – or you could equally look for a way to flag to STEM that the calculations should be modified in some way (as yet to be specified) so as to avoid such ‘outliers’. Any thoughts or suggestions will be very welcome as we consider the first refinements to the Deployment mechanism in a very long time.

3. Maximum units per site

Another interesting issue arises when, on the one hand, you want a Monte Carlo overhead of slack per site but, on the other hand, you need to constrain the number of units per site to an arbitrary maximum, e.g. due to limited spectrum.

It is possible to use a transformation to reduce the number of sites as demand approaches the threshold, so as to prevent any units being installed in excess of such a limit, but the need for this kind of approach suggests a failing in the underlying deployment mechanism. One possible shape for such an extension is sketched below.
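Purely as a discussion aid (the max_units_per_site parameter and the choice to grow the site count are our assumptions, not an existing STEM input), a per-site cap might work like this:

    import math

    def deploy_with_cap(demand, sites, capacity, max_units_per_site):
        # Hypothetical extension: if the per-site requirement would exceed
        # the cap (e.g. limited spectrum), grow the site count instead
        per_site_units = math.ceil((demand / sites) / capacity)
        if per_site_units > max_units_per_site:
            sites = math.ceil(demand / (capacity * max_units_per_site))
            per_site_units = max_units_per_site
        return sites, per_site_units

Again, any thoughts or suggestions about the merits of such an extension will be very welcome.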

4. Allowing for network redundancy

Finally, consider a situation where dual-homed backhaul may be required for n sites in a network. If the number of connections depends on the actual number of switch boxes deployed with a Monte Carlo distribution, then you may need to use one resource first to calculate the actual switch deployment (overhead), before multiplying by two with a transformation and then using this to drive the backhaul links. Again, this starts to suggest the possible addition of some kind of per-site multiplier. On the other hand, it could be argued that such a multiplier is a real network requirement which should not be buried in a deployment constraint; a sketch of the chain follows.
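Assuming a flat 20% Monte Carlo overhead as a stand-in for the real deployment calculation (the figures here are illustrative only), the resource chain described above might look like this:

    import math

    def deployed_switches(demand, sites, capacity, overhead=0.2):
        # Stand-in for the deployment step: per-site rounding plus an
        # assumed 20% slack overhead for the geographical spread
        per_site_units = math.ceil((demand / sites) / capacity)
        return math.ceil(sites * per_site_units * (1 + overhead))

    # Dual homing: the backhaul demand is driven by the installed switch
    # count (the transformation step), not by the raw traffic demand
    switches = deployed_switches(demand=800, sites=25, capacity=48)
    backhaul_links = 2 * switches   # 30 switches -> 60 backhaul links

Comments on this and all of the above ideas are welcome.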

© Implied Logic Limited