STEM delivers huge benefits through its ability to rapidly generate reliable and
detailed calculations for dynamically developing model structures. However, this
productivity gain comes with a ‘black box’ aspect which may concern those accustomed
to tracing calculations line-by-line in a spreadsheet. This latest technical article
explains the meticulous preparation which enables STEM to lead both modeller and
client to the most critical calculations in a model.
Why did it do that?
Business-case models are designed to encapsulate the complexities of service description,
network dimensioning and lifetime costing in order to explore the impact of strategic
decisions on business outcomes. Almost by definition, the results of any significant
model may occasionally defy immediate explanation. In a spreadsheet, you can always
trace through precedent arrows to analyse the relevant calculation path(s)
methodically, but this may involve interpreting and assessing myriad formulae
and near-ubiquitous cell references, which can be especially challenging if
you are not the author of the sheet in question.
In contrast, the consistent calculations in a STEM model significantly simplify
the domain of immediate influence on a given result. In principle, a clear view
of model structure (almost guaranteed by STEM’s intuitive iconic presentation),
coupled with a thorough understanding of standard algorithms, is all you need to
understand even the most unusual result. However, modern requirements for openness
and review demand that the software should speak for itself when necessary, and
this is the philosophy behind the intrinsic audit system which is built into the
STEM model engine.
Consider the following example. A Service with demand growing by 1000 every year
from 2004 requires a resource with capacity 400. Hovering over each icon in turn
with the mouse reveals an automatic summary of the key inputs, from which the simplest
expectation would be for the number of installed units of the resource to increase
by two or three (1000 / 400) each year.

In fact the results tell a different story, showing that six units are installed
in 2004:
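To see how deployment can inflate the installed count beyond the naive 1000 / 400, here is a minimal sketch; the even traffic split and the choice of six sites are illustrative assumptions, not figures taken from the model itself:

```python
import math

def units_required(total_demand, unit_capacity, num_sites):
    """Installed units when demand is deployed evenly across sites:
    each site must round its own requirement up to a whole unit."""
    demand_per_site = total_demand / num_sites
    return num_sites * math.ceil(demand_per_site / unit_capacity)

# Without deployment: one pooled demand of 1000 against capacity 400
print(units_required(1000, 400, 1))   # 3 units
# Deployed across six sites (illustrative): per-site rounding dominates
print(units_required(1000, 400, 6))   # 6 units
```

The point is that per-site rounding, not total demand, can drive the installed count; the audit reveals exactly which such factor applied.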

Installed Units is a primitive result, so what to do? Why did it do that? The answer
is better than tracing through formulae; it is closer to “Tell me!” Select Audit
from the resource icon menu, and then simply select Installed Units from the list
of primitive results.

Now run the model again, and immediately a message pops up for 2003. It seems the
modeller has requested some pre-installation of equipment. Press OK to continue.

Planned unit audit for 2003
For 2004 we first see details of the basic incremental demand calculation, but then
also deployment. Yes, this resource is also linked to a Location which describes
a number of geographical sites. But it doesn’t stop there. Other factors, such as
maximum utilisation or supply ratio, might influence the installation of the resource,
but the guiding principle is that you don’t have to know how the result is calculated:
STEM will tell you how, and why!

Incremental demand and deployment audits for 2004
The comprehensive list of auditable primitive results is complemented by a number
of labels prefixed with an asterisk, such as *Incremental Demand. These labels correspond
to specific stages of calculation, and map directly onto the intrinsic audit features.
When you select a result name, you are taking advantage of an internal mapping of
individual results onto each of the stages of calculation which directly affect
it.
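Conceptually, that mapping behaves like a lookup from each primitive result to the audit stages that feed it. A toy sketch follows; the exact label spellings in STEM may differ from this illustration:

```python
# Toy mapping of primitive results onto audit stages (illustrative labels).
AUDIT_STAGES = {
    "Installed Units": ["*Planned Units", "*Incremental Demand", "*Deployment"],
}

def stages_for(result_name):
    """Selecting a result name implicitly selects every stage that affects it."""
    return AUDIT_STAGES.get(result_name, [])

print(stages_for("Installed Units"))
```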

Labels with asterisks correspond to specific stages of calculation
By default, the audit system will generate an interactive audit as described above
for each period of the model run, and for all selected results for any number of
elements. There is also a global Audit dialog (select Audit from the Data menu on
the main Editor window) which allows you to confine auditing to a specific range
of run periods, and to suppress the interactive messages. All audit data are written
to a model.audit.txt file which STEM will load into Notepad on completion of the
model run. It is quite instructive to generate a full audit (all results for all
elements for all periods) to get a sense of the breadth of calculations being performed,
even for a simple model!
Where did this come from?
When STEM runs a model, the STEM model engine first creates a calculation structure
for the propagation of demand from Services to Resources and through intermediate
Transformations, and then performs all the necessary incremental demand, revenue
and cost calculations for each period of the model run. These primitive results
are stored to disk each run period, and then sorted into series order after the
model run to create a sorted model results file (.SMR).
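The reordering step can be pictured as follows: results are produced period-by-period, but each series needs all of its periods contiguous. A minimal sketch, with made-up record fields and values:

```python
# Records as produced at run time: one batch per period (illustrative data).
run_order = [
    (2004, "Resource A", "Installed Units", 6),
    (2004, "Service X", "Demand", 1000),
    (2005, "Resource A", "Installed Units", 8),
    (2005, "Service X", "Demand", 2000),
]

# Series order: group by (element, result) so each series is contiguous,
# with its periods in sequence -- the shape of a sorted results file.
series_order = sorted(run_order, key=lambda r: (r[1], r[2], r[0]))

for period, element, result, value in series_order:
    print(element, result, period, value)
```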
Many additional results (especially financial) are then calculated on demand in
the results program, from an extensive set of pre-programmed result definitions
in DEFAULT.CNF. These results are called derived results, and it is this mechanism
which enables you to add your own custom results, automatically instantiated for
all elements (and Collections) of a given type. (Select Define Results… from the
Config menu).
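A derived result is essentially a named formula over primitive results, evaluated on demand for each element and period. A hypothetical sketch follows; the result names, formula and figures are illustrative, not taken from DEFAULT.CNF:

```python
# Primitive results per (element, period), as stored by the model engine.
primitives = {
    ("Resource A", 2004): {"Revenue": 0.0, "Operating Costs": 1200.0},
    ("Service X", 2004): {"Revenue": 5000.0, "Operating Costs": 800.0},
}

# A derived-result definition: a name plus a formula over primitives.
DERIVED = {
    "Operating Margin": lambda p: p["Revenue"] - p["Operating Costs"],
}

def derived(result, element, period):
    """Evaluate a derived result on demand, as the results program does."""
    return DERIVED[result](primitives[(element, period)])

print(derived("Operating Margin", "Service X", 2004))  # 4200.0
```

Adding a custom result amounts to adding another entry to the definition set, automatically instantiated for every element of the relevant type.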
When looking at any graph in the results program, press <F1> to access the
online help. Interpretation is provided for all standard results, together with
a plain-English explanation of how each individual result is calculated, with references
to related results and links through to detailed explanations of particular model
features (algorithms). (No distinction is drawn between the primitive results and
derived results in this context.)

Online help interprets and explains calculation of all standard results
The online help tells you generically how a result is calculated, but not the specific
data, and so this facility is complemented by the Draw Precedents command on the
Graph menu. Again, when looking at any graph in the results program, press <F2>
to graph all of the numbers from which the current result is calculated. The precise
formula used for the calculation is shown in the status bar. If the result is a
network result summed over services or resources, you will be prompted to make a
selection from all of the relevant elements.

Draw Precedents shows the formula for, and data from which, a result is calculated
Thus all derived results can be traced back directly to the set of primitive results
generated by the model engine, dove-tailing perfectly with the run-time audit described
above.
How does it all fit together?
By now you will have the sense that, far from being a black box, all of the calculations
of STEM model results are readily accessible. The most common remaining obfuscation
is unfortunately self-inflicted. It can happen that a hypothesis about a result rests
on an unshakeable (but mistaken) assertion that only certain elements are involved in a given calculation.
Multiple views in the Editor are great for breaking up structure into manageable
chunks (indeed one of Robin’s favourite maxims is about avoiding the need for scrolling
individual views) but they can also obscure certain relationships.
When in doubt, the Editor provides a range of commands for creating a temporary
debugging view which guarantees to reveal all structure:
- Select Show In A New View from the Resource icon menu to create a new view surrounding
just this element. (Tip: it is useful to maximise this new view before the next step.)
- Select Show Precedents from the icon menu. Our original Service is now revealed,
with its requirement for this Resource.
- Select Show Hierarchy from the Resource icon menu. Now the Location is revealed.
- Select Show Dependents from the Resource icon menu. A status message reads “0 dependent
elements found”. So now we know that, for example, no Transformations reference
this Resource as an input.
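In graph terms, Show Precedents and Show Dependents are simply queries against the model’s dependency links; a minimal sketch, using a hypothetical one-link model matching the example above:

```python
# Directed run-time links: (source, target) meaning demand flows from
# source to target. One illustrative link: the Service drives the Resource.
links = [("Service X", "Resource A")]

def precedents(element):
    """Elements whose demand feeds the given element (Show Precedents)."""
    return [a for a, b in links if b == element]

def dependents(element):
    """Elements fed by the given element (Show Dependents)."""
    return [b for a, b in links if a == element]

print(precedents("Resource A"))  # ['Service X']
print(dependents("Resource A"))  # [] -- "0 dependent elements found"
```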
Additional commands allow you to reveal all direct links, or all levels of precedents,
dependents or hierarchy (such as Collections and Functions). This view may quickly
become cluttered, and you will begin to appreciate the elegance of the original
view after all. However, all is not lost. You can arrange either a selection or
all visible icons in order of precedence: just select Arrange Selection/Visible… from the Element menu.

Icons may be laid out horizontally or vertically, and you can choose whether to
consider either run-time dependencies (such as a Resource requirement) or static
dependencies (internal references in formulae in the Editor) or both. In addition,
you can create a comprehensive and ordered view of an entire model by selecting
Show and Arrange All… However, the Editor does not attempt to generate an artistic
transverse layout, so this is generally less helpful than the selective approach.
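Arranging icons ‘in order of precedence’ is, in effect, a topological sort of the dependency links. A sketch under the same assumptions as before (the element names and the run-time/static distinction shown in the comments are illustrative):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency links: each element maps to the elements
# it depends on.
deps = {
    "Resource A": {"Service X"},   # run-time: Service drives Resource
    "Location L": {"Resource A"},  # static: Resource deployed at Location
}

# static_order() yields elements so that every precedent comes before
# anything that depends on it -- the order in which icons are laid out.
order = list(TopologicalSorter(deps).static_order())
print(order)
```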
Footnote
All of the features described in this article are used routinely by STEM support
staff when responding to customer enquiries. At least 95% of all such queries (already
pre-qualified as ‘difficult’) are resolved without recourse to source-level debugging
or undocumented insights. We hope that this article will help you to help yourself
next time you can’t quite figure something out. Then we’ll have to design a predictive
system which allows you to ask, for a given set of desired results, “How could
I do that?”