The manner in which arrays are defined and used is designed to make the process as simple and intuitive as possible. In many cases, you can simply define the arrays you want to use, apply them to the variables you want to use them on, and continue working with little or no change to the model equations, which now represent not one value but several.
Sometimes, however, there are things you want to do with arrays that are a bit more specialized. One common reason for this is working with arrays that have repeated dimensions, as can happen when looking at transition flows from and to different states, and also when doing mathematical manipulation of arrays.
Stella and XMILE provide a number of different ways to work with arrays that can be helpful in these situations.
Inside [], you can simply enter a dimension element name. For example, if we have the dimension size with entries small, medium, large, and the dimension doneness with entries rare, medium, well, then you could use prep_time[medium, medium] to denote how much time it takes to prepare a medium sized meal that will be served medium (between rare and well done) without ambiguity, as long as prep_time is arrayed by size and doneness.
If you want to use an element name outside of the [], then you need to specify which dimension the element came from. This is done using a . as in size.medium (the same notation used to qualify names by module).
Using this notation is convenient, because it allows you to make comparisons such as:
IF size < size.large THEN 1 ELSE 0
which could be used to determine whether a small warming oven would be sufficient.
Dimension elements specified in this manner are treated as expressions rather than labels. The first label evaluates to 1, the second to 2, and so on.
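For example, with the size dimension defined above, size.small evaluates to 1, size.medium to 2 and size.large to 3, so the expression
size.large - size.small
evaluates to 2.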
Note Because dimension_name.element is treated as an expression, no validation of array usage occurs.
Usually, an array is defined by a sequence of unique dimensions (Population[Country,Species,Sex]). If any of Country, Species, or Sex were used in the equation for Population, their meaning would be unambiguous. Suppose, however, that we are looking at the transition from using one product to using another. Then we might have transitioning[Product,Product], which could have the equation
leaving[Product]*transition_probability[Product,Product]
In this case there are two occurrences of Product in the variable being defined, one in leaving, and two in transition_probability. Stella will match the Product in leaving to the first Product in transitioning, the first Product in transition_probability to the first in transitioning, and the second Product in transition_probability to the second in transitioning. As long as both transition_probability and transitioning interpret the first occurrence of Product as the from Product (and the second as the to Product), that will give us the right result. If that is not the case, we need to make use of the @ notation described below; even when it is, the @ notation is likely to be clearer.
The general rule that Stella uses when matching dimensions which repeat names is to use their position. Thus in any variable used in an equation the first occurrence of a dimension matches the first occurrence in the variable being defined, the second the second and so on. This generally leads to the expected results, though use of the @ notation may make the equations clearer.
There are two circumstances in which Stella will generate an error rather than use this mapping.
Rather than using an array range, you can use @N, where N corresponds to the Nth ordinal position in the dimensions of the left hand side variable. This allows you to disambiguate any repeated dimensions. For example, if B[x,x] is being defined using A[x,x], then the equation
A[@2,@1]
would make B the transpose of A (see below for more discussion of transpose).
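Returning to the transition example above, the same notation can make the matching explicit. As a sketch, assuming transitioning[Product,Product] interprets its first dimension as the from product and its second as the to product, its equation could be written
leaving[@1]*transition_probability[@1,@2]
Here @1 and @2 refer to the first and second dimension positions of transitioning, so the matching no longer relies on the positional rule described above.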
You can actually use @ in any equation, but the only validity checking performed is to confirm that the number you specify is less than or equal to the number of dimensions of the left hand side variable. For example, if B[x,y] is the variable being defined and A[y,x] is used, the equation
A[@2,@1]
would make B the transpose of A. If you were to redefine A to be A[x,y], however, no error would be detected, but the results would not make sense. Had you used A[y,x] or A', on the other hand, the error would be recognized and reported.
So @ can be very useful when you need to disambiguate repeated dimensions, but should be used with care.
If a matrix has two dimensions, as in A[x,y], you can effectively transpose it in an Apply-To-All (A2A) equation for B[y,x] by simply using A[x,y] in the equation. For example
A[x,y]
would make B the transpose of A (B is arrayed by y, x, so we have swapped the order of the dimensions). Using the transpose operator ' would do the same thing
A'
Though shorter, this notation is less clear. However, when both A and B are arrayed by the same dimension twice (A[x,x] and B[x,x]), we can still use A', whereas A[x,x] would just return A (we could also use A[@2,@1], but A' is preferable).
Note For historical reasons ' is also allowed in variable names without quotation marks. So you can name a variable A', which is simply a name and not the transpose of A. The software recognizes the transpose operator only after first checking whether A' is a variable name.
Many array builtins operate by reducing the number of dimensions. For example, SUM(A[x,*]) would sum across the second dimension of A. The * is the standard array range, and it represents all the elements of that dimension. If you change the size of the dimension, it will still represent all the elements of the resized dimension.
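For instance, with A[x,y] as above, a variable totals arrayed by x (an illustrative name, not one from the model discussed here) could be given the equation
SUM(A[x,*])
so that each element of totals holds the sum across y for the corresponding element of x.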
Sometimes it is convenient to sum, or perform another array operation, across a subset of the elements in a dimension. To do this, instead of a * you use N1:N2, where N1 and N2 are ordinal dimension element positions (or label names) or expressions. Thus, for example
SUM(A[1:3])
would give the sum over the first 3 elements of A. If A is arrayed by x and x is defined as x1,x2,x3,x4 you could also use
SUM(A[x1:x3])
with the same meaning.
If N1 and N2 are the same, this will select a single element, but the range remains valid in SUM and other array builtins.
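For example,
SUM(A[2:2])
would simply return the second element of A.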
Some caution should be exercised when using ranges in this manner. If you rearrange the elements of an array, the range may end up returning unexpected results. The only validity checking done is that the last position is at least as big as the first and that it is not bigger than the number of dimension entries. This validation is done as part of equation checking when N1 and N2 are either element names or numbers.
N1 and N2 can be element names, numbers or expressions involving other model variables.
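For example, if cutoff were a converter in the model (a hypothetical name used only for illustration), you could write
SUM(A[1:cutoff])
to sum the first cutoff elements of A.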
If N1 and N2 are expressions involving other model variables then range checking occurs only at run time and, for performance reasons, the results will be placed in the Simulation Log only if you have already opened it. Runtime range checking does not stop the simulation. If N1 is less than 1 it will be treated as 1. If N2 is greater than the number of dimension entries it will be treated as the number of dimension entries.
As long as N2 is greater than or equal to N1, N2 is greater than 0, and N1 is less than the number of dimension entries, the range will be computed on the valid subset created by adjusting N1 and N2 as necessary. If there is no valid subset (the preceding conditions are not all true), then the SUM builtin will return 0, the PROD builtin 1, the MIN builtin inf (infinity), the MAX builtin -inf, the MEAN builtin 0, and the STDDEV builtin 0.
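Continuing the sketch above, if the hypothetical cutoff evaluated to 0 at some point in the simulation, the range 1:cutoff would have no valid subset and
SUM(A[1:cutoff])
would return 0 at that time.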
Whenever an adjustment is required to N1 or N2, or there is no valid range, an error message will be sent to the Simulation Log if it is open. It is strongly recommended that you open the Simulation Log periodically when working with models that use range expressions involving other variables.