The important entity is the "task", which models a concurrent process or object, or a logical or physical resource. A task has "entries" through which it offers services to other tasks or users; an entry models a method of an object. Service requests are made to entries, messages are sent to entries, and resource demand parameters are attached to the execution of entries. Objects that are bound into a task have their demand parameters aggregated together.
Resources of all kinds are modelled as "tasks", with "entries" for different services. This includes logical resources such as critical sections or mutexes, locks and control tokens of any kind (e.g. for window flow control) as well as devices such as shared buses, processors, disks, and interface controllers.
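The task/entry vocabulary above can be sketched as a small data structure. This is only an illustrative sketch with hypothetical names (`Task`, `Entry`, `host_demand`); the actual model definition is made through the interfaces described below.

```python
# Hypothetical sketch: a Task owns Entries, and each Entry carries demand
# parameters -- execution time on the task's host device plus requests to
# entries of other tasks (including tasks that stand in for logical
# resources or devices such as disks).
from dataclasses import dataclass, field

@dataclass
class Entry:
    name: str
    host_demand: float = 0.0                       # execution time on the host device
    requests: dict = field(default_factory=dict)   # "Task.entry" name -> mean request count

@dataclass
class Task:
    name: str
    host: str                                      # processor or device the task runs on
    entries: dict = field(default_factory=dict)

    def add_entry(self, name, host_demand=0.0):
        entry = Entry(name, host_demand)
        self.entries[name] = entry
        return entry

# A disk is itself modelled as a task offering service entries, with the
# disk as its host device:
disk = Task("DiskService", host="disk1")
disk.add_entry("read", host_demand=0.010)          # 10 ms per logical read

app = Task("Application", host="cpu1")
query = app.add_entry("query", host_demand=0.002)
query.requests["DiskService.read"] = 3.5           # mean disk reads per query
```

The point of the sketch is that every resource, logical or physical, gets the same task-with-entries shape, so demands compose uniformly.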
In a conceptual analysis, one can proceed by structural analysis, by analyzing scenarios, or by a combination of these. The final goal is a model definition, using a graphical interface, a menu interface, or a textual definition.
A conceptual analysis based on a planned software and system structure begins by identifying the objects (or modules) and services. These should be at the level of concurrent tasks.
Each service of a given module is analyzed with the goal of identifying its demands: for execution on its host, for system services, and for the services of other tasks.
The demands for system services are then replaced by the demands those services make for execution cycles, for other system services, and for other tasks, until only demands for devices and for other tasks remain. These are the parameters of the software tasks in the model. The device parameters are the times for logical operations.
In this approach an I/O device like a disk is modelled by a task representing its logical service, with the disk as its host device.
For a complex service, it may help to model just that one service by a scenario and reduce it as described under Building a Model from a Scenario Analysis. In such an analysis all the activities are allocated to the one entry, which makes it a simple special case.
If the demands are known for objects or modules which are not concurrent tasks, they can be modelled as above (treating them as tasks) and then aggregated together (task aggregation).
The operation of each activity is analyzed with the goal of identifying its demands: for execution, for system services, and for the services of other tasks.
As in entry analysis, the demands for system services are then replaced by the demands those services make for execution cycles, other system services and other tasks, until there are just demands for devices and for other tasks. These are the parameters of the activity. A short example may help explain this process.
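The substitution step can be sketched as a small recursive expansion. All names here (`flatten`, `cpu_ms`, `db_lookup`, and so on) are hypothetical illustrations, not part of any particular tool.

```python
# Hypothetical sketch of the substitution: demands on system services are
# expanded into those services' own demands, repeatedly, until only device
# demands and requests to other (concurrent) tasks remain.
def flatten(demands, service_defs):
    """demands: {name: mean count}; service_defs: service name -> its own demands."""
    flat = {}
    for name, count in demands.items():
        if name in service_defs:          # a system service: expand it
            for sub, subcount in flatten(service_defs[name], service_defs).items():
                flat[sub] = flat.get(sub, 0.0) + count * subcount
        else:                             # a device demand or a request to another task
            flat[name] = flat.get(name, 0.0) + count
    return flat

# file_read costs 0.1 ms of CPU and one disk operation; db_lookup makes
# two file_reads plus 0.4 ms of its own CPU time.
service_defs = {
    "file_read": {"cpu_ms": 0.1, "disk_op": 1.0},
    "db_lookup": {"cpu_ms": 0.4, "file_read": 2.0},
}
activity = {"cpu_ms": 1.5, "db_lookup": 3.0}
flat_demands = flatten(activity, service_defs)
```

After expansion, the activity's parameters are purely device demands (here, CPU time and disk operations), which is the form the model requires.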
When there are multiple scenarios, the entries in different scenarios are initially treated as distinct. However if they describe the same service by the same task, their demand parameters may be aggregated together (see entry aggregation).
The activities are parcelled out among entries in a task structure. Where a sequence of activities is executed by the same entry, their demands are added up to give the entry demands. Where there is a transition from an activity in one entry (call it A) to another (call it B), a request to B is inserted as one of the demands made by the first activity. Where there is a later transition back to A, this marks the end of the request, and the further demands are again added to the first activity. If the request from A to B occurs inside a loop, it is inserted with a repetition count of its own. At the end of this reduction there is one activity per entry, and the activity parameters become the entry parameters. The model is completed by adding device allocations, overheads and speeds.
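For the simple sequential case (no forks or loops), the reduction can be sketched as a single pass over the activity sequence. The function and data shapes below are hypothetical, chosen only to make the A-to-B-and-back pattern concrete.

```python
# Hypothetical sketch of the sequential reduction: walk the activity
# sequence, summing the demands of activities in the same entry, and
# recording a synchronous request where the flow crosses to a different
# entry (and later returns).
def reduce_sequence(activities):
    """activities: list of (entry_name, demand).
    Returns {entry: {'demand': total, 'requests': {target_entry: count}}}."""
    entries = {}
    stack = []                      # entries whose service is still in progress
    for entry, demand in activities:
        rec = entries.setdefault(entry, {"demand": 0.0, "requests": {}})
        rec["demand"] += demand
        if stack and stack[-1] == entry:
            pass                    # continuing in the same entry
        elif entry in stack:        # transition back: the nested request has ended
            while stack[-1] != entry:
                stack.pop()
        else:                       # transition to a new entry: insert a request
            if stack:
                caller = entries[stack[-1]]
                caller["requests"][entry] = caller["requests"].get(entry, 0) + 1
            stack.append(entry)
    return entries

# A -> B -> A: B's demand stays with B; A records one request to B, and
# A's demands before and after the request are added together.
model = reduce_sequence([("A", 1.0), ("B", 0.5), ("A", 2.0)])
```

With forking and joining flows the bookkeeping is more involved, as the next paragraph notes.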
If the activity graph has forking and joining of the flow, it gives a more complex case (see Activity Graph Reduction).
A simple fork with no join has three possible interpretations. If it occurs at a transition to another entry there is an asynchronous request at this point, which is entered as a demand at the sending activity. If it occurs at the point where a service to another entry ends (a transition back to an earlier entry in the scenario) then it has a special place in the model as a "second phase", and the activities after the fork are added up together as a special "second-phase activity" for the entry. Otherwise if the following activity is allocated to the same entry, it becomes a fork to a separate activity, which is part of the activity structure of the entry.
To simplify the model, some entries may be aggregated together. Each demand of the new aggregate entry is a weighted sum of the corresponding demands of the original entries, weighted by their expected relative frequency of invocation. The weights may be known from the relative rates of operation of the different scenarios; otherwise they must be supplied by the modeller.
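The weighted sum above amounts to a short calculation. Here is a sketch with hypothetical entry names and weights, assuming the weights sum to 1:

```python
# Hypothetical sketch of entry aggregation: the aggregate entry's demand
# for each resource is the frequency-weighted sum of the member entries'
# demands for that resource.
def aggregate(entries, weights):
    """entries: {entry: {resource: demand}};
    weights: {entry: relative invocation frequency, summing to 1}."""
    agg = {}
    for name, demands in entries.items():
        w = weights[name]
        for resource, d in demands.items():
            agg[resource] = agg.get(resource, 0.0) + w * d
    return agg

# 75% of invocations are "view", 25% are "update":
agg = aggregate(
    {"view":   {"cpu_ms": 2.0, "disk_op": 1.0},
     "update": {"cpu_ms": 6.0, "disk_op": 4.0}},
    {"view": 0.75, "update": 0.25},
)
```

For these numbers the aggregate entry demands 3.0 ms of CPU and 1.75 disk operations per invocation, averaged over both kinds of request.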
Separate overhead entry calls can be made for each time an entry sends or receives a request (task-switching and communications overhead for receiving, communications overhead only for sending). The amount of overhead depends on the middleware. The overhead is smaller if the two tasks are on the same host processor.