Business processes and business rules #5
Last in a series exploring the relationship between business processes and business rules.
It is adapted from a paper on Business Process Architecture and the Workflow Reference Model [Lawrence (2007)1] which appeared in the 2007 BPM and Workflow Handbook published by the Workflow Management Coalition (WfMC). The full paper is available from www.makeworkmakesense.com.
More on tasks
Can the task level break down further? Yes and no. If we consider its functionality then clearly the task could break into components, and components of components. But from a pure process and control perspective a task is a unit. A task cannot ‘partially complete’, since the request (eg the order) must exit in a well-formed way so as to route to the next task correctly. This is because everything that happens in a task can happen together. In the case of an automatic task it is a set of rules which can be applied in one go (in a defined sequence if necessary). In the case of a manual task it is a set of actions which it makes logical sense for one person, at a particular skill and authority level (determined by rules), to complete in the same session.
In the case of a manual task it is obviously possible for a user to stop halfway – perhaps capturing only part of the data the manual task is there for. It must be possible to exit the task at that point, either losing everything done so far or leaving things in a partially-updated state. The task itself has still ‘fully completed’ in the sense of having exited in a well-formed way. Subsequent routing will need to take account of the abandoned or partial update – eg by looping back to the same task to redo or finish the update.
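By way of illustration, this loop-back routing could be sketched as a simple map from well-formed exit statuses to next nodes. The status and task names here are hypothetical, not part of the meta-model:

```python
# Sketch of routing after a manual data-capture task that may be
# abandoned mid-session. Status and task names are illustrative only.

def route(output_status):
    """Map a task's well-formed exit status to the next node.

    Even an abandoned session exits with a well-formed status,
    so subsequent routing can loop back to redo or finish the update.
    """
    routing = {
        "DATA_CAPTURED": "validate_order",    # normal path: on to the next task
        "CAPTURE_ABANDONED": "capture_data",  # loop back to the same task
        "CAPTURE_PARTIAL": "capture_data",    # finish the partial update
    }
    return routing[output_status]

print(route("CAPTURE_ABANDONED"))  # loops back to "capture_data"
```

The point of the sketch is that the abandoned case is not an error condition: it is just another well-formed status with its own routing rule.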
We have now introduced an important feature of the meta-model: the distinction between the process component (the task) and any functionality component(s) needed to implement it. From a process and control perspective a task, like a subprocess, is a transition in the business status of the request. The task is a transition from one status to one or more possible statuses representing possible subsequent routings. To achieve those transitions functionality is needed – and there could be many different ways of implementing the task.
The task as process component is a unit with attributes like:
• Task name
• Input status and possible output statuses
• Whether automatic or manual
If the task is manual, further attributes will include what roles and/or skill levels it requires.
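These attributes could be captured in a simple record type. The following is a minimal sketch with illustrative field and status names – the meta-model does not prescribe any particular representation:

```python
from dataclasses import dataclass, field

@dataclass
class TaskDefinition:
    """Task as process component: a transition in the business status
    of the request, independent of how the task is implemented."""
    name: str
    input_status: str
    output_statuses: list                 # possible subsequent routings
    automatic: bool                       # automatic or manual
    required_roles: list = field(default_factory=list)  # manual tasks only

# A hypothetical manual task:
assess_risk = TaskDefinition(
    name="assess risk",
    input_status="ORDER_CAPTURED",
    output_statuses=["RISK_ACCEPTED", "RISK_REFERRED"],
    automatic=False,
    required_roles=["underwriter"],
)
```

Note that nothing in the record says how the transition is achieved – that belongs to the functionality component(s), not to the task as process component.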
Automatic and manual tasks will call for different types of functionality. A manual task may display and accept data via an interface, while both classes of task may run rules and update data. But as nodes in a process flow they are equivalent – they both take the request from one input status to one of a number of possible output statuses. In theory at least a manual task is replaceable by an automatic task with the same outcome, or vice versa. (In practice incremental automation is more likely to involve increasing the proportion of requests handled by automatic tasks, than replacing manual tasks one-for-one by automatic tasks. Removing a manual task will more likely involve rationalising routing within the whole subprocess rather than just replacing the task with a single equivalent automatic task.)
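The equivalence of the two classes of task as nodes in the flow could be sketched as a shared interface. The statuses and decision rules below are invented for illustration:

```python
from abc import ABC, abstractmethod

class TaskNode(ABC):
    """As a node in the process flow every task has the same shape:
    it takes the request from an input status to one of a number of
    possible output statuses."""

    @abstractmethod
    def run(self, request: dict) -> str:
        """Return the output status the request exits with."""

class AutomaticTask(TaskNode):
    def run(self, request):
        # A set of rules applied in one go (threshold is illustrative).
        return "ACCEPTED" if request.get("amount", 0) < 1000 else "REFERRED"

class ManualTask(TaskNode):
    def run(self, request):
        # Would display and accept data via an interface; stubbed here –
        # the user's decision arrives as captured data on the request.
        return request.get("user_decision", "ABANDONED")

# Either class of task is interchangeable at the flow level:
for node in (AutomaticTask(), ManualTask()):
    status = node.run({"amount": 500, "user_decision": "ACCEPTED"})
```

The flow only sees `run` and the status it returns; whether the transition was computed by rules or captured from a person is invisible at this level.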
We should therefore think of task-as-process-component as something separate from but loosely coupled with the functionality component(s) which implement it. The relationship could be many-to-many: one task could be implemented by a set of functionality components (possibly linked together hierarchically) and/or the same functionality component could be used in more than one task.
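One way to picture the many-to-many relationship is a simple mapping from tasks to the functionality components which implement them. All the names here are made up for illustration:

```python
# Tasks (process components) mapped to the functionality components
# implementing them. Note "order validation rules" is reused by two tasks.
task_implementations = {
    "capture order": ["order entry screen", "order validation rules"],
    "assess risk":   ["risk scoring rules", "order validation rules"],
    "issue policy":  ["document generator"],
}

# Invert the mapping to see which tasks a given component serves:
component_usage = {}
for task, components in task_implementations.items():
    for comp in components:
        component_usage.setdefault(comp, []).append(task)

# component_usage["order validation rules"] now lists both
# "capture order" and "assess risk".
```

Because the coupling is loose, either side can change independently: a task can be re-implemented with different components without altering the process model, and a component can be reused by a new task.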
It could be objected that at this task level the model stops being purely logical (what) and becomes physical (how). Perhaps the process could be implemented with different tasks; or in a different way entirely, not needing ‘tasks’? Perhaps. But we recall that ‘task’ arose in the meta-model by considering the impact of the subprocess rules on the range of possible attribute values of the extended request dataset which passes through the subprocess. (This extended request dataset might include relevant external contingencies – for example whether a reply does or does not arrive within the allotted time.)
Another axiom is that ‘task’ – like ‘process’ and ‘subprocess’ – is a control unit at the appropriate level. The process is the control unit responsible for taking the request through to the well-formed outcome. The subprocess takes the request from one business status to the next, defined by business rules applicable to all requests of the initiating type, regardless of individual case characteristics. Tasks are the minimum control units required so that any possible request can make the status transition defined at subprocess level.
‘Control unit’ should be construed at a management/process information/state description level. Since these are business processes there will be different stakeholders interested in knowing, for example, where each request is – or perhaps just in knowing that this information exists.
If ‘task’ seems more at the ‘solution’ level and not just at the purely ‘logical’ level this could come from the ‘real world’ implications of the last two axioms – ie that the model must cater for any possible request (therefore all the nuts and bolts of exceptions and error conditions); and that the nodes are unambiguously defined control units.
But it is important to be clear about what the meta-model is, and therefore what is inside it and what is outside it. The meta-model is a coherent set of interrelated concepts which can be used to design process models at a logical level, process models which can then be implemented in solution architecture.
The relational meta-model describes a set of interrelated concepts (entity, relationship, attribute, primary key, foreign key etc) which can be used to design data models at logical level, models which can then be implemented in physical databases. The relational meta-model guides design choices, but does not prescribe them. It does not say what entities to recognise. A real-world data domain could be represented in two or more equally valid logical data models, equally aligned with relational principles. The different data models could reflect different business and strategic priorities, and different assumptions about future flexibility. But there would be a vast and indefinite number of ‘invalid’ data designs, which ignore or flout relational principles. The strength of the relational meta-model is in its principles for identifying the relatively small number of good data models for a given context, and choosing between them.
Similarly the double-entry paradigm does not impose a particular chart of accounts but it does guide accounting design.
Like both these models the process meta-model provides building blocks for creating a logical process design. It does not say what the rules should be; or what requests an organisation should honour and so what processes it should support. It does not say what controls and break points and therefore what status transitions to recognise. It does not prescribe ranges of values for request data sets. But it does provide a related set of concepts which, given an organisation’s business context and objectives, help it to make rational choices as to what its processes should be, how they should interact, what intervention would repay technology investment, and how processes should be designed and implemented to maximise return on that investment.
It does this not by viewing technology implementations in increasingly generic and abstract terms, but by providing tools to identify and then represent the fundamental logic of its business processes in terms of their rules. The eventual configuration of tasks will depend on the scope of the particular process initiative and the strategic priorities of the organisation. Just as there may be more than one possible logical data model for a given context there could be more than one possible process model. But none need assume any particular technology implementation.
1 Chris Lawrence, Business process architecture and the Workflow Reference Model. In Layna Fischer (Ed.), BPM & Workflow handbook 2007, Future Strategies Inc, Lighthouse Point, Florida, 2007; in association with the Workflow Management Coalition.
© Chris Lawrence 2008