I’m not sure that Google Docs is the best place to draw BPMN diagrams given the plethora of very good free BPMN editors out there. However, I’m a huge Google Docs fan, so I thought I’d throw out a BPMN template for Google Docs for what it’s worth. I’m just covering the basic shapes for now. There should be more than enough here to create a quick and dirty BPMN diagram in Google Docs should the need arise. I’ll try to add more shapes and refine the template over time. Feedback and comments are appreciated. Try it out!
I’ve been on a mission lately to promote a different way of thinking about simulation, talking about Simulation’s Place in BPM and Why We Should Reposition Simulation in BPM. However, unlike others, I’m not throwing out the baby with the bath water: the key to my argument is that simulation is extremely valuable, but maybe not as a process design tool in the scope of BPM, as is the traditional use case for simulation.
This raises an obvious question: does business process simulation ever make sense as a process design tool? Of course it does, just maybe not in most BPM implementations. Here’s a list of cases where business process simulation might be appropriate as a process design tool:
1. The process does not yet exist.
2. True business process reengineering, where you are obliterating your existing processes and starting over.
3. Where you are not implementing a BPM/automation system.
4. Where the costs of automation are very high.
The above cases make sense when the cost of simulation is much less than the cost of automation. Consider cases (1) and (2), where you don’t even know what you’ll be automating (if in fact you will be automating the resulting process at all). These are strategic cases, and good candidates for simulation: many what-if scenarios can be explored with no actual implementation costs. In case (3), business process improvement initiatives will always incur a data acquisition cost for simulation analysis, as automation will never provide the data to drive simulations. Case (4) hardly requires comment.
Operational Decision Making
It has been suggested that maybe we are still in the early adopter stage of business process simulation. That, to me, is simply not true. Business process simulation has been around for ages, and even within business process management, simulation is considered a key feature of BPM suites (think Gartner’s Magic Quadrant). While part of the problem might be the actual simulation tools provided with BPM suites, one has to ask the obvious question: if business process simulation makes so much sense, why isn’t everybody using it? And why aren’t they using it all the time?
It seems to me that if there is a cost associated with simulation analysis (and there most certainly is), then reuse of the analysis is critical. Process design happens once; the process runs all the time. The cost of simulation makes sense when it is amortized across continual use in production. Simulation models can be used to fill in the gaps in production reporting when a process is not automated, or is only partially automated. That allows managers to make better decisions. When a process is automated, simulation can be used to augment production reporting by predicting near- and mid-term process outcomes based on the current state of the process. Better yet, use the predicted outcomes to suggest optimizations to management, such as resource scheduling or even changes to the process structure itself.
In conclusion, of course there is a place for simulation as a process design tool. I just think there is a much, much bigger opportunity for simulation as a decision support tool in the actual ongoing operation of business processes.
In a somewhat controversial post at his Ground Floor BPM blog, Scott Menter of BPLogix suggested that simulation in the Business Process Management (BPM) world is a non-starter. While his claim that no one in BPM uses simulation is surely a dramatic generalization, he does offer some rationale for his assertion:
Simulation delays automation, and I don’t like to wait. There is an instant benefit from automation, even if the underlying process is not very efficient. To put it another way: I’d rather automate a poorly designed process today than spend six months analyzing, simulating, and optimizing it before automating.
For the simulation and business process design advocates out there, this amounts to paving the cow path, when what you should be doing is using simulation and/or other analyses to study and improve the process before automating it:
Scott’s article offended my sense of logic for exactly this reason, yet something about what he was saying resonated with my real-world experience in applying simulation to business processes. My first reaction was that Scott was simply wrong. In fact, my first encounter with automation, an imaging and workflow implementation, had been a complete failure, which we attributed to not engineering or improving the process first. However, that was over 15 years ago. Times have changed, and the more diplomatic side of me tended to agree with some of the commentators: there is no right or wrong answer; sometimes simulation makes sense, and sometimes it doesn’t. But then I got to thinking: of course there is a right answer. This is business, after all, and the right answer will be dictated by cost-benefit analysis.
But why the strong feelings? Why is it that Scott’s article offends? I think one has to look at the history of BPM and simulation to understand this, and I’d like to consider a specific attribute of each:
- Business Process Reengineering (BPR): Many still associate BPM with BPR. The mantra in BPR is that one does not pave the cow path. Analysis, engineering, and improvement of the process come first, followed by automation. And simulation fits perfectly: it is less expensive to experiment with proposed process changes on a simulated system than on the real thing!
- Manufacturing: Computer simulation technology has its roots in manufacturing, and its practitioners are traditionally of the industrial engineering type. While the technology has been adapted to simulate (service) business processes, the roots of simulation technology, and the mindset around its use, often come from a time and place where the costs of automation were very high (think factories and plants with expensive, super-specialized hardware and proprietary, domain-specific systems).
Within the context of contemporary BPM, both of the above should be called into question. Let’s look at each in turn.
The desire not to pave the cow paths is so appealing because it is intuitive. It’s a key catchphrase of the business process reengineering movement, originating in Hammer and Champy’s Reengineering the Corporation: A Manifesto for Business Revolution. However, “it’s time to stop paving the cow paths” is only part of the quote, and like anything taken out of context, it potentially misleads. The actual passage is:
It is time to stop paving the cow paths. Instead of embedding outdated processes in silicon and software, we should obliterate them and start over. We should “reengineer” our businesses.
BPR is about starting over: radical redesign where what replaces an existing process looks nothing like what is currently in place. I would argue that BPM really has little or nothing to do with BPR. Global360 makes the following point:
…while BPR is meant to be disruptive and involves completely re-thinking processes, BPM has its roots in gentler methods. It’s intended for continuous improvement of processes and its evolution was driven by specific technologies. The focus today is on improving productivity of the workers you already have, and making it easier to roll out new business processes or new products while taking advantage of existing IT systems.
If anything, the desire is for business process improvement, incremental changes and optimizations to make the existing process run better. So yes, we are deliberately paving cow paths and there is no mandate to rip out the current process. BPM is not BPR. It’s not necessarily even business process improvement. The business case may be predicated on automation alone. The costs of experimentation in the real world have come way down. How is simulation a requirement for automation?
The number of solutions and the relative ease of deployment (compared to yesteryear) of BPM systems have radically lowered the cost of BPM implementation. When the costs of automation, whether through poor processes or poor implementation, are perceived as being as high as they are in the traditional simulation mindset, the costs of developing and maintaining meaningful simulation models are not considered material in the big picture. However, there are definite costs associated with simulation:
- Analysis paralysis: this is where Scott was apparently coming from in his article. The desire to gold-plate a process design can lead to a delay in automation and its associated benefits.
- Talent: specialized training and capabilities are required to create meaningful simulation models.
- Data acquisition: while a process description is required for both automation and simulation, simulation also requires a typically large data set of parameters or inputs such as volumes and their arrival patterns, task durations, resource availability and so forth to ensure the model generates meaningful and accurate results.
- Model maintenance: if you are ever going to use the simulation model again, chances are you will have to actively update the parameters described above.
These costs are not trivial.
So at this point you may be thinking I have gone over to the dark side and joined forces with Mr. Menter. You’d be wrong. While, for example, the cost of data acquisition for analyzing processes is much lower for automation than for simulation, simulation has a much higher predictive capability. The kinds of forward-looking and what-if capabilities available in simulation are simply not available from automation alone (that’s not to say automation alone has no predictive capability based on the data it provides; it is just less):
What this suggests to me is that the traditional use of simulation for process design is misplaced in the BPM context and that the real benefit of simulation within BPM is in the predictive capabilities and prescriptive analysis it provides as a consumer of automation data. When systems are already automated, data acquisition and simulation model maintenance costs come way down. In most cases it makes more sense to employ simulation after automation, not before it. While simulation may still offer benefits as a process design tool (for example, even with automation, if the process simply does not yet exist, the case for simulation as a design tool is strong), the real story is the use of simulation after automation:
In this configuration, the costs of simulation are driven down while the accuracy, and therefore the predictive ability, of simulation models increases:
Why is this so? Automation collects the data required by simulation models. Automating the collection of data, and in some cases the baseline process definition itself through process discovery, reduces the need for specialized talent. Automation also supplies the data required to maintain the models for ongoing use (both simulation parameters and process model extensions). Lastly, data collected from an automated system is more accurate than data compiled by hand for static, steady-state simulation models, which leads to better simulation results.
Simulation, when used in this way, is positioned to provide predictive capabilities for BPM systems that are extremely valuable to management. It’s also consistent with the latest trends in BPM: specifically the process prediction component of process mining and prescriptive analytics answering such questions as: When will my process end? How do I best schedule and assign my staff to handle the anticipated workload?
This article is actually part 4 of my Process Event Streams: What You Need To Know series. I thought I’d be a little more creative with the title of this one. You might want to read part 1, part 2 and part 3 in the series before reading this post.
I’d like to make two propositions:
- Proposition 1: A simulation scenario can be defined entirely by a set of events.
- Proposition 2: The output of a simulation scenario is a set of events.
Taken together: a simulation model is a function of a set of events and results in a set of (virtual) events:
In this article, I’d like to discuss the first proposition. What this proposition says is that all the inputs or parameters required to specify a simulation model can be defined by a set of events. Practically speaking, some of the events will have been abstracted into a format that is more natural for describing some of the simulation model parameters; however, these abstractions have their basis in sets of events.
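As a minimal sketch of the two propositions taken together (all names here are illustrative, not part of any real simulation engine), a simulator is a function from a set of events to a set of virtual events:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    timestamp: float       # when the event occurs (or is predicted to occur)
    kind: str              # e.g. "arrival", "initiated", "completed"
    instance: str          # the work item the event belongs to
    virtual: bool = False  # True for predicted (simulated) events

def simulate(scenario: set[Event]) -> set[Event]:
    """Stand-in for a simulation engine: consumes a set of events
    (the scenario, per proposition 1) and produces a set of virtual
    events (the results, per proposition 2). Here each arrival is
    simply echoed as a completion one time unit later; a real engine
    would apply the full process model."""
    return {
        Event(e.timestamp + 1.0, "completed", e.instance, virtual=True)
        for e in scenario
        if e.kind == "arrival"
    }

scenario = {Event(0.0, "arrival", "case-1"), Event(2.0, "arrival", "case-2")}
results = simulate(scenario)   # a set of virtual events
```

Everything the toy engine needs, including the work arrivals, is carried by the input event set; nothing else parameterizes the run.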
One way to explore this assertion would be to identify the components of a simulation model and see if they can each be related to a set of business process events. The Sim4BPM proposal formalizes what defines a simulation model and so we can explore each component of the Sim4BPM specification as a means of organizing our thoughts:
The process description component is specified as a diagram or formal description (e.g. a BPMN diagram) and represents a case where events have been abstracted into a more natural format that is easier to grasp and/or specify: a process model. To demonstrate that a process model is ultimately defined by a set of events, we can turn to process mining. One outcome of process mining is process discovery: when there is no existing description of the business process, event logs are used to ascertain the structure, or a description, of the business process. In cases where process discovery is not, or cannot be, used to define a business process, it seems reasonable to assume that our ability to manually model a process is based on knowledge of the events that occur, or could occur, in the process. When someone manually creates a flowchart of a business process, I believe they are intuitively walking through the events that can occur in that process and abstracting them into a diagram or description.
It is not difficult to see that the availability of resources is based on events: for example, the events of coming on shift and going off shift. Proficiency at a certain task or set of tasks is based on the historical productivity of the resource at those tasks, calculated from the set of events corresponding to when work at the tasks was initiated and completed by each specific resource. Even the skill sets of resources can be ascertained by looking at the events corresponding to work of a given type at a given task being reserved by a specific resource.
Things like activity durations and the routing of completed work are based on the relationships between events. For example, activity duration can be calculated from the initiation and completion events of specific instances of the activity, and routing from the relative numbers of post-processing outcomes. Data associated with these events (or with the process or activity instances associated with the events) suggest the condition or business rule for doing one thing versus another after a given activity is completed.
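For instance, a hypothetical event log (timestamps and outcome labels invented for illustration) yields average activity durations and routing proportions like this:

```python
from collections import Counter
from statistics import mean

# Hypothetical event log entries: (timestamp, event_kind, instance_id, outcome)
log = [
    (0.0, "initiated", "a1", None),
    (1.0, "initiated", "a2", None),
    (2.0, "initiated", "a3", None),
    (4.0, "completed", "a1", "approve"),
    (5.0, "completed", "a2", "reject"),
    (6.0, "completed", "a3", "approve"),
]

starts, durations, outcomes = {}, [], Counter()
for ts, kind, inst, outcome in sorted(log):
    if kind == "initiated":
        starts[inst] = ts
    else:  # "completed": pair with the matching initiation event
        durations.append(ts - starts.pop(inst))
        outcomes[outcome] += 1

avg_duration = mean(durations)                         # 4.0 time units
total = sum(outcomes.values())
routing = {k: n / total for k, n in outcomes.items()}  # approve 2/3, reject 1/3
```

The duration parameter falls out of the relationship between initiation and completion events, and the routing rule out of the relative frequency of outcomes, exactly as described above.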
In the Sim4BPM specification, event parameters represent where tokens of work need to be injected into the simulation model. For example, the arrival pattern of work and the initialization of the model with work in progress are both cases where tokens are injected into the model. The relationship to business process events should be obvious and is pretty much one to one. In particular, these parameters are based on events that originate outside the scope of the model but affect processing inside it.
Border Events vs. Modeling Events
One useful distinction is between events that originate outside the scope of the model but whose occurrence directly affects processing inside the model, which I will call simulation border events, and events that are used to model the activities and processing rules inside the simulation model, which I will call simulation modeling events.
Border events consist of events which cause the injection of a new token into the simulation model (the Event Parameters in Sim4BPM) and resource schedules, which in a way cause the same thing – the injection of resource availability into the model.
Modeling events can be based on a set of historical events, predicted events, or imagined events. These events are used as the basis for defining the process description, the various activity parameters, and resource proficiencies. In the diagram above I have illustrated past events as being the source of modeling events used in specifying the simulation scenario. This might represent an existing process description and resource capabilities being used as the basis for future simulation scenarios. The modeling events could easily be based on imagined events in the case of a process that is being designed from scratch.
Coming up: I’ll discuss the relationship between the kind of analyses that may be performed with simulation models and the placement of the simulation scenario event space on the time dimension. In doing this, I’ll be discussing the second proposition: that simulation results are a set of events.
I have advocated that the concept and potential value of business process simulation are easy to understand. Yet, if simulation makes so much sense intellectually, why aren’t business managers using it all the time? I’ve outlined the deficiencies of simulation, and the Sim4BPM effort is meant to help address them. In the current state of business process simulation, what managers ideally want is something as simple as clicking a button to automatically simulate their business processes. What they actually get is a rather non-trivial exercise, requiring specific skill sets, to create and maintain meaningful simulation models.
If you step back, what one realizes is that managers don’t actually want a simpler way to simulate their processes. Nor, I suspect, do they want a simpler way to automate their processes. What they really want, quite simply, is answers. How do I process this item at the least cost? How much staff do I need to manage the workload and meet my deadlines? What is the best way to order or schedule the work? Could I operate with less equipment, and if so, how much less? And so on.
Simulation, even if it were dead simple, is only a means to an end. So is BPM in general. Managers don’t actually want either of these. They want the easiest way to get answers to the specific problems and challenges of running their business. Failing that, they want the analytics capability to find those answers. Failing that, they’ll implement the tools to generate the data to drive the analytics to get at the answers…
Simulation is but one tool, albeit a useful and powerful one, that generates the kind of data required to perform the analytics that provide the answers business managers need. It’s fairly low on the food chain. That doesn’t mean it’s not important or useful, because it is both. It’s just that managers don’t want simulation; they want answers. They might need to use simulation (or job scheduling software, or statistical forecasts, or…), but these are a means to an end. Something to keep in mind.
Tools and technologies that provide business process answers, and not just analytics, can often be characterized by optimization methods that identify a best (optimal) configuration. Simulation, in and of itself, is not an optimization tool. Instead, it measures how changes in a business process’ parameters affect the behavior of the process over time. However, simulation technology can be used as an input to optimization methods. For example, Meta Software aims to answer the question: what is the optimal set of staff schedules given the business process and staffing constraints? This is accomplished by using simulation technology to model and predict workloads that are input into workforce management technology, which optimizes scheduling against those workloads. Robert Shapiro and Hartmann Genrich have developed technology that can optimize process structure; simulation is an enabler of their solution. In both cases, simulation is very important and useful, but it’s not really about the simulation.
For the purposes of full disclosure, you should know that I currently work for Meta Software. If you or your company is also using simulation as an input to optimization technology to provide answers as opposed to analytics, let me know and I’ll make sure it gets mentioned.
In order to set the stage for how event streams impact simulation, it’s helpful to dig a little deeper into how events are related to one another. Consider the diagram below which describes some events that occur at a single activity in a business process:
In this diagram, time flows from left to right. Events occur at a point in time, so events on the left side of the diagram occur before events on the right side. We can walk through the sequence of events in temporal order, starting from the left side of the diagram:
- The first instance of work (instance “1”) is available (or arrives) at the activity.
- Next, at apparently the same time, a second instance of work (instance “2”) becomes available at the activity, and a resource capable of performing the activity comes on shift, ready to perform work.
- The first instance of work is then reserved for the resource that had previously come on shift.
- Next, processing is initiated on the first instance of work by the resource, generating an “in progress” event.
- When the work is completed, the resource can then be reserved by the second instance of work at the activity.
We can infer some general characteristics of business process events from this example:
- Events are related to other events in a causal fashion.
- An event can “cause” or lead to zero, one or more further events. For example, in the above diagram, the reservation of the resource by the first instance of work causes processing to be initiated on the instance of work by the resource.
- An event may have been caused by one or more previous events. For example, in the above diagram, the reservation of the resource by the second instance of work is caused, or made possible, by both the availability of the second instance of work at the activity and the resource completing processing of the first instance of work.
- An event cannot cause an event to occur in the past, i.e. the relationships between events always move forward in time.
- A set of events and their causal relationships to each other define an event space.
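The walkthrough above can be sketched as a small event space: events with timestamps plus causal links, where the only hard constraint is that causation never points backward in time (all identifiers here are invented for illustration):

```python
# Event space from the single-activity example: event id -> timestamp.
events = {
    "available_1":   0.0,  # instance 1 arrives at the activity
    "available_2":   1.0,  # instance 2 arrives
    "on_shift":      1.0,  # the resource comes on shift
    "reserved_1":    1.5,  # instance 1 reserved for the resource
    "in_progress_1": 2.0,  # processing initiated on instance 1
    "completed_1":   4.0,  # instance 1 completed
    "reserved_2":    4.0,  # resource reserved by instance 2
}

# Causal relationships: (cause, effect). An event may have several causes
# (e.g. reserved_2 needs both availability and a free resource) and may
# lead to several further events.
causes = [
    ("available_1", "reserved_1"), ("on_shift", "reserved_1"),
    ("reserved_1", "in_progress_1"), ("in_progress_1", "completed_1"),
    ("completed_1", "reserved_2"), ("available_2", "reserved_2"),
]

# An event cannot cause an event in the past: every causal link must
# move forward (or at least not backward) in time.
forward_in_time = all(events[c] <= events[e] for c, e in causes)
```

Note that this structure records every event instance in time, which is exactly what distinguishes an event space from a process model diagram.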
The following diagram shows a representation of an event space containing many related events:
With related events illustrated in this way, it is critical not to confuse this diagram with a process model diagram, which shows the relationships between process activities. In the event space diagram, every instance of an event is represented in time, whereas a process model diagram shows single points where multiple instances of events will occur in the work/control flow of the process (not time). In fact, in the event space, all events occur along a single time dimension, and this diagram could equally be illustrated as a timeline with events occurring along that line. I’ve spread out the events vertically for readability: there is no real meaning to the vertical dimension in my diagram (although there could be: for example, physical location or organizational unit could define the vertical dimension in tandem with the horizontal time dimension).
The event space diagram also indicates the current epoch, i.e. the point in time that represents “now”. Events before this epoch belong to the set of real events, and events after it belong to the set of virtual events. Virtual events may be unknown, or predicted based on a forecast of some sort.
I think this lays enough groundwork for discussing event streams and their relationship to business process simulation. More to come!
Business process events are anything that happens in a business process. Let’s call these potential events, since any of these occurrences could be captured by a computer system. The set of all these potential events can be defined as:

E_potential = { e : e is something that happens in the business process }
Let’s call the subset of potential events captured by a computer system the recorded events (these are also often described as event objects):

E_recorded ⊆ E_potential
Some events may result from totally manual activities that are not performed in, or tracked by, a computer system. Clearly, the number of recorded events can be no greater than the number of potential events; more concisely:

|E_recorded| ≤ |E_potential|
We might also think of the number of recorded events as a measure of automation, or at least of potential automation. If an event is captured by a computer system of some sort, then the event is occurring within, or in conjunction with, a computer system. One might define the process automation deficit as:

automation deficit = |E_potential| − |E_recorded|
This can be stated more concisely as the size of the complement of the set of recorded events relative to the set of potential events:

automation deficit = |E_potential \ E_recorded|
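These set relationships are easy to sketch with, say, Python sets (the event names are made up for illustration):

```python
# Potential events: everything that happens in the process.
potential = {"scan", "index", "review", "approve", "phone_call", "mail_out"}

# Recorded events: the subset captured by computer systems.
recorded = {"scan", "index", "review", "approve"}

assert recorded <= potential          # recorded events are a subset of potential
unrecorded = potential - recorded     # the relative complement
automation_deficit = len(unrecorded)  # 2 events escape the systems entirely
```

Here the phone call and the mail-out happen outside any system, so they contribute to the automation deficit.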
If you are aiming at total control and/or automation, then you want this number to be as low as possible. Note that this measure does not discriminate against adaptive case management (ACM) systems, where there may not be a formal automated work flow embedded in the system. A process may have a low automation deficit even though there is no automated work flow in the traditional sense. The key observation is that the work that generates events is touched by systems that allow management to measure the work in process, and could also allow management to control or automate the work through these systems.
What other kinds of events are there? For one, there are events that will happen in the future. Some of these are anticipated, in the sense that they have been forecast to occur (whether by some kind of statistical study, based on a forward looking simulation from the current state, anecdotally based on experience, or because of known changes to the business). There are also unanticipated future events, events that will occur but cannot be known ahead of time (for example, the results of a natural disaster).
There are also what I will call fantasy events. These are events that are not anticipated, nor necessarily likely, but can be generated by a plan based on an arbitrary set of what-if conditions. Fantasy events might be generated based on a Business Continuity Planning (BCP) scenario, or a dreamed-up business acquisition with little basis in current reality. Fantasy events are the results of scenario planning.
Virtual events are the set of all future and fantasy events:

E_virtual = E_future ∪ E_fantasy
Since unanticipated events are not known, planned, or even fantasized about in any way, we can drop these kinds of events from the discussion. It is worth noting that such events are still possible, and will occur!
Virtual events generated by, or input into, a computer system can also at least theoretically be used by the applications discussed in part 1:
Virtual events allow these applications to be used to manage potential future situations, and provide forward-looking, speculative capabilities for these tools. Of course, the ability to look into the future is potentially very valuable to management. The event cloud for applications that work off of these events is defined by the following set of events:

E_recorded ∪ E_virtual
Coming next, we will discuss how event stream processing impacts business process simulation and how simulation can impact event stream processing.
A dictionary defines an event as “something that takes place; an occurrence”. Therefore, a business process event represents anything that happens or is happening in a business process. When processed or captured by a computer system, an event object, entity or data structure represents or records such an event. An event stream is a sequence of events, usually ordered by time.
Before discussing business process events in particular, their relationship to simulation, and the standards that are available to represent events and event streams, it is helpful to understand the kind of applications that rely on event streams. There is a lot of terminology in this application space, and a lot of the terms happen to overlap. Here is how I understand the main application areas:
Process Mining

Process mining is the act of using system event logs to analyze business processes. Event logs generated by computer systems can often be used to extract knowledge about the processes these systems support:
Process mining aims at improving this effort by providing techniques and tools for discovering process, control, data, organizational, and social structures from event logs.
Process mining can be used for process discovery. In this case, there is no existing description of the business process, so event logs are used to ascertain the structure or a description of the business process. When a description of the process already exists (for example a process model or diagram), process mining can be used to test conformance between the actual process and the pre-existing description. Event logs are used to find and report discrepancies between real world processing and the process description. Lastly, process mining may be used to extend a process description, by finding discrepancies and then using them to modify or update the process description to reflect the process as it actually occurs.
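As a toy illustration of the conformance-checking use case (the model and traces are invented, and a naive exact-match check stands in for real conformance techniques, which handle choices, loops, and parallelism):

```python
# Pre-existing process description: a strict sequence of activities.
model = ["receive", "review", "approve", "archive"]

# Traces extracted from the event log, one per process instance.
traces = {
    "case-1": ["receive", "review", "approve", "archive"],
    "case-2": ["receive", "approve", "archive"],                      # skipped "review"
    "case-3": ["receive", "review", "review", "approve", "archive"],  # rework loop
}

def conforms(trace: list[str]) -> bool:
    # Naive check: the trace must match the modeled sequence exactly.
    return trace == model

discrepancies = {cid: t for cid, t in traces.items() if not conforms(t)}
# discrepancies flags case-2 and case-3
```

The flagged discrepancies are exactly what conformance testing reports, and what the extension use case would feed back into an updated process description.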
Business Process Intelligence (BPI)
BPI, or simply process intelligence, is also known as operational intelligence. It is the real-time monitoring of business processes and/or specific activities within a business process. This kind of monitoring allows management to react to service interruptions, bottlenecks, and other inefficiencies, as well as to potential risks and threats, as they occur. BPI is achieved by capturing process event data and calculating metrics based on these events in real time. Typically, the resulting data is published on dashboards or used to send notifications of exceptional circumstances via some messaging channel (e.g. email or SMS).
Complex Event Processing (CEP)
CEP is the underlying technology behind many process intelligence solutions. It involves the monitoring, recording, and filtering of events to identify meaningful “high level” events within the event cloud (a collection of event streams). By monitoring and correlating many events happening across all the layers of an organization, complex events can be inferred. For example, late deliveries plus an abnormal rate of absenteeism might suggest bad weather or a traffic disruption near an operation. By analyzing the impact of complex events, action can be taken in real time (e.g. reroute deliveries, contact employees to suggest leaving early for work).
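The late-deliveries-plus-absenteeism example might look like this in toy form (the events, window, and thresholds are all invented for illustration):

```python
from collections import Counter

# Low-level events from the stream: (timestamp_in_hours, kind).
low_level = [
    (9.0, "late_delivery"), (9.1, "late_delivery"), (9.2, "absent"),
    (9.3, "absent"), (9.4, "late_delivery"), (9.5, "absent"),
]

# Correlate low-level events within a one-hour window.
counts = Counter(kind for ts, kind in low_level if 9.0 <= ts < 10.0)

# If both low-level rates cross their thresholds, infer the complex event.
complex_events = []
if counts["late_delivery"] >= 3 and counts["absent"] >= 3:
    complex_events.append("possible_weather_or_traffic_disruption")
```

A real CEP engine would evaluate such windowed correlation rules continuously over the stream; the inferred high-level event is what triggers the real-time response.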
Business Activity Monitoring (BAM)
BAM is a software solution that provides a real-time summary of business activities to management. This is done by aggregating, analyzing, and presenting real-time information on activities in the business process, usually on some kind of dashboard that provides a set of key performance indicators (KPIs). Management can use this to measure performance and adherence, as well as to identify potential problems. All BAM systems process events. Some BAM systems are based on CEP, which allows them to present information on higher-level complex events which, as described in the previous section, are identified by correlating several lower-level events in the event stream/cloud. Clearly, these CEP-based systems blur any distinction between BAM and BPI (frankly, BPI without CEP seems awfully similar to BAM as well, for that matter).
All of these applications rely on streams of business process events. Event logs are just event streams persisted in some format, and event clouds are simply a collection of event streams – typically generated by one or more distributed systems. Strictly speaking, event clouds are not necessarily an ordered set of events (a stream implies some kind of order, usually temporal).
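The distinction between a cloud (an unordered collection of streams) and a stream (temporally ordered events) is easy to illustrate. The streams below are hypothetical; merging them back into one ordered stream is a standard k-way merge:

```python
import heapq

# Two hypothetical event streams, each already in temporal order, as produced
# by two distributed systems. Events are (timestamp, source, activity) tuples.
stream_a = [(1, "ERP", "order_received"), (5, "ERP", "order_shipped")]
stream_b = [(2, "CRM", "inquiry_logged"), (4, "CRM", "inquiry_closed")]

event_cloud = [stream_a, stream_b]        # the cloud itself implies no order

# Recovering a single temporally ordered stream from the cloud is a k-way
# merge of the sorted inputs (heapq.merge does this lazily).
merged = list(heapq.merge(*event_cloud))

print([e[0] for e in merged])  # [1, 2, 4, 5]
```

Persisting `merged` to disk in some format would give you exactly what the text calls an event log.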
Coming soon: More on events, the relationship between business process event streams and simulation, and standards used to define events and event streams.
Sandy Kemsley, who tweets more on BPM than seems humanly possible, recently brought the following article to the attention of her followers: What’s Wrong with If-Then Syntax For Expressing Business Rules. I really wanted to comment on this piece, but alas, ModernAnalyst.com requires registration before it will allow you to comment. That seems a little anti-social to me, especially when all I wanted to do was make a quick comment, but I digress…
In the article, Ronald Ross goes into quite a bit of detail on how rules are expressed from a business perspective and from an IT developer’s perspective. More importantly, he discusses how the expression of rules by these two groups differs. It’s a great BPM topic, as business and IT typically converge (or diverge, as the case may be) when it comes to implementing BPM. Specifically, he makes the following assertion:
Many IT professionals currently prefer the if-then form for expressing rules. Why? Put simply, it’s closer to what they need for implementation, whether under a rule engine or a programming language.
I can’t comment on particular rule engines, but I can comment on modern programming languages. If-then logic is quite dated, fundamentally a construct of procedural programming, and programming languages have come a long way since then. In an object-oriented language, one typically refactors away (i.e. eliminates) if-then logic by using polymorphism. Polymorphism allows different types of things to be handled through a uniform interface. Why do we use polymorphism? For the following reasons:
- to cut down on the number of conditions and conditional logic that would be required using if-then or switch-style logic, i.e. to keep the number of conditions in one’s code from “ballooning”.
- to re-use rules and methods cross-functionally: the expression of the business rule remains the same even if different responses to a given situation are required. One interface, multiple responses.
Polymorphism addresses the exact gripes the author has with if-then logic in business rules! A decent developer should be very comfortable with this concept, which allows business rules to be expressed in a more natural way. Using polymorphism, rules are expressed by developers in a way that is much more consistent with the business perspective alluded to in the article. Let’s give a simple illustration using an example from the article:
if claim.amount > 500 and claim.type == inquiry then return <polite message>
if claim.amount > 500 and claim.type != inquiry then return <claim not payable>
else <pay claim>
Here, claim could be a “big claim”, a “small claim”, a “claim inquiry”, etc., where all of these claim types implement a common interface with a SubmitClaim() method. A “small claim”’s SubmitClaim() method might return a payment, a “claim inquiry”’s SubmitClaim() method might return a polite message, and so on.
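A minimal sketch of that polymorphic version might look like the following. The class names and return values are illustrative placeholders for the article’s example, not an implementation from it:

```python
# Each claim type carries its own response behind a common submit_claim()
# interface -- the if-then conditions from the procedural version move into
# the choice of type, not the call site.

class Claim:
    def submit_claim(self):
        raise NotImplementedError

class SmallClaim(Claim):        # e.g. amount <= 500
    def submit_claim(self):
        return "pay claim"

class LargeClaim(Claim):        # amount > 500, not an inquiry
    def submit_claim(self):
        return "claim not payable"

class ClaimInquiry(Claim):      # amount > 500, type == inquiry
    def submit_claim(self):
        return "polite message"

# One interface, multiple responses -- no conditional logic at the call site.
responses = [c.submit_claim() for c in (SmallClaim(), LargeClaim(), ClaimInquiry())]
print(responses)  # ['pay claim', 'claim not payable', 'polite message']
```

The rule “a submitted claim gets a response” is expressed once; what the response is depends on what kind of claim it is, which is decided where the claim object is created, not re-tested in every branch.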
Last week I presented the topic of Simulation for Business Process Management at the Workflow Management Coalition‘s annual global user group event for implementers, developers and practitioners of the XML Process Definition Language for BPM Notation (XPDL4BPMN).
Business process simulation allows one to estimate the behaviour of a process when experimentation with the real system under study would be disruptive or cost prohibitive, or where the real process does not yet exist. It’s conceptually easy to understand and a compelling tool for business management. My presentation was built on a question and an observation. The question: if simulation makes so much sense intellectually, why aren’t business managers using simulation all the time? The observation: standards exist for the process definitions required by simulation scenarios, but none exist for specifying the rest of the parameters those scenarios require (such as the volumes and arrival patterns of work, activity durations, and resource availability).
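To make the observation concrete, here is a sketch of what those unstandardized scenario parameters feed into. The parameter names and the single-queue model are my own illustrative assumptions, not any standard’s:

```python
import random

# Hypothetical scenario parameters of the kind no current standard covers:
# work arrival pattern, activity duration, and resource availability.
scenario = {
    "arrival_rate_per_hour": 10,           # work arrival pattern
    "activity_duration_minutes": (4, 8),   # uniform duration range
    "resources_available": 2,
}

# A minimal single-queue simulation driven by those parameters: work arrives
# at a steady rate, each item is handled by the next free resource, and we
# measure average queueing time in minutes.
def simulate(scenario, hours=8, seed=42):
    rng = random.Random(seed)
    n_arrivals = scenario["arrival_rate_per_hour"] * hours
    lo, hi = scenario["activity_duration_minutes"]
    busy_until = [0.0] * scenario["resources_available"]
    total_wait = 0.0
    for i in range(n_arrivals):
        arrival = i * 60.0 / scenario["arrival_rate_per_hour"]  # even arrivals
        r = busy_until.index(min(busy_until))   # next resource to free up
        start = max(arrival, busy_until[r])
        total_wait += start - arrival
        busy_until[r] = start + rng.uniform(lo, hi)
    return total_wait / n_arrivals

avg_wait = simulate(scenario)
print(round(avg_wait, 2))  # average minutes of queueing under this scenario
```

The process definition (arrive, queue, one activity, done) could be exchanged via a standard today; the `scenario` dictionary, which is what actually makes the run meaningful, could not.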
The presentation tried to answer the above question by outlining the current deficiencies with business process simulation:
- A lack of suitable skills to develop meaningful simulation models.
- Inadequate simulation tools.
- Over-simplified models.
- Indirect (or non) use of available artifacts.
I then described how a standard for specifying business process simulation scenarios could help address these issues. The slides from the presentation are below. They follow the Presentation Zen style and were meant as a visual backdrop to my talk, which carried the actual content, so there isn’t much detail in the slides themselves. I’d like to make them available anyway.