DecisionCamp 2024 Presentation
Revolutionizing Credit Risk Management in Banking

Presented By
Stefaan Lambrecht (The TRIPOD for OPERATIONAL EXCELLENCE)
Description

The European Banking Authority (EBA) Dear CEO letter, typically issued to provide guidance and expectations for banks on key regulatory issues, emphasizes the need for stringent credit risk management, continuous monitoring, and compliance with evolving regulations.

The primary challenge for banks in monitoring customers and credit risks is the complexity and volume of data that must be continuously analyzed and acted upon. This complexity arises from several factors: the variety of triggers, the volume and complexity of metrics, the need for continuous monitoring, rapidly evolving regulatory requirements, and the demand for a comprehensive 360-degree customer view.

By leveraging DMN modeling and execution, banks can effectively meet the EBA’s expectations outlined in the Dear CEO letter. DMN engines provide a robust solution for automated decision-making, continuous monitoring, regulatory compliance, and transparency, ensuring that banks can manage credit risks proactively and efficiently while maintaining the required standards set by the EBA and other regulatory bodies. This alignment not only helps in fulfilling regulatory obligations but also strengthens the overall financial health and stability of the bank.

During his presentation, Stefaan Lambrecht will demonstrate an end-to-end solution to these challenges, inspired by a real-life case and built on an integrated use of DMN, CMMN and BPMN.


Intelligent Digital Assistance for Contract Workflows

Presented By
Tom Debevoise (Advanced Component Research)
Denis Gagne (CEO & CTO, Trisotech)
Description

Legal contracts often lead to intricate workflows that traditional modeling techniques, such as BPMN, struggle to represent accurately. This presentation explores the complexity of real estate sales agreements and how retrieval-augmented generation (RAG) can effectively model their dynamic nature. Real estate contracts, which encompass numerous critical elements like purchase price, property condition, financing terms, and regulatory compliance, are not static but evolve through negotiations and contingencies.

The dynamic and multifaceted nature of these contracts necessitates a sophisticated approach to workflow management for knowledge workers. Current methods fall short in capturing the detailed nuances, resulting in inefficiencies and errors as professionals manually extract essential information. By leveraging RAG, it is possible to maintain an adaptive list of activities and events, akin to a “checklist” with a closing calendar, guiding the efforts of real estate professionals more effectively.

This presentation by Tom and Denis will delve into the challenges of modeling complex contracts and demonstrate how RAG can address these issues, ultimately enhancing the workflow and efficiency of knowledge workers involved in real estate transactions.


Dr. John Svirbely's blog post - In Healthcare: To Automate or Not to Automate, that is the Question
Dr. John Svirbely, MD
Blog

In Healthcare:
To Automate or Not to Automate, that is the Question

By Dr. John Svirbely, MD

Read Time: 3 Minutes

With modeling tools, you can define complex processes such as clinical guidelines. In theory these models can be automated. In practice it may be wise not to automate everything. The decision to automate depends on several factors, such as your goals and the problems that you need to solve. Automation is not without costs, and you need to consider the return on your investment (ROI).

The Decision to Automate

Certain processes or decisions are more attractive to automate than others. To identify these, you may ask some questions:

How much data the models require and how easy it is to obtain are key issues. If the automated process constantly interrupts the user or requires a large amount of data, then it may bring little value to the organization. One solution may be to have standing orders in place that guarantee that the required data is always collected and available when it is needed.

The Emergency Department is an excellent example of a practice setting that can be a challenge to automate. The environment can be chaotic, and some patients require dynamic care that is determined on the fly. Such tasks are a challenge to automate. However, even in the ED there are other processes where automation can relieve staff from drudgery and free them up for patient care.

One issue to consider relates to patient complexity. If most patients are straightforward while only a small subset are clinical challenges, then the complex patients can be triaged to a clinician while the remainder are handled by an automated process. This improves overall efficiency and use of manpower.

Microservices

Even if a guideline is not fully automatable, it often contains elements that are. These can be encapsulated in microservices that are triggered when a certain set of conditions is met.

These are attractive since they often need a limited amount of data. They are easier to create and maintain. On the other hand, many of these services may be needed, which can introduce another set of challenges.

An invalid BPMN diagram

One challenge with microservices is the user experience. Having a lot of microservices means that a lot of messages could be generated and cause alarm fatigue. It is important to develop a strategy that will allow essential information to get through to the user.

Conclusions

The decision to automate or not can be challenging. Several things need to be considered such as cost, liability, acceptability, and care quality. However, considering the economic challenges faced in healthcare today, automation is an attractive idea. Some processes can and should be automated.


Dr. John Svirbely's blog post - Clinical Models at Scale
Dr. John Svirbely, MD
Blog

Clinical Models at Scale

By Dr. John Svirbely, MD

Read Time: 3 Minutes

If you need to create a large number of clinical models – either for a new project or to replace outdated software – then you are probably (or should be) feeling a bit overwhelmed. Such a project may take thousands of hours of coding, several informaticians, and many resources. Faced with such a daunting task it is no wonder that so many legacy systems persist for decades. However, there are ways to ease the burden and give you some control.

Working Smart

Sometimes people feel an urge to jump into model building right off the bat. This often results in working hard all through the project. Spending some time to plan and prepare can often prove to be more efficient in the long run.

When building process or decision models, there are several ways to work smarter, such as:

Standardization

Standardization is something that many people push back on. There are various reasons for this. Sometimes people feel that their domain is unique, and each solution must be individually crafted. While this attitude has some merits, it also increases the work needed to program your solution. The more that you standardize, the fewer the models that you need to develop and maintain, thereby increasing efficiency.

Sometimes you can standardize almost everything, but there are still a few variations between implementation sites that remain. A solution to this problem is to create what Trisotech calls a model “template”, which allows different versions of a model to be tweaked for a specific site, while leaving most of the overall model otherwise unchanged.

Controlling Data and Terminology Proactively

Proactive control of data and terminology may seem insignificant compared to all the other tasks. However, if you do not have control of terminology and data when you start, then later stages of development can become a nightmare with a lot of wasted effort. For example, if you have multiple informaticians, then you will probably have multiple variable names all pointing to the same data object. Each name is interpreted by the software as being unique, and as such each must be linked to your data source. If you have control of your terminology, then you can reduce your data integration challenges by 50% or more.

Making Use of Patterns

When building clinical models, you may notice that the same tasks appear together over and over again. This is termed a pattern.

To illustrate this, let us look at preauthorization, which has 4 main decision tasks:

All of these must be cleared before approval is granted. These tasks can be modeled in BPMN as follows:

If you are a payer faced with preauthorizing drugs or services, then this one pattern can be used over and over again with minor variations. Using patterns can speed development when compared to treating each situation as a unique problem. In addition, users can better understand what you are trying to do.

Reuse

Once a model has been created, it can be used repeatedly. One goal of process and decision modelers is to create a library of models that can be re-used as building blocks in future projects.

When copying a model into another, the copy can occur in 2 ways: as an independent copy of the original (reuse by copy), or as a reference that stays linked to the original model (reuse by reference).

Each approach has its pros and cons. Reuse by reference has many benefits since you do not have to go to each model that uses a particular decision to make any changes. However, to achieve this a good deal of standardization is needed.

Other ways to reuse previously created knowledge include services or business knowledge models (BKMs).

Conclusions

Several strategies can be used to reduce the programming burden without compromising quality. These require some careful thought and planning upfront, but they pay dividends over the long haul, speeding development and simplifying maintenance.


Dr. John Svirbely's blog post - Do Healthcare Process Models Need Attended Tasks?
Dr. John Svirbely, MD
Blog

Do Healthcare Process Models Need Attended Tasks?

By Dr. John Svirbely, MD

Read Time: 2 Minutes

Several challenges may be encountered when creating process models in healthcare:

All of these challenges can be addressed using attended tasks.

What is an attended task?

An attended task is a task or decision with an attribute that pauses execution so a designated user can review, and if necessary change, its results before the process continues.

The review, changes, and user are recorded, confirming with a timestamp that a person has approved the task or decision results.

In a Trisotech BPMN model, an attended task is indicated by the presence of a small check box in the lower left corner, as shown in Figure 1. This example shows a decision task for the diagnosis of anemia based on criteria from the World Health Organization that uses three data inputs (age, sex, and hemoglobin).

Figure 1
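To make the example concrete, here is a minimal FEEL-style sketch of the kind of logic such a decision task encapsulates, using the simplified adult WHO thresholds (hemoglobin below 13 g/dL for males, below 12 g/dL for females); the variable names are assumptions for illustration, age-specific thresholds for children are omitted, and the actual logic behind Figure 1 may differ.

    if sex = "male" and hemoglobin < 13 then "anemia"
    else if sex = "female" and hemoglobin < 12 then "anemia"
    else "no anemia"

In an attended task, the clinician sees this computed result and either approves it or overrides it, for example when a recent transfusion makes the hemoglobin value misleading.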

What happens in an attended task?

As mentioned above, when execution of a process comes to an attended task or decision, it stops and allows the provider to interact with it in ways that have been configured by the model developer. The settings for the attended task are shown in Figure 2.

Figure 2

The users able to make changes can be restricted. This allows a provider who is familiar with the patient to individualize the patient’s care based upon information known or observed about the patient. For example, the significance of a hemoglobin value may vary depending on whether or not the patient was transfused prior to the specimen being collected. Similarly, a certain pattern of clinical findings may not fully capture the patient’s current state, while a clinician at the bedside can observe it. Things in life may look different than they do on paper.

Since data and decisions are all recorded, retrospective analysis of decisions relative to outcomes can be performed. This gives insights into care and interventions, supporting the development of a learning health system.

Caveats in Using Attended Tasks

Attended tasks are useful at key decision points that can significantly impact the patient. Not every task in a process should be an attended task, since an attended task requires interaction with a user, thereby slowing the process. Deciding which tasks should be treated as an attended task requires weighing the pros and cons of the choice.

Conclusion

Healthcare process models may seem like a black box to users. An attended task can shed light on the process and allows clinicians to interact with a model at key decision points. If used judiciously they can improve healthcare, as well as provide insights into how clinical decisions impact outcomes.


Dr. John Svirbely's blog post - Orchestrating Generative AI in Business Process Models
Dr. John Svirbely, MD
Blog

Orchestrating Generative AI in Business Process Models

By Dr. John Svirbely, MD

Read Time: 2 Minutes

Generative AI is spreading fast and constantly becoming more powerful. Its uses and roles in healthcare are still uncertain. Although it will be disruptive, it is unclear what it will change or what will be replaced as the technology evolves.

The use of Generative AI poses several challenges, at least for now. In some respects, it behaves like a black box. It may be unable to give the sources for what it produces, so it is hard to judge the reliability of its output. It can be hard to validate depending on how it is used. These factors may make doctors, patients, and regulators nervous about its use in a sensitive area like healthcare. If a claim of malpractice is made involving it, then its opaque behavior may be hard to defend.

Generative AI and Business Process Models

A business process model can access Generative AI simply by adding a connector to a task, which is done by a simple drag and drop. Because it is now part of a process, you can control when and how it is called.

Since there may be several possible paths through the model, you can have different calls that are appropriate for each path. Orchestrating the output provides an opportunity to give an individualized solution for a specific situation. Orchestration of Generative AI can make it less of a black box.

Since the calls to Generative AI can be tightly constrained and since you know exactly where it is being used and what the inputs are, the appropriateness of its explanation can be judged in context. This can make validation a bit less daunting.

Illustrative Example

A common problem in healthcare is the need to communicate health information to patients. Not only may the patient and family not understand what the provider is saying, but also the provider may misunderstand the patient. The need to communicate better has driven demand for access to human translators around the clock. This raises other problems, as a translator may not understand the nuances of medical terms. It can also be quite expensive, since you need to have multiple translators on call.

In Figure 1 there is a portion of a BPMN model for the diagnosis of anemia. A DMN decision model first determines whether a patient has anemia, and, if so, its severity. It may be desirable to inform the patient quickly and easily about these findings. The problem of translation can be approached by taking the outputs of the decision and sending them as inputs to Generative AI (in this case OpenAI, indicated by the icon in the top left corner), along with the patient’s preferred language and education level. The Generative AI then takes these inputs and instructions and generates a letter tailored to the patient.

Figure 1
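As a rough sketch of the kind of data mapping such a task might use (the entry names are invented for illustration and are not taken from the model in Figure 1), the decision outputs and patient preferences can be gathered into a single FEEL context and passed to the Generative AI task together with the instruction:

    {
        diagnosis: "anemia",
        severity: anemia severity,
        preferred language: patient.preferred language,
        education level: patient.education level,
        instruction: "Write a short letter informing the patient of the diagnosis and severity, in the preferred language and at the stated education level."
    }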

Generating narrative text is a strength for Generative AI. If known inputs and appropriate constraints are placed on it, then it can reproducibly generate a letter to inform a patient of the diagnosis in language that the patient can understand. Performance can be validated by periodic review of various outputs by a suitably qualified person. This can simply but elegantly solve problems in a cost-effective manner.


Dr. John Svirbely's blog post - Going from Zero to Success using BPM+ for Healthcare. Part I: Learning Modeling and Notation Tools
Dr. John Svirbely, MD
Blog

Going from Zero to Success using BPM+ for Healthcare.

Part I:
Learning Modeling and Notation Tools

By Dr. John Svirbely, MD

Read Time: 3 Minutes

Welcome to the first installment of this informative three-part series providing an overview of the resources and the success factors required to develop innovative, interoperable healthcare workflow and decision applications using the BPM+ family of open standards. This series will unravel the complexities and necessities for achieving success with your first clinical guideline automation project. Part I focuses on how long it will take you to reach cruising speed for creating BPM+ visual models.

When starting something new, people often ask some common questions. One is how long it will take to learn the new skills required. This impacts how long it will take to complete a project and therefore costs. Learning something new can also be somewhat painful when we are set in our old ways.

Asking such questions is important, since there is often a disconnect between what is promoted online and the reality. I can give my perspective based on using the Trisotech tools for several years, starting essentially from scratch.

How long does it take to learn?

The simple answer – it depends. A small project can be tackled by a single person quite rapidly. That is how I got started. Major projects using these tools should be approached as team projects rather than something an individual can do. Sure, there are people who can master a wide range of skills, but in general most people are better at some things than others. Focusing on a few things is more productive than trying to do everything. A person can become familiar with the range of tools, but they need to realize that they may only be able to unlock a part of what is needed to automate a clinical guideline.

The roles that need to be filled to automate a clinical guideline with BPM+ include:

1. subject matter expert (SME)
2. medical informaticist
3. visual model builder
4. hospital programmer/system integrator
5. project manager
6. and of course, tester

A team may need to be composed of various people who bring a range of skills and fill various roles. A larger project may need more than one person in some of these roles.

The amount of time needed to bring a subject matter expert (SME) up to speed is relatively short. Most modeling diagrams can be understood and followed after a few days. I personally use a tool called the Knowledge Entity Modeler (KEM) to document domain knowledge; this allows specification of term definitions, clinical coding, concept maps and rule definitions. The KEM is based on the SBVR standard, but its visual interface makes everything simple to grasp. Other comparable visual tools are available. The time spent is quickly compensated for by greater efficiency in knowledge transfer.

The medical informaticist has a number of essential tasks such as controlling terminology, standardizing data, and assigning code terms. The person must understand the nuances of how clinical data is acquired, including FHIR. The importance of these services should not be underestimated, since failures here can cause many problems later as the number of models increases or as models from different sources are installed.

The model builder uses the various visual modeling languages (DMN, BPMN, CMMN) according to the processes and decisions specified by the SME. These tools can be learned quickly to some extent, but there are nuances that may take years to master. While some people can teach themselves from books or videos, the benefits of taking a formal course vastly outweigh the cost and time spent. Trisotech offers eLearning modules that you can learn from at your own pace.

When building models, there is a world of difference between a notional model and one that is automatable. Notional models are good for knowledge capture and transfer. A notional model may look good on paper only to fail when one tries to automate it. The reasons for this will be discussed in Part 3 of this blog series.

The hospital programmer or system integrator is the person who connects the models with the local EHR or FHIR server so that the necessary data is available. Tools based on CDS Hooks or SMART on FHIR can integrate the models into the clinical workflow so that they can be used by clinicians. This person may not need to learn the modeling tools to perform these tasks.

The job of the project manager is primarily standard project management. Some knowledge of the technologies is helpful for understanding the problems that arise. This person’s main task is to orchestrate the entire project so that it keeps focused and on schedule. In addition, the person keeps chief administrators up to date and tries to get adequate resources.

The final player is the tester. Testing prior to release is best done independently of other team members to maintain objectivity. There is potential for liability with any medical software, and these tools are no exception. This person also oversees other quality measures such as bug reports and complaints. Knowing the modeling languages is helpful but understanding how to test software is more important.

My journey

I am a retired pathologist and not a programmer. While having used computers for many years, my career was spent working in community hospitals. When I first encountered the BPM+ standards, it took several months and a lot of prodding before I was convinced to take formal training. I have never regretted that decision and wish that I had taken training sooner.

I started with DMN. On-line training takes about a month. After an additional month I had enough familiarity to become productive. In the following 12 months I was able to generate over 1,000 DMN models while doing many other things. It was not uncommon to generate 4 models in one day.

I learned BPMN next. Training online again took a month. This takes a bit longer to learn because it requires an appreciation of how to design a process so that it executes optimally. Initially a model would take me 2-3 days to complete, but later this dropped to less than a day. Complex models can take longer, especially when multiple people need to be orchestrated and exception handling is introduced.

CMMN, although offering great promise for healthcare, is a tough nut to crack. Training is harder to arrange, and few vendors offer automatable versions. This standard is better saved until the other standards have been mastered.

What are the barriers?

Most of the difficulties that I have encountered have not been related to using the standards. They usually arise from organizational or operational issues. Some common barriers that I have encountered include:

1. lack of clear objectives, or objectives that constantly change.
2. lack of commitment from management, with insufficient resources.
3. unrealistic expectations.
4. rushing into models before adequate preparations are made.

If these can be avoided, then most projects can be completed in a satisfactory manner. How long it takes to implement a clinical guideline will be discussed in the next blog.


Bruce Silver's blog post - Instance Alignment in BPMN
Bruce Silver
Blog

Instance Alignment in BPMN

By Bruce Silver

Read Time: 3 Minutes

One of the most common mistakes beginners make with BPMN stems from lack of clarity as to what exactly BPMN means by a process. A BPMN process is a defined set of sequences of activities, performed repeatedly in the course of business, starting from some triggering event and leading to some final state. The key word here is “repeatedly”. The same process definition is followed by each instance of the process. Not all instances follow the same sequence of activities, but all follow some sequence allowed by the process definition. That’s not Method and Style, that’s from the spec. The spec just doesn’t say that very clearly.

Each process instance has a defined start and end. The start is the triggering event, a BPMN start event. The end occurs when the instance reaches an end state of the process instance, which in Method and Style is an end event. It helps to have a concrete idea of what the process instance represents, but I have found in my BPMN Method and Style training that most students starting out cannot tell you. Actually it’s very easy: It is the handling of the triggering event, which in Method and Style is one of only three kinds: a Message event, representing an external request; a Timer event, representing a scheduled recurring process; or a None start event, representing manual start by an activity performer in the process, which you could call an internal request. Of these three, Message start is by far the most common. That request message could take the form of a loan application, a vacation request, or an alarm sent by some system. The process instance in that case is then essentially the loan application, the vacation request, or the alarm. In Method and Style, it’s the label of the message flow into the process start event. With a Timer start event, the instance is that particular occurrence of the process, as indicated by the label of the start event.

Here is why knowing what the process instance represents is important. The instance of every activity in the process must have one-to-one correspondence with the process instance! Of course, there are a few exceptions, but failure to understand this fundamental point leads to structural errors in your BPMN model. And those structural errors are commonplace in beginner models, because other corners of the BPM universe don’t apply that constraint to what they call “processes”.

Take, for example, the Process Classification Framework of APQC, a well-known BPM Architecture organization. It is a catalog of processes and process activities commonly found in organizations. But these frequently are not what BPMN would call processes. Even those that qualify as BPMN processes may contain activities that are not performed repeatedly on instances or whose instances are not aligned with the process instance. Here is one called Process Expense Reimbursements, listing five activities.

But notice that two of the five (8.6.2.1 and 8.6.2.5) are not activities aligned one-to-one with the process instance. That is, they are not performed once for each expense report. That means that if we were to model 8.6.2 Process Expense Reimbursements in BPMN, activities 8.6.2.1 and 8.6.2.5 could not be BPMN activities in that BPMN process. So where do they go? They need to be modeled in separate processes… if they can be modeled as BPMN processes at all! Take 8.6.2.1 Establish and communicate policies and limits. For simplicity, let’s assume that establishing and communicating have one-to-one correspondence, so they could be part of a single process. How does an instance of that process start? It could be a recurring process – that’s Timer start – performed annually. Or it could be triggered occasionally on demand – Message or None start. The point is that 8.6.2.1 needs to be modeled as a process separate from Process Expense Reimbursements. The result of that process, the policy and limits information, is accessible to Process Expense Reimbursements through shared data, such as a datastore.

Activity 8.6.2.5 Manage personal accounts is not a BPMN activity at all. It cannot be a subprocess, because there is no specified set of activity sequences from start to end. To me it is an instance of a case in CMMN, not an activity in this BPMN process.

All this is simply to point out that instance alignment is a problem specific to BPMN because other parts of BPM do not require it.

Since “business processes” in the real world often involve actions that are not one-to-one aligned with the main BPMN process instance, how do we handle them? We’ve already seen one way: Put the non-aligned activity in a separate process – or possibly case. Communication of data and state between the main process and the external process or case is achieved by a combination of messages and shared data.

Repeating activities are another way to achieve instance alignment.

When instance alignment requires two BPMN processes working in concert, it is often helpful to draw the top level of both processes in the same diagram. This can clarify the relationship between the instances as well as the coordination mechanism, a combination of messages and shared data. You can indicate a one-to-N relationship between instances of Process A and Process B by placing a multi-instance marker on the pool of Process B.

An example of this we use in the BPMN Method and Style training is a hiring process. The instance of the main process is a job opening to be filled. It starts when the job is posted and ends when it is filled or the posting is withdrawn. So it qualifies as a BPMN process. But most of the work is dealing with each individual applicant. You don’t know how many applications you will need to process. You want processing of multiple applicants to overlap in time, but they don’t start simultaneously; each starts when the application is received. So repeating activities don’t work here. One possible solution is shown below.

Here there is one instance of Hiring Process for N instances of Evaluate Candidate, so the latter has the multi-participant marker. Hiring Process starts manually when the job is posted and ends when either the job is filled or the posting expires unfilled after three months. Each instance of Evaluate Candidate starts when the application is received, and there are various ways it could end. It could end right at the start if the job is already filled, since before the instance is routed to any person, the process checks a datastore for the current status of the job opening. It could end after Screen and interview if the candidate is rejected. If an offer is extended, it could end if the candidate rejects the offer, or successfully if the offer is accepted. And there is one more way: Each running instance could be terminated in a Message event subprocess upon receiving notice from Hiring Process that the posting is either filled or canceled. While not perfect, this BPMN model illustrates instance alignment between multiple processes working in concert, including how information is communicated between them via messages and shared data.

There is yet another way to do it… all in a single process! It uses a non-interrupting Message event subprocess, and is an exception to the rule that all process activities must align one-to-one with the process instance. It looks like this:

Now instead of being a separate process, Evaluate Applicant is a Message event subprocess. Each Application message creates a new instance of Evaluate Applicant. You don’t know how many will be received, and they can overlap in time. As before, each instance checks the datastore Job status. Since everything is now in one process, we can no longer use messages to communicate between Evaluate Applicant and the main process. Here we have a second datastore, candidates, updated by Evaluate Applicant and queried by Get shortlist to find newly passed applicants. Instead of an interrupting event subprocess to end the instance, we use a Terminate event after notifying all in-process candidates.

If you are just creating descriptive, i.e., non-executable, BPMN models, you may wonder why instance alignment matters. It certainly can make your models more complicated. But even in descriptive models, in order for the process logic to be clear and complete from the printed diagrams alone – the basic Method and Style principle – the BPMN must be structurally correct. If it is not, the other details of the model cannot be trusted. If you want to get your whole team on board with BPMN Method and Style, check out my training. The course includes 60-day use of Trisotech Workflow Modeler, lots of hands-on exercises, and post-class certification.

Follow Bruce Silver on Method & Style.


Sandy Kemsley’s Vlog - AI and BPM
Vlog

AI and BPM

By Sandy Kemsley

Video Time: 8 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here for the Trisotech blog to talk about the latest Hot Topic in BPM: Artificial Intelligence.

Now, I’ve been at a couple of conferences in the last month, and I’ve had a few briefings with vendors and there’s a lot of interest in this intersection between AI and BPM. But what does that mean? What exactly is the intersection? And it’s not just one answer, because there’s several places where AI and Process Management come together.

Now, the dream, or the nightmare in some people’s opinion, is that AI just takes over processes: it figures out what the process should be, then it automates and executes all the steps in the process. The reality is both less and more than that. So, let’s look at the different use cases for AI in the context of BPM. Let’s start at the beginning with process discovery and design, and there’s quite a bit that AI can do in this area as an assistive technology. Now, at this point it might not be possible to have an AI completely design processes without human intervention, but it is possible to have AI act as sort of a co-pilot for authoring process models or finding improvements to them.

There’s a couple different scenarios for this.

First of all, you could have a person just describe the process that they want to have in broad terms, and have generative AI create a first version of that process model for them. Then the human designer can make changes directly or add additional information, the AI can make refinements to the process model, and so on. Now, the challenge with using generative AI in this scenario is that you need to ensure that the training data is relevant to your situation. This might mean that you need to use private AI engines and data sources that are trained on your own internal data, or at the very least on data that’s specific to your industry, in order to ensure reasonably good results.

Now, the second process modeling scenario is when there are logs of processes in place, like we would use for process mining, and we’ve talked about process mining in a previous podcast. Now, in that case, there are possibilities for having AI look at the log data and then other enterprise and domain data, and, using process mining and other search-based optimization, suggest improvements to the process. So, for example adding parallelism at certain points, or automating certain steps or decisions, or having some activities be required for regulatory or conformance reasons. Again, there needs to be some checks and balances on the training data that’s used for the AI to ensure that you’ve included processes and regulations that pertain to your business.

Now, in both of these cases, there’s the expectation that a person who’s responsible for the overall process operation, like the process owner, might review the models that are created or revised by the AI before they’re put into production. It’s not just an automated thing where the AI creates a model or modifies a model and it’s off and running. Now, the same types of AI and algorithms that we would use for process improvement, based on process mining and other domain knowledge, can also be used in a scenario where AI acts again as a co-pilot, but for the people that are doing the human activities in a process, the knowledge workers. They can ask complex questions about the case that they’re working on, they can be offered suggestions on the next best action, and they can have guard rails put in place so that they don’t make decisions at a particular activity that would violate regulations or policies.

Now, we already see a lot of decision management and machine learning applied in exactly this situation, where a knowledge worker just needs a little bit of an assist to help them make complex decisions or perform more complex activities. And adding AI to the mix means that we can have even more complex automation and decision-making that can support knowledge workers as they do their job. So, the ultimate goal is to ensure that the knowledge workers are making the best possible decisions at activities within processes, even if the environment is changing, maybe because regulations are changing or procedures are changing. And then also to support less skilled knowledge workers so that they can become more familiar with the procedures that are required, because they have a trusted expert, namely the AI, by their side coaching them on what they should be doing next.

Now, the last scenario for AI in the context of processes is to have a completely automated system, or even just completely automated activities within a process that used to be performed by a person. The more times that an activity is performed successfully, and the more data is collected about the context and domain knowledge behind that decision, the more likely it is that AI can be trained to make decisions and do activities of the same complexity and with the same level of quality as a human operator. We also see this with AI chatbots. We’re seeing these a lot now, where they interact with other parties in processes, like providing customer service information. Previously a knowledge worker might have interacted with a customer, maybe on a phone or by email; we’re seeing a lot of chatbots in place now for customer service scenarios. Now, a lot of them are pretty simple and don’t really deserve to be called AI. They’re just looking for simple words and providing some stock answers, but what generative AI is starting to give us in this scenario is the ability to respond to more complex questions from a customer and leave the human operators free to handle situations that can’t be automated, or rather can’t be automated yet.

Now, currently I don’t think we need to worry about AI completely taking over our business processes. There’s lots of places where AI can act as a co-pilot to assist designers and knowledge workers to do the best job possible. But it doesn’t replace their roles: it just gives them an assist. Now, a lot of industries don’t have all the skilled people that they need in both of these areas, designers and knowledge workers, or it takes a long time to train them, so letting the people who are there be more productive is a good thing. So, using AI to make the few skilled resources we have more productive is something that’s beneficial to the industry and beneficial to customers. Now, as I noted earlier, the ability of AI to make these kinds of quality decisions and perform the types of actions that are currently being done by people is going to be heavily reliant on the training data that’s used for the AI. So, you can’t just use a public chatbot, like ChatGPT, for interacting with your customers. That’s not going to work out all that well. Instead, you do want to be training on some of your own internal data as well as some industry-specific data.

Now, where we do start to see people being replaced is where AI is used to fully automate specific activities, decisions, or customer interactions within a process. However, this is not a new phenomenon. Process automation has been replacing people doing repetitive activities for a long time. So, all that we’re doing by adding AI is increasing the complexity of the type of activity that can be fully automated. The idea that we’re automating some activities is not new; this has been going on a long time. So, the bar has been creeping up: we went from simple automation to more complex decision management and machine learning, and now we have full AI in its current manifestation. So, we just need to get used to the idea that it’s another step in the spectrum of things that we’re doing by adding intelligence into our business processes.

Now, are you worried about your own job? You could go and check out willrobotstakemyjob.com or just look around at what’s happening in your industry. If you’re adding value through skills and knowledge that you have personally that’s very difficult to replicate, you’re probably going to be able to stay ahead of the curve and you’ll just get a nice new AI assistant who’s going to help you out. If you’re doing the same thing over and over again however, you should probably be planning for when AI gets smart enough to do your job as well as you do.

That’s all for today. You can find more of my writing and videos on the Trisotech blog or on my own blog at column2.com. See you next time.


Sandy Kemsley’s Vlog - Future-Proofing Your Business With BPM
Vlog

Future-Proofing Your Business With BPM

By Sandy Kemsley

Video Time: 8 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here for the Trisotech blog with a new topic now that we’ve finished my series on best practices in business automation application development. Today, I’m going to talk about future proofing your business with process automation technologies such as BPM.

Now, a little over three years ago, everything was business as usual. Organizations were focused on new competitors, new business models, new regulations, but it wasn’t a particularly disruptive time. Then the pandemic happened, and that was disruptive! So, supply chains, customers, business models, everything changed dramatically in some cases.

Now, that’s not news of course not by now, but it’s become obvious that it’s not enough to have shifted to some new way of doing things in a response to a one-time event. Companies have to become more easily adaptable to frequent disruptions, whether they’re technological, societal or environmental, or they’re just not going to exist anymore. In many cases this means modernizing the very technological infrastructure that supports businesses.

So how is your business going to adapt to change? Both the changes that have already happened and the unknown changes in the future? There’s a lot that falls under the umbrella of modernization, and you need to look at whether you’re doing just enough to survive or if you’re taking advantage of this disruption to thrive and actually outgrow your competition.

I see three ways that companies have been reacting to disruption:

1. You can support your existing business, which is basically adding the minimum amount of technology to do the same things that you were doing before. This is purely a survival model, but if you have a unique product or service or very loyal customers, that might be enough for you.

2. You can improve your business by offering the same products or services but in a much better way. This gives you better resilience to future disruptions, improves customer satisfaction and shifts you from just surviving to thriving.

3. You can innovate to expand the products and services that you offer or move into completely new markets. This is going to let you leapfrog your competition and truly thrive, not just as we emerge from the pandemic, but in any sort of future disruption that we might have.

More than managing your business processes

So I mentioned BPM, but this is about more than just managing your business processes. There’s a wide variety of technologies that come into play here and that really support future-proofing of your business: process and decision automation, intelligent analysis with machine learning and AI, content and capture, customer interactions with intelligent chatbots, and cloud infrastructure for access anywhere, anytime.

So you have to look at how to bring all of those together, and just understanding how all of those fit is like an entire day’s lecture on its own, but you probably have a bunch of those that you’re using already. Let’s look at a few examples of this support/improve/innovate spectrum that I’ve been talking about when we’re dealing with disruption, and what it means for future-proofing your business. So, supporting your existing business is a matter of just doing what you can to survive, and hoping that either you can keep up or that things will go back to normal. Basically you’re doing the same business that you always were, but with maybe a bit of new technology to support some new ways of doing things:

  • Your employees might be working from home, so you needed some new cloud or network technology to help with this.
  • You probably also need some new management techniques in order to stay productive and motivated even though your workforce is highly distributed geographically.
  • You also need to handle changing customer expectations. So you have to have some amount of digital interactions, and if you’re dealing with physical goods you might be looking at new ways of handling delivery.
  • Your supply chain processes need to become flexible. This is one of the things that we really saw during the pandemic: there were a lot of broken supply chains, so you want to be able to change suppliers or change channels in the event of disruption.

But let’s go a little bit beyond surviving disruption, which is what you do by just patching something together to support your existing model. The next step is to look at disruption as an opportunity to thrive. So you want to still be in the same business but embrace new technologies and new ways of doing things. This really pushes further into looking at customer expectations: adding in self-serve options if you don’t already have them, and then coupling that with intelligent automation of processes and decisions. So, once you’ve added intelligence to your business operations to let them be done mostly without human intervention, a customer can now kick off a transaction through self-service and see it completed almost immediately by intelligent automation. Same business, better way to do it: more efficient, faster, more accurate, better customer satisfaction.

Now, this is also going to be helped by having proper business metrics that are oriented towards your business goals. With more automation, data is going to be captured directly about how your operation is working, and that’s going to feed directly into the metrics. You can then use those metrics to guide knowledge workers so that they know what they should be doing next, and also to understand where customer satisfaction stands and how you can improve it.

So this lets you move past your competition while keeping your previous business focus. Given two companies, you and your competitor, offering the same products or services: if one does only the survival-level support that I talked about previously and the other makes more intelligent improvements focused on customer satisfaction, who do you think is going to win?

Now, the third stage of responding to disruption and adapting to change is innovation. You’ll continue to do process and operational improvements through performance monitoring and data-driven analytics, but also move into completely new business models. Maybe you repackage your products or services and sell them to completely different markets, so you might move from commercial to consumer markets or vice versa, or sell into different geographies or different industries, because now you have more intelligent processes and this always-on elastic infrastructure. Here again, you’re moving past your competition by not only improving your business but actually expanding into new markets, taking on new business models that are supported by this technology-based innovation.

So it’s the right application of technology that lets you do more types of business and more volume without increasing your employee headcount. Without automation and flexible processes you just couldn’t do that, and without data-driven analytics you wouldn’t have any understanding of the impact that such a change would have on your business or whether you should even try it. So you need to have all of that: the data that supports the analytics and the right type of technology applied to create more intelligent business operations. That is what’s going to allow you to move from just surviving, to thriving, to innovating.

Now, that’s a lot of change. The question that all of you need to be asking yourselves now is not “is this the new normal?” but really “why weren’t we doing things this way before?” There are just a lot of better ways that we could be doing things, and we’re now being pushed to take those things on.

That’s all for today. Next month I’m going to be attending the academic BPM conference in the Netherlands, and there’s always some cool new ideas that come up so watch for my reports from over there!

You can find more of my writing and videos on the Trisotech blog or on my own blog at column2.com. See you next time.


What is FEEL?

FEEL (Friendly Enough Expression Language) is a powerful and flexible standard expression language developed by the OMG® (Object Management Group) as part of the Decision Model and Notation (DMN™) international standard.



It is a valuable tool for modeling and managing decision logic in many domains, including healthcare, finance, insurance, and supply chain management. FEEL is designed specifically for decision modeling and execution and to be human-readable to business users, while still maintaining the expressive power needed for complex decision-making. Its simplicity, expressiveness, domain-agnostic functionality, strong typing, extensibility, and standardization make FEEL a valuable tool for representing and executing complex decision logic in a clear and efficient manner. Organizations using FEEL enjoy better collaboration, increased productivity, and more accurate decision-making.

What Are Expression Languages?

FEEL is a low-code expression language, but what is the difference between expression languages, scripting languages, and programming languages? They are all different types of languages used to write code, but they have distinct characteristics and uses.

Expression Languages

Expression languages are primarily designed for data manipulation and configuration purposes. They are focused on evaluating expressions rather than providing full-fledged programming capabilities. Expression languages are normally functional in nature, meaning that at execution the expression will be replaced by the resulting value. What makes them attractive to both citizen developers and professional developers is that they are usually simpler and have a more limited syntax compared to general-purpose programming languages and/or scripting languages. Due to their simplicity, expression languages are often more readable and easier to use for non-programmers or users who don’t have an extensive coding background. FEEL is a standard expression language.
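For example, the following FEEL expression (the variable name is illustrative) is simply evaluated and replaced by its resulting value; with order total bound to 1200 it yields the string "manager approval":

    if order total > 1000 then "manager approval" else "automatic approval"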

Scripting Languages

Scripting languages provide abstractions and higher-level constructs that make programming using them easier and more concise than programming languages. They are usually interpreted rather than compiled, meaning that the code is executed line-by-line by an interpreter rather than being transformed into machine code before execution. Popular examples of scripting languages are Python, JavaScript, and Ruby.

Programming Languages

Programming languages are general-purpose computer languages designed to express algorithms and instructions to perform a wide range of tasks and create applications. They offer extensive features and capabilities for developing complex algorithms, data structures, and user interfaces. They offer better performance compared to scripting languages due to the possibility of compiling code directly into machine code. Examples of programming languages include C++, Java, and C#.

Is FEEL Like Microsoft Power FX (Excel Formula Language)?

FEEL and Power FX are both expression languages used for data, business rules, and expressions, but in different contexts. Power FX is a low-code programming language based on Excel Formula Language, tailored for Microsoft Power Platform, with some limitations in handling complex decision logic. As soon as the business logic gets a bit tricky, Power FX expressions tend to become highly complex to read and maintain. On the other hand, FEEL is a human-readable decision modeling language, designed for business analysts and domain experts, offering a rich set of features for defining decision logic, including support for data transformations, nested decision structures, and iteration. FEEL provides clear logic and data separation, making it easier to understand and maintain complex decision models.

While Power FX has a visual development environment in the Microsoft Power Platform, FEEL is primarily used within business rules and decision management systems supporting DMN and process orchestration platforms. FEEL is a language standard across multiple BPM and decision management platforms, providing interoperability, while Power FX is tightly integrated with Microsoft Power Platform services. For further comparison, see Bruce Silver’s articles FEEL versus Excel Formulas and Translating Excel Examples into DMN Logic.
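As a small invented illustration of that readability, filtering and transforming a list stays close to plain language in FEEL (claims and its fields are assumed names, not taken from any standard model):

    sum( claims[status = "approved"].amount )

    for c in claims return { id: c.id, net amount: c.amount * (1 - c.discount rate) }

The first expression totals the approved claim amounts; the second transforms each claim into a smaller context with a computed net amount.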

FEEL Benefits for Technical People and for Business People

Technical Benefits of FEEL

Decision-focused language

FEEL is designed specifically for decision modeling and business rules. It provides a rich set of built-in functions and operators that are tailored for common decision-making tasks. This decision-focused nature makes FEEL highly expressive and efficient for modeling complex business logic.

Expressiveness

FEEL supports common mathematical operations, string manipulation, date and time functions, temporal logic and more. This expressiveness enables the representation of complex decision rules in a concise and intuitive manner.
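
A few illustrative examples of that expressiveness (the variable names are hypothetical):

  decimal( Loan Amount * Interest Rate / 12, 2 )
  upper case( Customer Last Name ) + ", " + Customer First Name
  Due Date - today() < duration("P7D")

The first rounds a monthly interest amount to two decimal places, the second builds a formatted name string, and the third checks whether a due date is less than seven days away.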

Decision Table Support

FEEL has native support for decision tables, which are a popular technique for representing decision logic. Decision tables provide a tabular representation of rules and outcomes, making it easy to understand and maintain complex decision logic.

Strong typing and type inference

FEEL is a strongly typed language, which means it enforces strict type checking. This feature helps prevent common programming errors by ensuring that values and operations are compatible.

Boxed Expression Support for FEEL

Boxed expressions allow FEEL expressions and statements to be structured visually, including:

  • If, then, else statements
  • For, in, return statements
  • List membership statements
  • … and more.

These visual constructs, along with autocompletion, make complex expressions easy to create, read, understand, and debug, as illustrated below.
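
In textual form (with hypothetical variable names), these constructs look like the following; in the boxed-expression editor each part is laid out in its own labeled cell:

  if Order Total > 1000 then "Manager Approval" else "Auto Approve"
  for item in Order Items return item.quantity * item.price
  Payment Method in ("Credit Card", "Wire Transfer")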

Flexibility and modularity

FEEL supports modular rule definitions and reusable expressions, promoting code reuse and maintainability. It allows the creation of decision models and rule sets that can be easily extended, modified, and updated as business requirements change. This flexibility ensures agility in decision-making processes.

Testing and Debugging

FEEL expressions can be tested and debugged independently of the larger application or system. This enables users to validate and verify decision logic before deployment, ensuring accuracy and reliability. FEEL also provides error handling and exception mechanisms that help identify and resolve issues in decision models.

Execution efficiency

FEEL expressions are designed to be executed efficiently, providing fast and scalable performance. FEEL engines often use optimized evaluation algorithms and data structures to ensure high-speed execution of decision logic, even for complex rule sets.

Integration

FEEL can be easily integrated with other programming languages and platforms. Many decision management systems and business rules engines provide support for executing FEEL expressions alongside other code or as part of a larger application. This enables seamless integration of decision logic via services into existing IT architectures and workflows.

Extensibility

FEEL can be extended with domain-specific functions and operators to cater to specific industries or business domains. These extensions can be defined to encapsulate common calculations, business rules, or industry-specific logic, enabling greater reusability and modularity.

Interoperability

FEEL also enables the sharing and reuse of decision models across different organizations and applications.

Business Benefits of FEEL

Standardization and Vendor-neutrality

FEEL is a standardized language within the OMG DMN standard, which means it has a well-defined specification and is supported by various software tools and platforms. Standardization ensures interoperability, as FEEL expressions can be used across different DMN-compliant systems without compatibility issues. FEEL is designed to be portable across different platforms and implementations.

Business-Friendly

FEEL focuses on capturing business rules and decision logic in a way that is intuitive and natural for business users. This allows subject matter experts and domain specialists to directly participate in the decision modeling process, reducing the dependency on IT teams and accelerating the development cycle.

Simplicity and Readability

FEEL has a syntax that is easy to read and understand – even for non-technical users like subject matter experts and citizen developers. It uses natural language constructs including spaces in names and common mathematical notation. This simplicity enhances collaboration between technical and non-technical stakeholders, facilitating the development of effective decision models.

Ease of Use

FEEL is supported by various decision management tools and platforms. These tools provide visual modeling capabilities, debugging, testing, and other features that enhance productivity and ease of use. The availability of modeling and automation tooling support simplifies the adoption and usage of FEEL.

Decision Traceability

FEEL expressions support the capture of decision traceability, allowing users to track and document the underlying logic behind decision-making processes. This traceability enhances transparency and auditability, making it easier to understand and justify the decisions made within an organization.

Decision Automation

FEEL has well-defined semantics that support the execution of decision models. It allows the evaluation of expressions and decision tables, enabling the automated execution of decision logic. This executable semantics ensures that the decision models defined in FEEL can be deployed and executed in a runtime environment with other programs and systems.

Compliance and Governance

FEEL supports the definition of decision logic in a structured and auditable manner. This helps businesses ensure compliance with regulatory requirements and internal policies. FEEL’s ability to express decision rules transparently allows organizations to track and document decision-making processes, facilitating regulatory audits and internal governance practices. FEEL includes several features specifically tailored for decision modeling and rule evaluation. It supports concepts like ranges, intervals, and temporal reasoning, allowing for precise specification of conditions and constraints. These domain-specific features make FEEL particularly suitable for industries where decision-making based on rules and constraints is critical, such as healthcare, finance, insurance, and compliance.

Decision Analytics

FEEL provides the foundation for decision analytics and reporting. By expressing decision logic in FEEL, organizations can capture data and insights related to decision-making processes. This data can be leveraged for analysis, optimization, and continuous improvement of decision models. FEEL’s expressive capabilities allow for the integration of decision analytics tools and techniques, enabling businesses to gain deeper insights into their decision-making processes.

Trisotech FEEL Support

Most comprehensive FEEL implementation

Trisotech provides the industry’s most comprehensive modeling and automation tools for DMN, including support for the full syntax, grammar, and functions of the FEEL expression language. To learn more about the basic types, logical operators, arithmetic operators, intervals, statements, extraction, and filters supported by Trisotech, see the FEEL Poster.

FEEL Boxed Expressions

Boxed Expressions are visual depictions of the decisions’ logic. Trisotech’s visual editor makes the creation of Boxed Expressions and FEEL expressions easy and accessible to non-programmers and professional programmers alike.

FEEL Functions

FEEL’s entire set of built-in functions is documented and menu-selectable in the editor. The visual editor also offers support for Trisotech-provided custom FEEL functions, including functions for Automation, Finance, Healthcare, and other categories.

Autocompletion

The Trisotech FEEL autocompletion feature proposes variable and function names, including qualified names, as you type when editing expressions, saving time and improving accuracy.

FEEL as a Universal Expression Language

Trisotech has also expanded the availability of the international standard FEEL expression language to its Workflow (BPMN) and Case Management (CMMN) visual modelers. For example, FEEL expressions can be used for providing Gateway logic in BPMN and If Part Condition expressions on sentries in CMMN.
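
For instance, a hypothetical Gateway condition could be written directly as a FEEL boolean expression:

  Order Total > 10000 and Customer Risk Level = "High"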

FEEL Validation and Debugging

Trisotech provides validation for FEEL and real-time full-featured debugging capabilities. To learn more about testing and debugging read the blog Trisotech Debuggers.

Additional Presentations and Blogs

You can also watch a presentation by Denis Gagne on using FEEL as a standards-based low-code development tool and read a blog by Bruce Silver about how using FEEL in DMN along with BPMN™ is the key to standards-based Low-Code Business Automation.

OMG®, BPMN™ (Business Process Model and Notation™), DMN™ (Decision Model and Notation™), CMMN™ (Case Management Model and Notation™), FIBO®, and BPM+ Health™ are either registered trademarks or trademarks of Object Management Group, Inc. in the United States and/or other countries.

Invoking AWS Lambda functions

By Trisotech

Read Time: 2 Minutes

Have you created your own code in an AWS Lambda function in Python, Java, JavaScript, or C# and want to integrate it with an automated workflow created in the Trisotech Digital Enterprise Suite?

Consider a simple Python function that says hello from the region it is deployed in, returning the greeting as plain text with HTTP status 200.
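
The function itself appears in the original post as a screenshot. A minimal sketch of such a handler, assuming the standard Python lambda_handler signature and an API Gateway (REST API) proxy integration, could look like this:

  import os

  def lambda_handler(event, context):
      # AWS_REGION is set automatically by the Lambda runtime
      region = os.environ.get("AWS_REGION", "an unknown region")
      # API Gateway proxy integration expects statusCode, headers, and body
      return {
          "statusCode": 200,
          "headers": {"Content-Type": "text/plain"},
          "body": f"Hello from {region}!"
      }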

An AWS Lambda function can be easily deployed, but it is important to make sure that access to it is restricted. This is where authentication comes into the picture. In this case it is based on an API key that is assigned and provided to the consumers of the function. To use API key-based authentication, the function needs an API Gateway trigger.

API Gateway is added as a trigger to the function. It is important to select REST API as the type and API key as the security mode.

With API Gateway as a trigger, the function is assigned a URL that can be used to invoke it from outside. The URL has the following format:

https://{xxxxxxxxxx}.execute-api.{region}.amazonaws.com/default/DESSample

where xxxxxxxxxx and region are replaced with the actual values from your AWS environment.

Additionally, one API key is created automatically, and more API keys can be created in the API Gateway configuration. The API key must be provided as an HTTP header named x-api-key when invoking the function.
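
To verify the setup outside the Trisotech suite, the endpoint can be called directly. A minimal sketch using the Python requests library, assuming the function is exposed as a GET operation and with placeholder values for the URL and key, might look like this:

  import requests

  # Placeholders: use the URL assigned by API Gateway and an API key from its configuration
  url = "https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/default/DESSample"
  headers = {"x-api-key": "YOUR_API_KEY"}

  response = requests.get(url, headers=headers)
  print(response.status_code)  # 200 when the key is accepted
  print(response.text)         # plain-text greeting returned by the function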

The Trisotech Digital Modeling and Automation suites, with their low-code approach, make it trivial to orchestrate externally defined code, as long as it is exposed through a standard REST API that can be described using the OpenAPI (Swagger) standard.

The integration is done through a BPMN service task that invokes the lambda function.

The Lambda function is referenced using the Operation Library, which allows you to define where the function can be accessed, its parameters, and its security constraints. Clicking on the service task gear allows you to create a new Interface and Operation.

Name your Interface and configure it (using the pen icon) with:

  • Server URL – the server URL where this interface can be found
  • Security
    • Type: API Key
    • In: header
    • Name: x-api-key

The Security section defines the mechanism used to authenticate when invoking the service. In this case, it uses the API key defined in the API Gateway for the AWS Lambda function.

Name your Operation and configure it (using the pen icon) with:

  • Method – HTTP method that the request will use.
  • Path – context path in the server URL for the AWS Lambda function.
  • Outputs – data to be retrieved from the function.
    • result

This integration makes it possible to invoke Lambda functions written in any language, as well as more complex services exposed through a REST API, opening a vast world of orchestration and integration using your existing or newly created functions and services.

Trisotech also offers an Automation Cookbook as part of the Digital Automation Suite, containing many other recipes for integrating systems with its automation capabilities.

Standardizing BPMN Labels

By Bruce Silver

Read Time: 4 Minutes

In my BPMN Method and Style training, I show the following BPMN and ask students, “What does this diagram say?”

You really cannot tell. It says something happens and then either the process ends or something else happens. Not very informative. But if you run Validation against the rules of the BPMN spec… no errors!

As far as the spec is concerned, it’s perfect. But if the goal is to communicate the process logic, it’s useless. If we run Validation against the Method and Style rules, we get this:

Now we get 6 errors, and they all pertain to the same thing: labels. Compare that process diagram with one containing labels and other “optional” elements:

Now the diagram says something meaningful. Beyond labels, the BPMN spec considers display of task type icons, event triggers, message flows, and black-box pools to be optional. Their meaning is defined in the spec, but modelers may omit them from the diagrams. Labels – their meaning, placement, or suggested syntax – are not discussed at all. They are pure methodology.

Obviously, to communicate process logic intelligibly through diagrams, labels and similar methodological elements are necessary. That was the main reason I created my own methodology, called Method and Style, over a decade ago, which includes rules about where labels are required, what each one means, and where corresponding diagram elements must be labeled identically. The best BPMN tools, like Trisotech Workflow Modeler, have Method and Style Validation built in, and thousands of students have been trained and certified on how to use it.

Here are some of the rules related to labeling:

  • Activities should be labeled (Verb-object) to indicate an action.
  • A Message start event should be labeled Receive [message name].
  • A Timer start event should be labeled with the frequency of occurrence.
  • The label of a child-level page should match the name of the subprocess.
  • Two activities in the same process should not have the same name unless they are identical.
  • A boundary event should be labeled.
  • An Error or Escalation boundary event on a subprocess should be labeled to match the throwing Error event.
  • A throwing or catching intermediate event should be labeled.
  • If a process level has multiple end events, each end event should be labeled with the name of the end state (Noun-adjective).
  • Each gate of an XOR gateway should be labeled with the name of an end state of the previous activity. If there are two gates, the gateway may be labeled with the end state followed by a question mark, and the gates labeled yes and no.
  • Gates of an AND gateway should not be labeled.
  • Non-default gates of an OR gateway should be labeled.
  • Two end events in a process level should not have the same name. If they mean the same end state, combine them; otherwise give them different names.

One reason why the task force developing the BPMN 2.0 standard didn’t care about labels is that their focus was primarily on model execution, which depends heavily on process data. While it is possible to suggest process data and data flow in the diagrams, this is something that – in most tools – is defined by Java programmers, and as a consequence is omitted entirely in descriptive, i.e. non-executable, models. In order to reveal model meaning in descriptive models in the absence of process data, Method and Style introduced the concept of end states.

The end state of an activity or process just means how did it complete, successfully or in some exception state. End state labels, typically in the form Noun-adjective, serve as a proxy for the process data used to determine branching at gateways. For example, in the diagram above, the gate labels Valid and Invalid are by convention the end states of the preceding activity Validate request. An executable version of this process would not necessarily have a variable with those enumerated values, but the labels suggest the process logic in an intuitive way.

I’m not going to review the other Method and Style conventions and rules here. There is plenty of information about them on my website methodandstyle.com, in my book BPMN Quick and Easy, and of course in my BPMN Method and Style training. Method and Style makes the difference between BPMN diagrams that communicate the logic clearly and completely and those that are incomplete and ambiguous. BPM project team managers tell me that before Method and Style, their BPMN diagrams could be understood only by the person who created them. Standardization of the meaning and usage of diagram labels changed all that.

CMMN, the case management standard, has the same issue. For example, a plan item’s entry and exit criteria, modeled using sentries and ON-part links, require labels on both the ON-part (a standard event) and the IF-part (a data condition) to make sense, but labeling is not required or even suggested by the spec. Without labels, all we can tell is that some entry or exit condition exists. My book CMMN Method and Style suggests several labeling conventions for those diagrams, as well.

The Trisotech platform provides a key benefit to Method and Style practitioners: Method and Style rules about labels and other conventions can be checked in one click. Style rule validation not only improves model quality but reinforces the rules in the modeler’s mind. In the training, certification exercises must be free of any validation errors.

If your BPM project depends on shared understanding of BPMN diagrams, you need to go beyond the spec with labeling standards. You need Method and Style.

Follow Bruce Silver on Method & Style.

Interrupting Events in Automation

By Bruce Silver

Read Time: 2 Minutes

In my BPMN Method and Style training, we use examples like the one below to illustrate the difference between interrupting and non-interrupting boundary events:

Here an Order process with four subprocesses could possibly be cancelled by the Customer at any time. As you can see, a single physical Cancellation message from Customer is modeled as multiple message flows. That’s because the Cancellation message is caught by four different message boundary events, representing four different ways the message is handled depending on the state of the process when Cancellation occurs. Note that if the message is received during Prepare invoice, i.e., after Ship order is complete, the Order process cannot be terminated, and explanatory information is instead sent to the Customer. And this is a fine way to show, in descriptive models – i.e., non-executable BPMN – the behavior expected with exceptions like Customer Cancellation.

But as I have begun to get more involved with Automation models – executable BPMN – I am discovering that modeling exception handling in this way is not ideal. The BPMN spec says that an interrupting boundary event terminates the activity immediately. As far as the process engine is concerned, the instance just goes away, along with all its data. A user performing some task inside a cancelled subprocess is unaware of this until the task is complete; the Performer gets no notification that their work on this Order is for naught. That’s the main problem. There is also the problem that some actions performed in the activity prior to cancellation may need to be undone. That can be done on the exception flow, so the main difficulty, as I see it, is notifying the user performing some task when the Cancellation occurs.

Let’s focus on the first subprocess, Enter order. In this simplified example, the Order is first logged into the Order table, then inventory is checked for each Order item, reserving the Order quantity for each one. If some items are out of stock, we need to Update Order table. An Order Acknowledgment message is sent to the Customer containing the OrderID, a unique identifier in the system. The Customer would need to provide this in any Cancellation message. These are all automated activities, occurring very fast. Then a User task Check credit authorizes the purchase for this Customer. That could take a while, so if the Customer decides to cancel shortly after submitting the order, it’s likely going to occur during Check credit. And if that happens, we don’t want the user performing Check credit to waste any more time on this Order, as would happen with an interrupting boundary event.

So instead we model it as a non-interrupting event subprocess. We could do it with a non-interrupting boundary event on the subprocess, but this way I think is cleaner. Now if the Cancellation message is received, before we terminate Enter order we do a bit of cleanup, including Update Order table to show a status of “Cancelled by Customer” and notifying the Check credit performer by email to stop working on this instance, after which we terminate the subprocess with an Error end event. The exception flow from this Error event in the parent level allows additional exception handling for the Cancellation.

This exception handling requires some additional information about the process instance. We need to know who the task performer of Check credit is. This is platform-dependent, but on the Trisotech Low-Code Automation platform you can use a FEEL extension function to retrieve the current task performer. In a more complex subprocess, we may also need to know its state when the cancellation occurred, including which tasks have been completed and may need to be undone. For this, Trisotech has provided additional extension functions as well. Cleaning up when a process instance is cancelled in-flight can be messy, but it’s still within the reach of Low-Code Business Automation.

Follow Bruce Silver on Method & Style.
