Sandy Kemsley’s Vlog - Process Pain Points: Pended Processes

Process Pain Points: Pended Processes

By Sandy Kemsley

Video Time: 10 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here for the Trisotech blog with the third in a series of videos on finding and dealing with process pain points. Although this will be of interest to business analysts who are figuring out the as-is and to-be processes within an organization, I’ll also touch on implementation issues that will be of interest to my more technical listeners.

Now, in the first video I talked about finding where people are using spreadsheets and email as workarounds, because this indicates that the existing systems aren’t doing everything that the business needs. In the second part I looked at handoffs: when you trace a business process from start to finish, you start to find the inefficiencies and errors that can happen when work is handed off between people and departments.

Now, I’m going to cover pended processes: in other words, what happens when somebody is in the middle of working on something and decides that they need to pend it for a later time. Maybe they’re not able to complete the task right at the moment, or they have to step away from their desk. This sounds deceptively simple, but after decades in the trenches of process automation, I can assure you that improperly managed pended items are like the Bermuda Triangle for processes.

Now, I’m not talking about automated pends that are automatically released, such as a process that waits for an overnight batch to run before continuing and then gets released first thing in the morning. This is about manual pends, where someone who’s assigned an activity in a process just isn’t able to do it right then.

Now, there are a number of different reasons why someone needs to manually pend a process.

First of all, the activity might be time-consuming, and the worker doesn’t have time to complete it before they leave for the day or step out of the office. In that case it’s a pretty simple hold situation, much like a file that you would put aside on your desk and then pick up first thing the following day.

Secondly, they might not have the skills required to complete the activity that’s been assigned to them, and they need to loop someone else in to help. If the system that they’re using doesn’t allow them to reassign the work to someone who does have the skills, then they might need to pend it while they wait for the help that they need.

Thirdly, they might not have the information required to complete an activity, and they’re waiting for something to arrive. This could come from a colleague, a customer, or a business partner, and it could be a document or a phone call. They have to set the activity aside, wait for the information to come in, and then pick it up again and complete it.

Now, pends can be handled in a couple of different ways. Sometimes the worker has a list of work assigned to them, in a sort of inbox like you would have with email: they can just close the activity and it will stay there in their own work queue until they decide to come back to it. Or workers might be working through items in more of an assembly-line fashion, what we would call push workflow, where they have to finish one item before they start on the next. In that case, if they’ve got something that they’re actively working on and they’re not able to finish it, they have to send it somewhere in order to pend it; otherwise they won’t be able to move on to the next thing in their work queue. So either they reassign it to another worker, or they put it in a personal hold queue that they will go back to later, or they put it in some sort of shared queue for items that require follow-up, possibly accessible by anybody on their team.

Now, what we find is that once items get pended, a number of problems can occur, and in general these fall into two different categories.

First of all, how is the process located and unpended? In other words, how does it get released and show back up in somebody’s work queue so that they can work on it again? There could be a timeout that makes it pop back into somebody’s queue. The worker might have to search for the process manually and then release it back to their work queue, in which case they have to know to search at a specific time. There might be reminders sent to the worker, or to someone else, to go and search for the item. Or there could be some other information that arrives asynchronously and automatically rendezvouses with the process, causing it to be released back to the work queue. So there are a number of different ways that the process might be located and unpended as a natural part of how the work gets processed.
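To make those release paths concrete, here’s a minimal sketch in Python. The item structure and trigger names are hypothetical, not taken from any particular BPM product: the point is that every release should record how it happened, so you can later see which release paths actually fire and which pends just sit.

```python
from enum import Enum, auto

class ReleaseTrigger(Enum):
    """The ways a pended item can get back into a work queue."""
    TIMEOUT = auto()     # pops back automatically when a timer expires
    MANUAL = auto()      # a worker searches for it and releases it
    REMINDER = auto()    # a notification prompts someone to go search
    RENDEZVOUS = auto()  # an asynchronous arrival matches and releases it

def release(item: dict, trigger: ReleaseTrigger) -> dict:
    """Put a pended item back in 'ready' state and record how it got
    there, so release-path statistics can be reported on later."""
    item["status"] = "ready"
    item["released_by"] = trigger.name
    return item
```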

Now, the second thing we need to consider is what happens when that doesn’t happen. What if no one searches for and releases a process that’s been pended, or nothing rendezvouses with it? Maybe the original worker went on vacation, or quit their job, or just forgot about it, or the item that was supposed to come in and match with it never arrived. Is the work automatically released back to them after a certain period of time, in other words, is there a condition where it times out? Maybe somebody who’s handling escalations, or a manager, needs to review the contents of everybody’s pend queues on a regular basis, and again we have the issue of what happens if they forget to do that. Or do the items just sit there forever, or until an irate customer calls in to ask: “what happened to my transaction?” and somebody goes searching for it?

A lot of companies have tried to solve the pend problem by just stating that all work is one-and-done: there are no pends, you have to finish the work while you have it in front of you. But that’s not always a feasible alternative; as I mentioned, the reasons for pending might have to do with missing information or missing skills, so there’s always going to be a scenario where somebody can’t finish a task. And if you don’t let them pend it, they’ll find a way! I’ve seen situations where pends were not permitted, but you could send the item to your manager. That policy changed really quickly, because all of a sudden the managers had a million things in their inbox that had nothing to do with them and didn’t require their attention. The workers just had to get the items off their plate so that they could move on to the next one, and that was the only way they had of doing it.

So given that you can’t avoid pends in all scenarios, you need to properly address how and when pended items are released. The best way to start is to build these pends into your process models and then look at how they impact your process performance depending on how they’re handled. If there are timeouts, can you run a simulation, or use some process mining data, to look at what actually occurs: what happens if items sit there for 24 hours, and when should they be released back to the workers? If you do have historical data showing how long items normally sit in pend queues, that’s really helpful for handling the edge cases that get managed by pends.

Now, if you’re pending for some sort of asynchronous event that’s external to the process, like missing information or documentation that has to come in, whether from inside your organization or outside with the customer, the best possible solution is to automatically rendezvous the inbound item with the pended item. The rendezvous releases the item back into the work queue, and then it pops up for somebody to complete. This works really well when the item is pended because it’s missing a document from a customer, and the inbound document can be matched up to the pended item based on a customer number or a transaction number. You see this a lot in onboarding situations, where you have to wait for the customer to upload proof of ID, financial information, or other documentation.
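A minimal sketch of that matching logic in Python, assuming hypothetical field names like customer_id and doc_type (a real system would match on whatever keys your documents actually carry):

```python
def rendezvous(pended_items, inbound):
    """Try to match an inbound document to a pended item on shared keys
    (hypothetical here: customer_id plus the document type the item is
    waiting for); on a match, release the item back to the work queue."""
    for item in pended_items:
        if (item["customer_id"] == inbound["customer_id"]
                and item["waiting_for"] == inbound["doc_type"]):
            item["status"] = "ready"        # back in the work queue
            item["attachment"] = inbound    # carry the document along
            return item
    return None  # no match: route the inbound item to manual review
```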

Now, if you can’t match and rendezvous automatically, somebody needs to be alerted when an inbound event might match a pended item, then search for that item and release it manually. You might have an admin who does this with all of the inbound correspondence: they take every piece of correspondence, see where it belongs, look through everyone’s pended items, and match up and release the ones that belong together. Or maybe a worker gets a notification that they have inbound correspondence and then has to search for the matching item. In either case the item is manually released back to the work queue, and more critically, it’s not released if nobody searches for it, so that can become a bit of a problem.

Now, you can also have the situation, which I mentioned previously, where items are pended for a period of time. That could be because your process management system doesn’t know how to do an asynchronous rendezvous and this is the best it can do, or maybe it’s legitimately a timed pend, like waiting for an overnight process, or a reminder to call the customer back on a specific date and time to get some information. When the timer expires, the item comes back into the work queue. You probably also need to give the worker the opportunity to find an item that’s sitting there waiting for a timer and say: “release it now, because the customer called me and I need to work on it right now”. So you have to look at all of these scenarios: something in a timer state, something in a pending queue, something in an exception queue of some sort.
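The timer case can be sketched in Python like this (again, all names are hypothetical), including the manual early-release override for when the customer calls before the timer fires:

```python
from datetime import datetime

class TimedPend:
    """A timed pend: the item comes back when the timer expires, but a
    worker can also pull it back early, e.g. when the customer calls
    before the scheduled follow-up."""
    def __init__(self, case_id: str, release_at: datetime):
        self.case_id = case_id
        self.release_at = release_at
        self.released = False

    def due(self, now: datetime) -> bool:
        return not self.released and now >= self.release_at

    def release_now(self) -> None:
        """Manual override: make the item available immediately."""
        self.released = True

def sweep(pends: list, now: datetime) -> list:
    """Scheduler pass: release every timed pend whose timer expired."""
    expired = [p for p in pends if p.due(now)]
    for p in expired:
        p.release_now()
    return expired
```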

Now, finally, you have the case where you’re just relying on workers to remember to review their pend queue on a regular basis and release items that are ready to work on. If you’re allowing this model for pends, you have to be absolutely sure that more than one person is able to access everyone’s pend queues, and that escalation notifications are being sent to multiple people as the pend time increases. Otherwise, if someone is unable to check their pends or leaves their job unexpectedly, there’s your Bermuda Triangle of processes.
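An escalation rule like that can be sketched in a few lines of Python; the ladder of thresholds and roles here is purely illustrative, not a recommendation for your process:

```python
from datetime import datetime, timedelta

# Hypothetical escalation ladder: who gets notified at each pend age.
ESCALATION_LADDER = [
    (timedelta(days=1), ["owner"]),
    (timedelta(days=3), ["owner", "team_lead"]),
    (timedelta(days=7), ["owner", "team_lead", "manager"]),
]

def escalation_recipients(pended_at: datetime, now: datetime) -> list:
    """Widen the notification list as the pend time increases, so that
    a forgotten pend queue is never visible to only one person."""
    recipients = []
    age = now - pended_at
    for threshold, who in ESCALATION_LADDER:
        if age >= threshold:
            recipients = who
    return recipients
```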

Now, I’m going to wrap up this series next time with an episode on how escalations are handled.

That’s all for today. You can find more of my writing and videos on the Trisotech blog or on my own blog at column2.com. See you next time.


Sandy Kemsley’s Vlog - Process Pain Points: Handoffs

Process Pain Points: Handoffs

By Sandy Kemsley

Video Time: 5 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here for the Trisotech blog with the second in a series of videos on finding and dealing with process pain points.

Although this will be of interest to business analysts who are figuring out the as-is and to-be processes within an organization, I’ll also touch on some implementation issues that will be of interest to my more technical listeners.

Now, in the previous video I talked about finding the spreadsheets and email in existing business processes in order to locate process pain points. This gives us a starting point for process improvement, since having to work around existing systems with ad hoc methods usually means there’s a part of the process that’s not being addressed in a consistent fashion.

Today, I’m going to look at another pain point in business processes: failed handoffs as work moves between people and between departments. As I mentioned previously, I tend to analyze an organization’s processes by walking around and talking to the people who are actually doing the work, not just looking at documentation and process diagrams.

As a starting point, I usually try to follow a business process from beginning to end. And if you’ve been listening in the past, you’ll know I’m a huge proponent of understanding and analyzing your end-to-end processes.

If the process is triggered by an inbound paper document from a customer, I start at the mail room; if it’s triggered by an online order form, I start at order entry; if it’s triggered by an internal timer that kicks off an overnight reconciliation process every day, I start with that. Then I follow it through to the conclusion of the process. For a customer-facing process, that could be when the customer receives the goods or services, or their complaint is resolved, or whatever it is that they called or emailed or sent in a form about. For purely internal processes, it could be when the account books are reconciled or an internal audit is complete and documented.

Now, following business processes through from beginning to end lets me discover where things can go wrong when handing off work between people and departments. This could be a situation where someone needs to manually re-enter the customer’s order data into an internal system because there’s no integration with the online ordering system (this happens more often than you think, even today). Or the information passed from an operational group to an audit group is missing some data that’s required to complete the audit, and the audit group has to request that data from the operational group and manually integrate it before they can do their work. Or different departments don’t use the same workflow system, and we end up back with spreadsheets and email to communicate work between departments.

Now, handoffs are typically one of the worst spots for inefficiencies and even degradation of quality in processes, and they always have been, even in purely manual processes. Whenever work passes from one person to another, there’s an opportunity for information to be missed, or even for tasks to get completely dropped or hidden away somewhere. Think about the last time you called into your bank or your mobile phone provider with some sort of service request or problem: how many times did you have to repeat the description of your problem to different people, and even repeat your personal information so they could authenticate you multiple times, as your call got handed off from one person to another? That’s because they do a really bad job of handing off work, namely your call, between people and between departments. In part, this is why one-and-done customer service interactions gained in popularity: there were no handoffs, or at least a minimal number, so there weren’t the opportunities for inefficiencies and dropped tasks at the handoff points.

So, looking at the entire end-to-end processes in your organization, paying special attention to what happens every time a piece of work is handed off, is going to let you find a lot of your process pain points. The funny part of looking at handoffs as a process pain point is that many failed handoffs can be directly correlated with local optimization efforts. In other words, a single department gets enthusiastic about optimizing their own internal processes without thinking about where they fit in the end-to-end process: they don’t think about who needs to provide them with inputs, and more importantly, they don’t think about who needs to consume their outputs. This can result in critical business information being locked in their departmental line-of-business systems, where it’s inaccessible to other departments, even those that need to do downstream work using that same information in order to complete the end-to-end process.

The moral of the story: if you don’t have everyone thinking about where they fit in the end-to-end business process, and worried as much about the metrics for the entire process as they are about their own departmental metrics, then you’re almost certainly going to be experiencing some pain at the handoff points.

That’s all for today. You can find more of my writing and videos on the Trisotech blog or on my own blog at column2.com. See you next time.


Sandy Kemsley’s Vlog - Process Pain Points: Spreadsheets and Email

Process Pain Points: Spreadsheets and Email

By Sandy Kemsley

Video Time: 6 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here for the Trisotech blog with a new series of videos on finding and dealing with process pain points. Although this will be of interest to business analysts who are figuring out the as-is and to-be processes within an organization, I’ll also touch on some implementation issues that will be of interest to my more technical listeners.

Quite a bit of my work over the years has been helping companies with implementing business process improvement and automation within their organization. In this context, I’ve done everything: from the business analyst role of figuring out how the processes work and how they should work, to the architect’s role of designing how systems and processes fit together, to the developer’s role of how this is actually implemented and rolled out. In this range of activities, the upfront part of determining the problems with the current processes, and how best to improve and automate them, is often the trickiest part. It’s as much an art as it is a science, and it requires a combination of business knowledge, technology know-how, and a process improvement mindset.

I’m often asked how I go about figuring out that part of improving an organization’s processes: in other words, how do I find the organization’s process pain points and determine what changes to make to ease that pain? My answer almost always starts with spreadsheets and email. I tend to analyze an organization’s processes by walking around and talking to the people who are actually doing the work, not just looking at documentation for the systems or processes or procedures. And every time I sit down beside an insurance claims manager, a customer service rep, or a loans officer, I ask the same question: “show me your spreadsheets and email”.

This might sound a little simplistic, but think about it. Business processes are made up mostly of calculations, decisions, and workflow. Every company has line-of-business systems that their employees work with: in insurance there are underwriting, administration, and claims systems; in customer service there are CRM systems; in lending there are loan origination and administration systems. These systems almost always do the calculation part of business processes, and they may also do some of the decision and workflow part, but they almost always fall short of what people need them to do in those latter two areas.

So instead, there will be a collection of other methods that happen outside of the line-of-business system to do some parts of the work, to track certain activities, or to hold data that isn’t kept in the main systems. The most common implementations are the old office automation standbys: spreadsheets for storing data and doing calculations, email for workflow.

Now, in many cases these spreadsheets and emails aren’t documented as part of the standard operating procedures, and this creates significant risk for the company. For example, what the audit reports state may not be what is actually happening: calculations that impact business decisions might be made in a spreadsheet where you’re not even sure who can edit the formulas, workflow and oversight are dependent on people knowing who to include on an email chain, and there may be no audit trail of who touched something at any particular point.

Departments could even be performing entirely new processes that were never considered part of their responsibility, and just doing it in spreadsheets and email, outside of their line-of-business systems and outside of the purview of IT. Most likely, the line-of-business systems don’t have sufficient flexibility to make the changes required by the business as quickly as they’re needed, or there’s going to be a significant amount of coding by IT to make that happen. There could be a new business capability that’s been added that the system just can’t be enhanced to support. There could be work patterns or people in the workflow that aren’t supported by the system, such as needing case management instead of one-and-done transaction processing to handle some new functionality or capability, or work distribution that’s more complex than the simple round-robin assignment that might be in the line-of-business system.

So there are all sorts of scenarios where the line-of-business system just can’t do what you need it to do, in terms of calculations, but more likely in terms of decisions and workflow.
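As a tiny illustration of the work distribution gap, here’s a Python sketch: round-robin is what a line-of-business system typically offers, while skill-based, load-aware assignment (sketched here with hypothetical worker records) is the kind of rule people end up re-creating in spreadsheets when the system can’t express it.

```python
import itertools

def round_robin(workers):
    """What a line-of-business system typically offers: hand out work
    to each worker in turn, regardless of skills or load."""
    return itertools.cycle(workers)

def skill_based(workers, item):
    """The rule people actually want: the least-loaded worker who has
    the skill the item needs (worker records here are hypothetical)."""
    able = [w for w in workers if item["skill"] in w["skills"]]
    return min(able, key=lambda w: w["load"], default=None)
```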

Now, I’ve talked a lot in past videos about model-driven development, and the need for flexible processes and decisions is where model-driven development really saves the day. If you automate your main business processes with systems that use graphical process and decision models, it’s going to be a lot easier to make changes and enhancements to those core systems, and avoid the need for spreadsheets and email around them.

Now, finding that spreadsheet and email usage in the core business processes doesn’t necessarily tell you what the new process should look like; it just shines a spotlight on where the problems are. In a lot of cases, the spreadsheets and emails were added in an incremental or ad hoc fashion, and might not actually be the best way to be doing that process: it’s just the way that people devised along the way. But once you have a spotlight on the areas where the problems exist, you know where to focus your energy, looking at how the work is being done now and what needs to change in order to do it more effectively.

In the next couple of videos, I’m going to look in particular at some exception cases that can happen within mainstream line-of-business processing, which most often result in things happening in spreadsheets and email. So tune in, and you’ll see a drill-down into some of the areas where this is of particular interest.

That’s all for today. You can find more of my writing and videos on the Trisotech blog or on my own blog at column2.com. See you next time.


Sandy Kemsley’s Vlog - AI and Automation: Friends or Foes?

AI and Automation: Friends or Foes?

By Sandy Kemsley

Video Time: 8 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here for the Trisotech blog to continue with last time’s discussion of Artificial Intelligence and BPM. Actually, I’m going to address the larger topic of Process Automation in general, how this impacts businesses and people, and then how AI fits into this.

We see a lot of press now about the dangers of AI: it’s going to take our jobs, it’s going to take over our lives, we’re all going to become slaves to the robots… Okay, that’s mostly a bit of an exaggeration, but there is a lot of doom and gloom around the topic of AI. This isn’t a new phenomenon, though, and it’s not just about AI. A recent book on this topic, “Blood in the Machine”, says in its promo: “the most urgent story in modern tech begins not in Silicon Valley, but 200 years ago in rural England, when workers known as the Luddites rose up rather than starve at the hands of factory owners who were using automated machines to erase their livelihoods”. In short, the alarm currently being raised over AI isn’t a new phenomenon.

It happens with pretty much any new technology. In most cases the technology itself is not the problem; it’s the social constructs around how it’s used, and the rights of the workers and other people who are impacted by the technology. I’m not any sort of expert in those fields, but I’ve always seen fears about automation in the business process projects that I’ve been involved in. I started in process automation back in the imaging and workflow days, several decades ago. At that time there were people whose job it was to push carts full of file folders around between different desks, based on handwritten routing slips on the folders. The routing slips and the mail carts were the workflow of that age, and the jobs of the people who filled out those routing slips and pushed those carts around were definitely impacted by the imaging and workflow projects that we implemented: they took those file folders and made them electronic, and took away most of those carts that traveled from one desk to another.

Now, after that we had years of “business process re-engineering”, which in many cases was just an excuse for companies to downsize their workforce, but it was also to some degree enabled by business automation. Processes that were previously very manual and required a lot of human decision points were suddenly partially or even fully automated.

The last several years have fine-tuned that process automation by integrating in decision management, which is a huge factor in reducing human decision-making in business processes, so process and decision automation just keeps getting more intelligent. We’ve also integrated many other systems, through direct calls between systems or robotic process automation, which means there are a lot fewer people doing copy-and-paste or rekeying of information between different systems. That also means there are many fewer errors due to copying and rekeying data, and it takes less time, so the automation gets faster and better too.

Now, does this mean that some of the people involved in those processes have radically different jobs now, or maybe even had to find a different job altogether? Absolutely! Does it mean that customers are seeing different levels of service and quality, both improvements and failures? Of course! And does it mean that companies are succeeding or failing financially based in part on their decisions about when and how to deploy automation? Well, yeah! We saw that in excruciating detail during the business disruptions of the pandemic, which I’ve talked about in previous videos.

What I’m trying to say is that in most cases the automation technology itself isn’t inherently good or bad. It can result in job losses, but it can also improve job satisfaction by reducing the boring routine work. It can help customers get what they want faster through self-service, or it can create a frustrating customer experience when something goes wrong that’s not accounted for in the automation. It can make a company more profitable and efficient, but it can also backfire and create a customer satisfaction nightmare.

I think we’ve all seen examples of both the positive and negative sides of all of these: for the people who work with the technology, for the customers who are impacted by it, and for the companies who bring the new technology in. This is true for most types of business automation that we deal with today: BPM systems, decision management, process mining, RPA, and yes, how AI is used with all of these.

I don’t think that people on the customer side of business want to return to the pre-automation days, for the most part. Remember the bad old days, when a straightforward business transaction like getting a car loan or processing a simple insurance claim could take weeks or even months? Automation is also what gives us a lot of online self-service for customers: you can now buy office supplies with a couple of clicks, make a stock market trade in your pajamas at home, or renew your fishing license on the weekend. All of these things are possible because of automation.

Now, if you look at the business side of these transactions, they don’t want to return to the mountains of paper files and the manual processes. They also don’t want to return to having critical business procedures exist primarily as folklore in the heads of people who may or may not stay with the company in the long term. From a purely practical standpoint, there’s no putting the automation toothpaste back in the tube, any more than we’re going to go back to handloom textiles from the pre-Luddite days. Organizations are going to use automation, or not, for their own reasons, and there will be both good and bad things that happen because of that. As consumers, workers, business owners, and citizens, we have a say in both the positive and negative impacts of automation.

Now, as I mentioned in my last video, I believe the current doomsaying about AI is a bit overblown. AI isn’t going to completely take over all of our business processes, any more than the previous generations of technology did. AI can increase the complexity of things that can be fully automated, but that’s always been a constantly changing threshold with every new generation of technology. The same could be said for decision management, and for business process management in general: these things always make it possible to automate more and more complex things, the more of these technological components you bring in.

Now, where automation technology, including AI, can really help and really add value is when it provides guidance to knowledge workers to help them do the best possible job, without replacing those roles. It’s not just about taking the repetitive low-skill jobs and automating them; it’s also about letting lower-skilled workers work on more complex jobs because they have some amount of automated guidance, and they’ll also learn as they work, without risking violating the company’s policies or procedures. You can still have people in the processes, you can have some things that are automated, and you can have the people who remain in the process be guided by the technology, whether AI, decision management, or business process management, to make sure that they’re doing the right thing at the right time. And given that a lot of industries have a shortage of skilled knowledge workers, letting them be productive earlier is a good thing for everybody involved.

That’s all for today. You can find more of my writing and videos on the Trisotech blog or on my own blog at column2.com. See you next time.


Sandy Kemsley’s Vlog - AI and BPM

AI and BPM

By Sandy Kemsley

Video Time: 8 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here for the Trisotech blog to talk about the latest hot topic in BPM: Artificial Intelligence.

Now, I’ve been at a couple of conferences in the last month, and I’ve had a few briefings with vendors, and there’s a lot of interest in this intersection between AI and BPM. But what does that mean? What exactly is the intersection? There isn’t just one answer, because there are several places where AI and process management come together.

Now, the dream, or in some people’s opinion the nightmare, is that AI just takes over processes: it figures out what the process should be, then it automates and executes all of the steps in the process. The reality is both less and more than that. So, let’s look at the different use cases for AI in the context of BPM, starting at the beginning with process discovery and design, since there’s quite a bit that AI can do in this area as an assistive technology. At this point it might not be possible to have AI completely design processes without human intervention, but it is possible to have AI act as a sort of co-pilot for authoring process models or finding improvements to them.

There are a couple of different scenarios for this.

First of all, you could have a person just describe the process that they want in broad terms, and have generative AI create a first version of that process model for them. Then the human designer can make changes directly or add additional information, the AI can make further refinements to the process model, and so on. Now, the challenge with using generative AI in this scenario is that you need to ensure that the training data is relevant to your situation. This might mean that you need to use private AI engines and data sources that are trained on your own internal data, or at the very least on data that’s specific to your industry, in order to ensure reasonably good results.

Now, the second process modeling scenario is when there are logs of processes in place, like we would use for process mining, and we’ve talked about process mining in a previous podcast. In that case, there are possibilities for having AI look at the log data, along with other enterprise and domain data, and, using process mining and other search-based optimization, suggest improvements to the process: for example, adding parallelism at certain points, automating certain steps or decisions, or ensuring that some activities are included for regulatory or conformance reasons. Again, there need to be some checks and balances on the training data that’s used for the AI, to ensure that you’ve included the processes and regulations that pertain to your business.
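As a rough, vendor-neutral sketch of the kind of log analysis this implies (the names and the simplistic directly-follows heuristic here are illustrative, not any product's actual algorithm), consider flagging activity pairs that appear in both orders across traces as candidates for parallelism:

```python
from collections import defaultdict

def directly_follows(traces):
    """Count how often activity a is immediately followed by activity b."""
    counts = defaultdict(int)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[(a, b)] += 1
    return counts

def parallel_candidates(traces):
    """Pairs observed in both orders may be order-independent,
    i.e. candidates for running in parallel."""
    df = directly_follows(traces)
    return {frozenset(pair) for pair in df if (pair[1], pair[0]) in df}

# Example event log: each trace is the ordered activities of one case.
log = [
    ["receive", "check_credit", "check_fraud", "approve"],
    ["receive", "check_fraud", "check_credit", "approve"],
]
print(parallel_candidates(log))  # the two checks appear in both orders
```

Real process mining tools use far more robust discovery algorithms, but the underlying idea of inferring structure from ordering relations in the log is the same.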

Now, in both of these cases, there’s the expectation that a person who’s responsible for the overall process operation, like the process owner, will review the models that are created or revised by the AI before they’re put into production. It’s not just an automated thing where the AI creates or modifies a model and it’s off and running. Now, the same types of AI and algorithms that we would use for process improvement, based on process mining and other domain knowledge, can also be used in a scenario where AI again acts as a co-pilot, but this time for the people doing the human activities in a process: the knowledge workers. They can ask complex questions about the case that they’re working on, they can be offered suggestions on the next best action, and they can have guardrails put in place so that they don’t make decisions at a particular activity that would violate regulations or policies.
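A minimal sketch of what such a guardrail might look like in code, assuming hypothetical policy rules and case fields (none of this reflects a specific product):

```python
# Hypothetical policy guardrail: an AI-suggested action is only offered to
# the knowledge worker if it passes every policy rule.
POLICY_RULES = [
    ("refunds over the limit need approval",
     lambda case, action: not (action == "issue_refund" and case["amount"] > 500)),
    ("closed cases cannot be modified",
     lambda case, action: case["status"] != "closed"),
]

def allowed_actions(case, candidate_actions):
    """Filter suggested next-best actions through the policy guardrails."""
    ok = []
    for action in candidate_actions:
        if all(rule(case, action) for _, rule in POLICY_RULES):
            ok.append(action)
    return ok

case = {"amount": 750, "status": "open"}
print(allowed_actions(case, ["issue_refund", "send_apology", "escalate"]))
# "issue_refund" is filtered out because the amount exceeds the limit
```

In practice these rules would come from a decision model or policy engine rather than being hard-coded, but the shape of the check is the same: suggestions pass through the rules before the worker ever sees them.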

Now, we already see a lot of decision management and machine learning applied in exactly this situation, where a knowledge worker just needs a little bit of an assist to help them make complex decisions or perform more complex activities. And adding AI to the mix means that we can have even more complex automation and decision-making to support knowledge workers as they do their job. So, the ultimate goal is to ensure that the knowledge workers are making the best possible decisions at activities within processes, even if the environment is changing: maybe regulations are changing, or procedures are changing. And then also to support less skilled knowledge workers so that they can become more familiar with the procedures that are required, because they have a trusted expert, namely the AI, by their side coaching them on what they should be doing next.

Now, the last scenario for AI in the context of processes is to have a completely automated system, or even just completely automated activities within a process, that used to be performed by a person. The more times that an activity is performed successfully, and the more data that’s collected about the context and the domain knowledge behind that decision, the more likely it is that AI can be trained to make decisions and do activities of the same complexity and with the same level of quality as a human operator. We also see this with AI chatbots, which interact with other parties in processes, such as providing customer service information. Previously, a knowledge worker might have interacted with a customer by phone or by email; now we’re seeing a lot of chatbots in place for customer service scenarios. A lot of them are pretty simple and don’t really deserve to be called AI: they’re just looking for simple words and providing some stock answers. But what generative AI is starting to give us in this scenario is the ability to respond to more complex questions from a customer, and leave the human operators free to handle situations that can’t be automated, or rather can’t be automated yet.
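The triage pattern described here, stock answers for the simple questions and human (or generative-AI) handling for everything else, can be sketched as follows; the keywords and answers are invented for illustration:

```python
# Hypothetical chatbot triage: answer simple FAQs from stock responses,
# hand anything else off to a human agent (or a generative model, if trusted).
STOCK_ANSWERS = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "returns": "You can return any item within 30 days.",
}

def handle_message(message):
    """Return (handler, response): 'bot' for a stock answer, else 'human'."""
    text = message.lower()
    for keyword, answer in STOCK_ANSWERS.items():
        if keyword in text:
            return ("bot", answer)
    return ("human", "Routing you to an agent...")

print(handle_message("What are your hours?"))
print(handle_message("My order arrived damaged and I need help"))
```

The simple keyword bots mentioned above are essentially the `STOCK_ANSWERS` loop; what generative AI changes is the quality of the fallback path, so fewer messages need to reach the human handler.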

Now, currently I don’t think we need to worry about AI completely taking over our business processes. There are lots of places where AI can act as a co-pilot to assist designers and knowledge workers to do the best job possible, but it doesn’t replace their roles: it just gives them an assist. Now, a lot of industries don’t have all the skilled people that they need in both of these areas, designers and knowledge workers, or it takes a long time to train them, so letting the people who are there be more productive is a good thing. Using AI to make the few skilled resources we have more productive is beneficial to the industry and to customers. Now, as I noted earlier, the ability of AI to make these kinds of quality decisions and perform the types of actions that are currently being done by people is going to be heavily reliant on the training data that’s used for the AI. So, you can’t just use a public chatbot, like ChatGPT, for interacting with your customers; that’s not going to work out all that well. Instead, you want to be training on some of your own internal data as well as some industry-specific data.

Now, where we do start to see people being replaced is where AI is used to fully automate specific activities, decisions, or customer interactions within a process. However, this is not a new phenomenon: process automation has been replacing people doing repetitive activities for a long time. All that we’re doing by adding AI is increasing the complexity of the type of activity that can be fully automated. The bar has been creeping up: we went from simple automation, to more complex decision management and machine learning, and now to full AI in its current manifestation. So, we just need to get used to the idea that this is another step in the spectrum of things that we’re doing by adding intelligence into our business processes.

Now, are you worried about your own job? You could go and check out willrobotstakemyjob.com, or just look around at what’s happening in your industry. If you’re adding value through skills and knowledge that you have personally and that are very difficult to replicate, you’re probably going to be able to stay ahead of the curve, and you’ll just get a nice new AI assistant who’s going to help you out. If you’re doing the same thing over and over again, however, you should probably be planning for when AI gets smart enough to do your job as well as you do.

That’s all for today. You can find more of my writing and videos on the Trisotech blog or on my own blog at column2.com. See you next time.


Sandy Kemsley’s Vlog - Future-Proofing Your Business With BPM
Vlog

Future-Proofing Your Business With BPM

By Sandy Kemsley

Video Time: 8 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here for the Trisotech blog with a new topic now that we’ve finished my series on best practices in business automation application development. Today, I’m going to talk about future-proofing your business with process automation technologies such as BPM.

Now, a little over three years ago, everything was business as usual. Organizations were focused on new competitors, new business models, new regulations, but it wasn’t a particularly disruptive time. Then the pandemic happened, and that was disruptive! So, supply chains, customers, business models, everything changed dramatically in some cases.

Now, that’s not news by now, of course, but it’s become obvious that it’s not enough to have shifted to some new way of doing things in response to a one-time event. Companies have to become more easily adaptable to frequent disruptions, whether they’re technological, societal or environmental, or they’re just not going to exist anymore. In many cases this means modernizing the very technological infrastructure that supports the business.

So how is your business going to adapt to change? Both the changes that have already happened and the unknown changes in the future? There’s a lot that falls under the umbrella of modernization, and you need to look at whether you’re doing just enough to survive, or whether you’re taking advantage of this disruption to thrive and actually outgrow your competition.

I see three ways that companies have been reacting to disruption:

1

So you can support your existing business which is basically adding the minimum amount of technology to do the same things that you were doing before. This is purely a survival model, but if you have a unique product or service or very loyal customers, that might be enough for you.

2

You can improve your business by offering the same products or services but in a much better way. This will give you better resilience to future disruptions, it improves customer satisfaction and it shifts you from just surviving to thriving.

3

You can innovate to expand the products and services that you offer, or move into completely new markets. This is going to let you leapfrog your competition and truly thrive, not just as we emerge from the pandemic, but in any sort of future disruption that we might have.

More than managing your business processes

So I mentioned BPM, but this is about more than just managing your business processes. There’s a wide variety of technologies that come into play here and that really support future-proofing your business: process and decision automation, intelligent analysis with machine learning and AI, content and capture, customer interactions with intelligent chatbots, and cloud infrastructure for access anywhere, anytime.

So you have to look at how to bring all of those together, and just understanding how all of those fit is like an entire day’s lecture on its own, but you probably have a bunch of them in use already. Let’s look at a few examples of this support/improve/innovate spectrum that I’ve been talking about, how we’re dealing with disruption, and what it means for future-proofing your business. Supporting your existing business is a matter of just doing what you can to survive, and hoping that either you can keep up or that things will go back to normal. Basically you’re doing the same business that you always were, but with maybe a bit of new technology to support some new ways of doing things.

But let’s go a little bit beyond surviving disruption, which you might do by cobbling together something to support your existing model. The next step is to look at disruption as an opportunity to thrive. You still want to be in the same business, but you embrace new technologies and new ways of doing things. This really pushes further into looking at customer expectations: adding self-serve options if you don’t already have them, and then coupling that with intelligent automation of processes and decisions. So, once you’ve added intelligence to your business operations to let them be done mostly without human intervention, a customer can kick off a transaction through self-service and see it completed almost immediately by intelligent automation. Same business, but a better way to do it: more efficient, faster, more accurate, better customer satisfaction.

Now, this is also going to be helped by having proper business metrics that are oriented towards your business goals. With more automation, data about how your operation is working is captured directly, and that feeds directly into the metrics. You can then use those metrics to guide knowledge workers so that they know what they should be doing next, and also to understand where customer satisfaction stands and how you can improve it.
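For example, a cycle-time metric computed directly from captured process events might look like this sketch (the event structure and values are hypothetical):

```python
from datetime import datetime

# Hypothetical captured events: (case id, activity, timestamp).
events = [
    ("c1", "start", datetime(2024, 1, 1, 9, 0)),
    ("c1", "complete", datetime(2024, 1, 1, 9, 45)),
    ("c2", "start", datetime(2024, 1, 1, 10, 0)),
    ("c2", "complete", datetime(2024, 1, 1, 11, 30)),
]

def mean_cycle_time_minutes(events):
    """Average elapsed time from each case's start to its completion."""
    starts, ends = {}, {}
    for case, activity, ts in events:
        (starts if activity == "start" else ends)[case] = ts
    durations = [(ends[c] - starts[c]).total_seconds() / 60 for c in starts]
    return sum(durations) / len(durations)

print(mean_cycle_time_minutes(events))  # (45 + 90) / 2 = 67.5 minutes
```

The point is that because automation captures the timestamps as a side effect of doing the work, metrics like this come for free rather than requiring a separate measurement effort.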

So this lets you move past your competition while keeping your previous business focus. Given two companies, you and a competitor, offering the same products or services, if one does only that survival-level support that I talked about previously and the other makes more intelligent improvements focused on customer satisfaction, who do you think is going to win?

Now, the third stage of responding to disruption and adapting to change is innovation. You continue to do process and operational improvements through performance monitoring and data-driven analytics, but you also move into completely new business models. Maybe you repackage your products or services and sell them to completely different markets: you might move from commercial to consumer markets or vice versa, or sell into different geographies or different industries, because now you have more intelligent processes and this always-on, elastic infrastructure. Here again, you’re moving past your competition by not only improving your business but actually expanding into new markets, taking on new business models that are supported by this technology-based innovation.

So it’s the right application of technology that lets you do more types of business and more volume without increasing your employee headcount. Without automation and flexible processes you just couldn’t do that, and without data-driven analytics you wouldn’t have any understanding of the impact that such a change would have on your business, or whether you should even try it. So you need to have all of that: the data that supports the analytics, and the right type of technology applied to make your business operations more intelligent. This is what’s going to allow you to move from just surviving, to thriving, to innovating.

Now, that’s a lot of change. The question that all of you need to be asking yourselves now is not “is this the new normal?” but really “why weren’t we doing things this way before?” There are just a lot of better ways that we could be doing things, and we’re now being pushed to take them on.

That’s all for today. Next month I’m going to be attending the academic BPM conference in the Netherlands, and there’s always some cool new ideas that come up so watch for my reports from over there!

You can find more of my writing and videos on the Trisotech blog or on my own blog at column2.com. See you next time.


Sandy Kemsley’s Vlog - Best practices in business automation application development - design #2
Vlog

Best practices in business automation application

Implementation

By Sandy Kemsley

Video Time: 8 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here for the Trisotech blog with the sixth and final episode in a series on best practices in business automation application development.

In the first episode of the series, I talked about the automation imperative that’s driving organizations, there’s a lot of new technology that lets us automate things that were never possible before, and you need to be considering it or risk losing your competitive edge. In the second episode, I talked about best practices in strategic vision, namely making business automation a strategic direction and picking the right processes that will have the maximum impact. Then, in the last three episodes, I looked at best practices in the design of business automation, covering metrics, understanding your process (understanding what to automate and what not to automate) and finishing up with a session on design anti-patterns. So you can go back and review those earlier episodes if you haven’t already watched them. Today, I’m going to be wrapping up the series with some best practices in implementation methodology for business automation.

Best Practices In Implementation Methodology For Business Automation

Now, using an agile approach for implementing software projects isn’t unique to business automation, but the two are particularly well suited to each other. Today’s business automation tools are usually model-driven and low-code, so you’re able to get at least a workable prototype up and running without writing much, if any, code. That doesn’t mean there will be no code, since you may have parts of the system, such as integrations or specialized algorithms, that require traditional coding techniques.

However, agile techniques combined with model-driven low-code tools mean that you can quickly get a working version into the hands of a group of business users and let them beat it up. And that speed from idea to working system enables the first best practice in implementation: get a minimum viable product (MVP) rolled out into production as soon as possible, then iterate in place. Now, I’m showing this popular graphic, created by Henrik Kniberg several years ago, illustrating how to think about an MVP, and you can follow the link in this QR code to read his excellent article discussing the concepts in detail. You might find it useful if you ever need to explain the concepts of minimum viable product and iterative implementation to others in your organization.

Now, you don’t want to be in the position of taking months to perfect your system, deploying it to the business, and then having them tear it to shreds on the first day. Instead, you get it to them much earlier in the development cycle, so when they tear it to shreds, as they inevitably will, you haven’t invested so much effort in that first version that you’re resistant to their ideas. Then you iterate until there’s a consensus on how the system should look and behave.

That’s not something that’s going to be possible if you’re stuck in old waterfall methodologies. With waterfall, you’ll spend three months just writing requirements documents, plus time to get the business to sign off on them, since you’re forcing them to decide what they want months before they’ll actually see it. Then it’s another six months or so writing and signing off on design documents, then a lengthy implementation cycle, just to roll out a system that is almost guaranteed to not be what the business actually wants or needs. If a project is taking too long and is far over budget, take a look at the implementation methodology and you’ll probably find waterfall.

Now, waterfall methodologies work fine for specific types of implementation, such as technical integrations where you’re writing code that will connect systems using standard protocols. Where waterfall doesn’t work all that well is whenever there are business people who are going to interact closely with the software in their day-to-day operations, which is, of course, a lot of the time. Now, if there was ever a way to convince your organization to adopt agile or agile-like implementation methodologies, it’s showing them the power and flexibility of model-driven low-code tools. So whip up a quick prototype in a couple of days (it doesn’t have to be operational), show it to people, get some feedback, change it, and show them another version later the same week. They’ll get the idea that implementation can be iterative and also collaborative between business and development. And that is what will maximize the probability of success.

Now, this requires a pretty close connection between your implementation team and your business. It doesn’t mean that you can’t outsource implementation or that your developers can’t be in other locations, but it does mean that they need to work closely with the business as a team, deploy frequent iterations, and then be able to quickly integrate feedback into the implementation cycle. So, if there aren’t daily conversations between someone in the business and someone on the implementation team, you’re probably not connected as closely as you need to be. This also means that the business needs to trust the implementation team when they say that the MVP delivery is part of the process and not a final delivery.

Too many users have been burned by being stuck with version 1.0 of an implementation while the dev team is shunted off to the next project; that’s how we end up in these lengthy waterfall cycles of requirements and design documents. In fact, Kniberg suggests using a different term rather than MVP, such as “earliest usable product”, which implies that this is the earliest of many releases rather than a final delivery.

Now, the other best practice in business automation implementation that I want to talk about is being ready and able to pivot. This is not just a matter of changing little things on the user interface so that the business likes it better, or creating a new API to extract data from a legacy system. I’m talking about a radical change in direction or functionality once the business sees that first iteration and decides that you need to go down a different path. Sometimes, because it’s the business’s first real exposure to the capabilities of the business automation tools and the flexibility of a low-code approach, they just didn’t know that some of these things were possible. Then they look at their first iteration, they scratch their heads, and they say: “Hey, couldn’t we completely automate this part of the process?” or “Would changing this allow us to use remote workers for certain human steps?” or “Can machine learning be used to auto-adjudicate these decisions?” or “Can we integrate this functionality into our customer portal for self-service?”. You get the idea.

Now, Kniberg’s article had a great little graphic to illustrate that point: what if you were busy implementing the car, with the skateboard and bicycle as your iterative steps, but the business figured out they would really be better served by taking a bus? Think of it as radical re-engineering of the business process, driven by the business.

Now, business goals usually align with two high-level corporate metrics: net revenue and customer satisfaction. To achieve these, the business is going to be looking for more automation, more accurate and efficient processes, appropriate levels of customer self-service and occasionally a completely different way of serving your customers, or a completely new business model in order to provide services. Now, a smart business analyst who’s well versed in the business and the automation tools should be suggesting that type of different functionality early on, but if not, then when that first version is put in front of the business, be prepared to pivot.

That’s all for today. I’ll be back next time with something completely different, now that we’ve wrapped up this series on best practices in business automation projects.

You can find more of my writing and videos on the Trisotech blog or on my own blog at column2.com. See you next time.


Orchestration, or more specifically process orchestration, is a vital aspect of many industries, including healthcare and financial services. It involves the coordination of various people, processes, and technologies to achieve a specific goal or outcome. Multiple levels of orchestration exist, each with its own unique characteristics and requirements. On this page, we will delve into the different levels of orchestration and showcase how the Trisotech platform enables businesses in healthcare and financial services to manage and optimize their orchestration endeavors.

Process Orchestration 

What is Process Orchestration?

Process orchestration refers to the management and coordination of multiple interconnected processes to achieve a desired outcome efficiently.

It involves integrating various systems, applications, and human tasks into a seamless workflow. Process orchestration ensures that each step in the process is executed in the correct order, with the necessary data and resources available at the right time. By automating and streamlining complex workflows, process orchestration improves operational efficiency, reduces errors, and enhances overall productivity.

The rise in process automation, microservice architecture patterns, and organizations’ reliance on cloud providers has sparked the demand for advanced software capable of creating dynamic composite end-to-end business processes. These processes effectively manage and coordinate a wide range of components, such as legacy systems, automated workflows, RPA, AI/ML, and cloud services.

Process orchestration presents a more intricate challenge compared to task automation. In orchestration, the designer or modeler specifies the desired outcome. However, the process orchestration software must handle the complexity of both internal and external elements within a business. This includes systems, services, individuals, bots, events, and data. Moreover, the software should possess the ability to adapt to evolving situations and conditions, ensuring flexibility and responsiveness throughout the orchestration process.

Modeling Orchestrations

Using a model-driven approach is the most powerful way to specify orchestrated processes.

It offers abstraction and visualization capabilities and clear process-flow representation, adapts readily to change, facilitates collaboration and communication, provides validation and verification benefits, enables reusability and standardization, and integrates with automation tools for seamless execution. This approach empowers organizations to efficiently design, manage, and optimize orchestrated processes, leading to improved operational efficiency and business agility.

Declarative versus Prescriptive Processes:
Understanding the Difference

Processes are a series of steps or actions taken to achieve specific goals or outcomes, involving coordinated activities. These processes can be classified as either prescriptive or declarative.

Prescriptive processes are commonly employed in predictable situations that can be modeled using structured workflows. They specify a predetermined sequence of activities, indicating what needs to be done next. A well-known standard notation for capturing prescriptive workflow processes is the Business Process Model and Notation™ (BPMN™). BPMN serves as a prescriptive visual language, where activities are sequenced, and the model itself dictates the next activity.

On the other hand, declarative processes are typically utilized in dynamic situations where it is impossible to prescribe a fixed, structured workflow process. They are represented as independent statements that respond to varying conditions. The Case Management Model and Notation™ (CMMN™) case models exemplify declarative processes. CMMN is a standard declarative visual language, focusing on what can happen rather than dictating how things should happen.

To encompass the entire spectrum of possible processes, a combination of both prescriptive and declarative processes is necessary.
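The contrast can be illustrated with a toy sketch (task names and conditions are invented, and real models would be expressed in BPMN and CMMN rather than code): a prescriptive process fixes the sequence of activities, while a declarative one enables tasks whenever their conditions hold.

```python
# Prescriptive (BPMN-style): the model dictates a fixed sequence of activities.
def run_prescriptive(case):
    for step in ["capture", "verify", "approve", "notify"]:
        case.setdefault("done", []).append(step)
    return case

# Declarative (CMMN-style): independent statements become eligible whenever
# their conditions hold, in whatever order the case data allows.
DECLARATIVE_RULES = [
    ("request_documents", lambda c: not c.get("documents")),
    ("escalate", lambda c: c.get("priority") == "high"),
]

def run_declarative(case):
    """Return the tasks enabled by the current state of the case."""
    return [task for task, condition in DECLARATIVE_RULES if condition(case)]

print(run_prescriptive({})["done"])
print(run_declarative({"documents": True, "priority": "high"}))
```

The prescriptive run always produces the same ordered trace; the declarative run produces a different set of eligible tasks depending on the case data, which is exactly the "what can happen" versus "what must happen next" distinction described above.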

For a more detailed exploration of declarative and prescriptive models, including the use of international standards, you can refer to Denis Gagne’s informative Trisotech short video titled “Overview of BPMN, CMMN, and DMN.”

Orchestration Logic

Process orchestration encompasses the sequencing, coordination, and management of multiple prescriptive and declarative processes to execute larger workflows or processes. It commonly involves end-to-end organizational processes that span across multiple systems. An automated process orchestrator acts as a coordinator, assigning the work to the appropriate agents (people, bots, automated processes, decision services, systems, etc.) rather than completing the work itself.

Effective orchestration of complex workflows requires advanced logic capabilities to adapt to changing environments and events. It involves coordinating multiple processes simultaneously, responding to internal and external events, managing data flows, and making decisions based on both human and automated task outputs. A crucial element of orchestration logic is decision making. Decision automation software, based on the Decision Model and Notation™ (DMN™) standard, can be utilized to create decision service tasks, which play a role in driving the flow path and providing data manipulation capabilities within the end-to-end orchestration.
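As an illustration of how a decision drives the flow path, here is a toy decision table using a DMN-style first-hit policy (the rules and thresholds are invented, and a real DMN service would be modeled graphically, not hand-coded):

```python
# Hypothetical decision table with a "first hit" policy:
# each row is (condition, outcome); the first matching row wins.
DECISION_TABLE = [
    (lambda amount, score: amount < 1000 and score >= 700, "auto_approve"),
    (lambda amount, score: score < 500, "auto_decline"),
    (lambda amount, score: True, "manual_review"),  # default rule
]

def decide_route(amount, score):
    """Decision service task: its output selects the orchestration's flow path."""
    for condition, outcome in DECISION_TABLE:
        if condition(amount, score):
            return outcome

print(decide_route(800, 720))   # auto_approve
print(decide_route(5000, 450))  # auto_decline
print(decide_route(5000, 650))  # manual_review
```

In an orchestration, the string returned here would be the data that a gateway evaluates to choose between the automated path and the human task.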

Process orchestration can occur at various levels, including the coordination of human activities, web services, lambda functions, or data manipulation activities. However, a process orchestrator is most effective when it can orchestrate and coordinate both prescriptive processes (BPMN Models) and declarative processes (CMMN Cases) as end-to-end processes, spanning all levels of activities.

Process Orchestration Use Cases

Trisotech process orchestration offers a wide range of capabilities, and here are several customer use cases that highlight its versatility:

Business Logic
Orchestration

Orchestrating human, system, decision, and service tasks based on specific logic, following various execution paths. This type of orchestration, often known as a workflow, can be complex in real-world scenarios.

REST API
Orchestration

Process orchestrations can seamlessly invoke both cloud-based and local web services, providing access to over 200 million public APIs suitable for end-to-end process orchestrations.

Decision/Rules Management
Orchestration

Process orchestrations can leverage decision management and rules management tasks, including direct integration with Decision Model and Notation™ (DMN™) model-based services. Decision management services enable responses to process events and data that guide process orchestration flows.

AI/ML
Orchestration

Process orchestrations can coordinate decision management tasks that incorporate AI/ML capabilities through standards like PMML or by utilizing specific engines such as ChatGPT and Microsoft Text analytics.

Data
Orchestration

Process orchestrations utilize data input and output mapping, data validation, and integration between processes, replacing traditional extract, transform, and load (ETL) tools in end-to-end business processes. Data mapping tasks facilitate the integration of legacy system data with modern web services and mobile applications during digital transformation initiatives.
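A data-mapping task of this kind boils down to renaming and validating fields between the legacy and modern representations; a minimal sketch, with invented legacy field names:

```python
# Hypothetical mapping from legacy field names to the shape a modern
# web service expects.
FIELD_MAP = {"CUST_NM": "customer_name", "ACCT_NO": "account_id"}

def map_record(legacy):
    """Rename mapped fields, drop unmapped ones, and validate the result."""
    modern = {FIELD_MAP[k]: v for k, v in legacy.items() if k in FIELD_MAP}
    # Simple validation: the downstream service requires an account id.
    if "account_id" not in modern:
        raise ValueError("missing required field ACCT_NO")
    return modern

print(map_record({"CUST_NM": "Ada", "ACCT_NO": "42", "UNUSED": "x"}))
```

In the platform this mapping would be configured declaratively on the task rather than written as code, but the input-mapping, validation, and output-mapping steps it performs are the same.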

AWS Lambda Function
Orchestration

Process orchestrations enable the execution of Amazon Web Services Lambda Functions, allowing the serverless execution of various code. You can use BPMN events to trigger Lambda Functions and use Lambda URL HTTP(S) endpoints for invoking the functions.
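Invoking a Lambda function URL is ultimately just an HTTP POST; a minimal sketch using only the standard library (the URL shown is a placeholder, not a real endpoint):

```python
import json
import urllib.request

def build_lambda_request(function_url, payload):
    """Build an HTTP POST request to a Lambda function URL."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        function_url,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def invoke_lambda(function_url, payload):
    """Send the request and return the decoded JSON response."""
    req = build_lambda_request(function_url, payload)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

# Example (placeholder URL; a real function URL has this general shape):
req = build_lambda_request(
    "https://abc123.lambda-url.us-east-1.on.aws/", {"orderId": 42})
print(req.get_method())
```

An orchestration engine would typically do this on your behalf when a service task or event fires; note that real function URLs may also require IAM-based request signing, which this sketch omits.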

DevOps Application Release
Orchestration (ARO)

Process orchestrations manage and coordinate DevOps tools used in continuous integration and continuous delivery (CI/CD) pipelines. These pipelines automate the rapid development and delivery of tested, high-quality software, incorporating dependency mapping, process modeling, and collaboration tools.

Bot
Orchestration

Process orchestrations incorporate different types of “bots,” including RPA bots, chatbots, social bots, download bots, ticketing bots, and more.

Content Delivery
Orchestration

Process orchestrations integrate email tasks, video and document integrations, mobile applications, and edge-location cached content from a Content Delivery Network (CDN). CDNs cache and distribute content globally, such as documents and videos.

Healthcare Interoperability
Orchestration

With its Healthcare Feature Set (HFS), Trisotech process orchestrations encompass care pathways, clinical guidelines, Clinical Decision Support (CDS), FHIR data stores (health data fabric), SMART™ Health IT applications, and a wide array of pre-built evidence-based workflow and decision models available in the BPM+ Health™ standard.

Financial Services
Orchestration

Trisotech offers BPMN extended modeling and decision management (DMN) technology tailored for the financial services industry. No-code drag-and-drop “Accelerators” are available for standards like MISMO™ (Mortgage Industry Standards Maintenance Organization), FIBO® (Financial Industry Business Ontology), and Panorama 360. These standards and templates facilitate the creation of automation services used in process orchestrations within financial services.

These use cases illustrate the diverse applications of Trisotech process orchestration across various industries and domains.

Trisotech Orchestration Platform: Empowering End-to-End Automation

Trisotech Digital Enterprise Suite (DES) stands as the world’s most robust orchestration platform, enabling the creation of automated end-to-end process orchestrations. With support for BPM+ (BPMN, CMMN and DMN) models, it effortlessly coordinates various components such as artificial intelligence/machine learning (AI/ML), microservices, legacy systems, APIs, AWS Lambda functions, RPA, IoT devices, and more.

Visual Low-code/No-code Automation

Trisotech offers a comprehensive low-code/no-code visual development, administration, and automation environment, all within a browser-based platform. This streamlined approach simplifies the entire process.

Scalability and Continuous Operations

The Digital Enterprise Suite ensures scalability and continuous operations through vertical and horizontal scaling capabilities. Additionally, containerization support enables failover and geographic dispersion for enhanced reliability.

Pre-built Connectors

The Trisotech platform includes pre-built connectors, facilitating seamless integration with various systems and services.

Custom Connectors: Seamlessly Extend Functionality

On the Trisotech platform, creating custom connectors is a breeze. Simply upload an OpenAPI or OData file, and you’re ready to go. Don’t have an OpenAPI file? No worries. You can easily create your own operations in the operation library by filling out the provided template. With this flexibility, expanding the platform’s functionality to meet your specific needs becomes effortless.
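For illustration, here is the shape of a minimal, hypothetical OpenAPI 3.0 description of the kind you might upload to define a single-operation connector. The title, server URL, and path are invented for the example; it is expressed here as a Python dict serialized to JSON:

```python
import json

# A minimal, hypothetical OpenAPI 3.0 description with a single operation.
# The title, server URL, and path are placeholders, not a real connector.
connector_spec = {
    "openapi": "3.0.3",
    "info": {"title": "Customer Lookup Connector", "version": "1.0.0"},
    "servers": [{"url": "https://api.example.com"}],
    "paths": {
        "/customers/{id}": {
            "get": {
                "operationId": "getCustomer",
                "parameters": [{
                    "name": "id",
                    "in": "path",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {"200": {"description": "Customer record"}},
            }
        }
    },
}

# Serialize to the JSON document you would upload.
spec_json = json.dumps(connector_spec, indent=2)
```

The `operationId` is what a tool typically surfaces as the callable operation name once the file is imported.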

Pre-defined Data Record Layouts

Extended modeling support accelerators provide pre-defined data record layouts tailored for specific industries. Examples include Financial Services (MISMO™, FIBO®, Panorama 360) and Healthcare (Healthcare Feature Set, FHIR®, CDS Hooks™, SMART™, BPM+ Health™).


Pre-defined Industry standard process definitions

Accelerate your development process with drag-and-drop process definitions from Trisotech’s extended modeling support accelerators. Industry-specific templates are available for Healthcare (nearly 1,000 free pre-built evidence-based workflow and decision models) and for APQC process classification frameworks (Cross-Industry, Banking, Property & Casualty Insurance, Aerospace and Defense, Healthcare Provider, etc.).

Process Orchestration Patterns and Behaviors

Trisotech Process Orchestration provides a rich set of advanced patterns and behaviors to orchestrate end-to-end processes effectively.

Trisotech Orchestration Platform provides the necessary tools and features to achieve comprehensive end-to-end automation, empowering organizations to streamline their processes effectively.



OMG®, BPMN™ (Business Process Model and Notation™), DMN™ (Decision Model and Notation™), CMMN™ (Case Management Model and Notation™), Financial Industry Business Ontology (FIBO®) and BPM+ Health™ are either registered trademarks or trademarks of Object Management Group, Inc. in the United States and/or other countries.

HL7® and FHIR® are the registered trademarks of Health Level Seven International and the use of these trademarks does not constitute an endorsement by HL7.

CDS Hooks™, the CDS Hooks logos, SMART™ and the SMART logos are trademarks of The Children’s Medical Center Corporation.


Sandy Kemsley’s Vlog - Best practices in business automation application development - design #2
Vlog

Business automation best practices

#3 – Application development – Design (part 2)

By Sandy Kemsley

Video Time: 8 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here for the Trisotech blog with the fourth in a series on best practices in business automation application development.

In the first episode, I talked about the automation imperative that’s driving all organizations to automate. There’s a lot of new technology that lets us automate things that were never possible before, and you need to be considering it or risk losing your competitive edge. In the second episode, I examined best practices in strategic vision, including making business automation a strategic direction and picking the right processes to automate that will have the maximum impact. Then in the last episode, I looked at best practices in the design of business automation, starting with metrics. Metrics need to be built into automation from the start and need to be aligned with corporate goals. I also talked about how design needs to be based on an understanding of the current process: not a micro-level as-is analysis, but an understanding of the key elements of success or failure in that process. Go back and review those earlier episodes if you haven’t already watched them.

Now in this video, I’m continuing with more best practices in the design of business automation. I want to look at the two sides of automation design that may seem to be in conflict with each other, but they’re actually the most important balancing act that you’ll have during design.

The first of these is: automate whatever possible. If there’s a technology that you can apply to make your processes more automated, you should be considering it. The second is: don’t automate out people where they add value. In other words, don’t get so caught up in the thrill of automation that you remove some essential human contributions.

Just like Goldilocks, however, we need to know how to apply not too little and not too much automation, but just the right amount. So, let’s look a little more closely at this.

Let’s start with the idea that we want to automate everything that’s possible. This is driven by some of the reasons that we turn to automation in the first place: efficiency, speed, cycle time, quality. These can all be significantly improved with appropriate automation. Notice I say appropriate automation. And improving these leads not only to an improved bottom line in terms of costs, but also improved top-line revenue because of customer satisfaction. This is pretty classic business automation design, where you look at what you’re doing now and how to make it faster, better and cheaper. That makes things more efficient, it makes them more cost effective, and it often leads to improved customer satisfaction for fairly straightforward applications.

Now, automating repetitive tasks is the easiest way to start. So, you look at those individual steps: if a step is done the same way every time, then there’s almost always some way to automate it. It might be using decision management, business process management or robotic process automation, but the basic idea is to replace a repetitive human step with a repetitive automated step. Then you can look at the more complex automation of less repetitive tasks by using things like machine learning and artificial intelligence. ML and AI can make more nuanced decisions that would normally have to be made by a person who analyzes not only the current transaction, but also understands how similar transactions and decisions were handled previously. In other words, as the automation systems get smart enough to learn from context and past activities, they gain knowledge and skills in the same way that a person does when they’re learning how to do the same task.

Now, you also want to look above the task level, so you’re not just looking at whether individual tasks can be automated or not. Any sort of broad automation effort should look at the organizational goals and then consider whether things are really being done in the way that they should be. You don’t necessarily have to do it the way you did in the past. Understanding what you want to get out of a particular end-to-end process lets you step back and consider alternative processes and methods that may look completely different from what you’re doing.

Now, remember old-school process re-engineering, where the idea was to radically redesign processes? This is a little bit like that, but with tools that can actually enable a much smarter automated process. Let’s take insurance claims as an example: think about redesigning an insurance claims process so that it has fully automated data and document capture at the front end, and auto-adjudication for simpler claims. This turns what was previously a very manual claims process on its head, and through the application of a variety of technologies you can have some claims that are completely hands-off from beginning to end. Nobody inside your organization has to touch them; they process faster, they have fewer data entry errors, they apply data and decisions consistently. All of this makes your customers happier, because their claims are handled faster, more effectively and more consistently. It also eliminates a lot of manual internal tasks, which makes your operations more efficient and less expensive, and frees up your skilled knowledge workers for handling the tough problems and customer interactions that can’t be replaced by automation.

But automation is not a panacea. You have to understand enough about your processes and your customers to know where your knowledge workers are adding value to the process. Sometimes that’s customer interaction, where they’re dealing with customers directly in order to resolve complex problems. This most often happens when something goes wrong with the normal process, such as a defective product or a customer profile that doesn’t match the usual pattern. In that case, you want to get a knowledge worker involved as soon as you detect that the normal process isn’t working the way it should. Now, you can wait for your customer to type or say “agent, agent” on your IVR or chat system, or you can be proactive and recognize, before they even realize it, that something needs to be escalated to a person for resolution.

The other main situation for involving knowledge workers is when you have a complex decision that just can’t be made using automation technologies with any degree of certainty or can’t be made yet with automation technologies. If we think back to the insurance example, there are a lot of claims that are just too complex to adjudicate automatically. These are best handled by a combination of automated tasks and knowledge workers.

That last point really brings home the message about design that I’m making in this entire segment. A lot of processes are not all automation or all manual; they’re a combination of both. As a designer, you need to understand what can be automated and what is best left in the hands of the people in the process. As technologies get smarter, though, some of the things that are best done by knowledge workers today will be able to be automated effectively. If you think about auto-adjudication of insurance claims, this was never possible in the past, and now it’s starting to become possible for more and more complex claims.

We’ve seen this happen in a number of different ways in just the past few years. When you look at what were manual decisions now being handled automatically with ML and AI, it really drives home that automation design is a constantly moving target, and you need to understand how new technologies impact your business operations. That means you need to keep up on the automation technologies and see how they might be added into your business operations at any point in time to help improve things without impacting your customer satisfaction. This goes back to what I talked about in the last episode: you need to understand the key points in your business operations, so that you know what makes them successful and what can make them fail, and then consider where the people parts of the processes contribute to that, versus what can be automated.

That’s all for today. Next time, I’m going to finish up design (I thought I was going to do that this time) by covering some of the main design anti-patterns: things to look for that indicate you may have gone wrong in your design somewhere. And then, after that, I’m going to finish up the whole series with some best practices in implementation methodologies for business automation.

You can find more of my writing and videos on the Trisotech blog or on my own blog at column2.com. See you next time.


Sandy Kemsley’s Vlog - Best practices in business automation application development - design #1
Vlog

Business automation best practices

#3 – Application development – Design (part 1)

By Sandy Kemsley

Video Time: 7 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here for the Trisotech blog with the third in a series on best practices in business automation application development.

In the first episode, I talked about the automation imperative: all of our cool new technology lets us automate faster and better and automate things that were never possible before. However, everyone else has access to the same technology and you need to up your game in business automation or your organization won’t be able to compete.

In the second episode, I looked at two best practices in strategic vision for business automation: first make business automation a strategic direction and second pick the right processes to automate, especially at the beginning. Go back and review those two earlier episodes if you haven’t already watched them.

In this video and the next one, I’m going to delve into one of my favorite topics: best practices in the design of business automation. The first design best practice that I want to talk about is related to metrics.

Metrics

Metrics are often not thought about until after a process has already been automated, but that’s completely the wrong way around. How do you even know what to design and implement unless you understand what’s important? Metrics are supposed to measure what’s important to your organization, and this is not just about departmental efficiency, like doing the same things faster, better and cheaper. You need to take more of a business architecture view of metrics, where you start with the corporate goals at the top level, and then use that to create metrics for the end-to-end processes that might span multiple departments and multiple systems but are aligned back up to those corporate goals.

Improve Customer Satisfaction

Let’s say you have a corporate goal to improve customer satisfaction, and quite frankly, who doesn’t? Now, that’s a little bit fuzzy, but if you dig into the reasons for your customers’ dissatisfaction, you’ll probably find that this translates to operational metrics at the lowest level: things like quality, end-to-end cycle time, and process transparency. In other words, if you could process the customer’s order right the first time, get it done faster, and let them see what’s happening with the order during the process, then your customer satisfaction is going to improve.

Now, what does this translate to from a design standpoint? Well, if you want to improve the quality of your customers’ orders, then automation is definitely your friend. Look for the places in the process where a person is doing repetitive tasks and decisions that are prone to errors, such as re-entering data between your website and your order fulfillment system, or deciding which shipping method to use. You can then design in automated connections between systems and design automated decisions where there are clear business rules. This is going to improve data accuracy and reduce decision variability, both of which are major contributors to the entire process quality.
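As a sketch of what an automated decision with clear business rules can look like, here is a hypothetical shipping-method rule set; the thresholds and carrier names are invented for illustration:

```python
def choose_shipping_method(weight_kg: float, priority: bool, destination: str) -> str:
    """Pick a shipping method from explicit, auditable business rules.

    The thresholds and carrier names are illustrative, not a real rate table.
    """
    if priority:
        return "overnight-courier"
    if destination == "international":
        return "international-freight"
    if weight_kg <= 2.0:
        return "standard-post"
    return "ground-freight"

print(choose_shipping_method(1.2, False, "domestic"))  # standard-post
```

Because every branch is explicit, the same order always gets the same decision, which is exactly the reduction in decision variability that the automation is after.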

Improving the End-to-End Cycle Time

Now, adding this type of automation also has the effect of improving the end-to-end cycle time, which is the second metric related to our corporate goal of improving customer satisfaction. Those activities and decisions that are automated now happen instantaneously, at any time of the day or night, rather than waiting for a person to process them during regular business hours.

Process Transparency

Now, the third metric commonly tied to customer satisfaction is process transparency. The customer wants to know what’s happening with their order, and you need to build in ways for them to get that. It might be notifications sent to their email when a milestone is reached, or it could be a portal that they log into to check the order status directly themselves. Once you have an end-to-end process orchestrated, even if it’s a combination of automated and manual steps across all these different systems and departments, you can start to add in these points of visibility for your customer to track their own order. And then, as an extra bonus, by providing ways for them to serve themselves in terms of monitoring their order, you reduce the number of calls into your call center. This reduces your costs at the same time as you’re improving customer satisfaction. Total win-win!
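A minimal sketch of building these milestone visibility points into a process, with hypothetical milestone names; a real implementation would push the notifications to email or a customer portal rather than an in-memory list:

```python
from dataclasses import dataclass, field

@dataclass
class OrderTracker:
    """Record process milestones so the customer can see order status."""
    order_id: str
    milestones: list = field(default_factory=list)
    notifications: list = field(default_factory=list)

    def reach_milestone(self, name: str) -> None:
        self.milestones.append(name)
        # Stand-in for sending an email or pushing a portal update.
        self.notifications.append(f"Order {self.order_id}: {name}")

    def status(self) -> str:
        """Latest milestone, which is what a self-service portal would show."""
        return self.milestones[-1] if self.milestones else "received"

tracker = OrderTracker("A-100")
tracker.reach_milestone("payment confirmed")
tracker.reach_milestone("shipped")
print(tracker.status())  # shipped
```

The point of the sketch: once milestones are emitted as events, customer-facing status becomes a read of the latest event rather than a call into your call center.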

So that’s just an example of how you can take a corporate goal like customer satisfaction and take that down to the level of operational metrics, which you need to do. You then need to make sure you have those linkages between those operational metrics and the corporate goals so you can make sure that they’re serving the goals in the right way.

Understand what makes the process work

Now, the second design best practice that I want to raise here is that you need to understand what makes the current process work, or not work. This doesn’t mean that you do a micro level as is analysis and then just automate the same steps, rather you need to dig into your current business process to understand the key elements of success.

You might start with some sort of automated analysis, like process mining, for introspecting your current process, but the key thing here is to go out and talk to the knowledge workers who currently work in the process. There is a great episode of Michael Lewis’s Against the Rules podcast that really highlights this. Michael Lewis is the guy who wrote the book Moneyball and several other books that dig into why certain things in business work the way that they do. The episode is called Six Levels Down, and it looks at a US healthcare billing company and how it became successful by gaining a deep understanding of how to apply the right billing rule at the right time. And the way that they did this was to find the people six levels down from the top of the organization; in other words, the people who actually do the work and understand how and why things work. These are the people at the front lines of the business processes. Even if they’re not customer-facing, they understand which parts of the process and rules are necessary and which are not.

And spending time with these frontline people while they’re doing their work has always been part of my design process. I definitely see this as part of this best practice of understanding what a process needs to do in order to be successful and I think that that’s something that you should incorporate into your design practices when you’re looking at business automation.

Now, once you’ve understood what’s needed to make the process successful, you have to look at how to design that into your new automated process. You might have some parts that still remain manual because that’s part of the secret sauce of the process, but you’ll probably find a lot will be able to be automated. You have to start, however, with that kernel of truth about what’s required for a successful customer transaction within that process.

That’s all we have time for today. Next time I’m going to talk about two more best practices in business automation design and a couple of failure indicators that you can watch out for. I’ll wrap up the series after that with a video on best practices in implementation methodologies.

That’s it for today. You can find more of my writing and videos on the Trisotech blog or on my own blog at column2.com. See you next time.


Sandy Kemsley’s Vlog - Business automation best practices #1 - Introduction
Vlog

Business automation best practices

#1 – Introduction

By Sandy Kemsley

Video Time: 4 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here for the Trisotech blog with the first in a series on best practices in business automation application development.

I want to set the stage for this series by talking today about what I call the automation imperative. We have a lot of cool technology now that can be applied to automating business processes and decisions.

Now, all this technology makes more automation possible. These newer technologies let us automate things faster and better, and in fact automate things that were never possible before. But this has a downside: yours isn’t the only organization with access to these technologies, and everyone else is trying to automate their businesses using the same methods and the same technologies. That means that whatever market you’re in has become more competitive, since that rising tide of new technology is floating all the boats, not just yours. More nimble competitors will force you to automate in order to compete, or you’ll die in the marketplace. This is the automation imperative: it’s essential that you leverage automation in your business, or you just won’t be able to survive.

So easy, right? Just plug in some of these new technologies and off you go? Sadly, it’s not that easy. A study done by Boston Consulting Group showed that 70% of “digital transformation projects” don’t meet their targets. 70! 7-0, that’s a big number. Well okay, digital transformation is one of those loose terms that gets applied to pretty much any IT project these days, so let’s focus that down a little bit. Ernst & Young looked at RPA projects. Now, robotic process automation vendors claim that you’ll have your return on investment before they even get driven all the way out of your parking lot.

Now, what E&Y found, though, is that 30 to 50 percent of initial RPA projects fail. So, if organizations aren’t succeeding with a technology that’s supposed to be the most risk-free way to improve your business through automation, then there might be some problems with the technology, but probably someone is doing something wrong with how they’re implementing these projects as well. We have billions of dollars being spent on technology projects that fail. A lot of companies are failing at business automation. You don’t have to be one of them.

Now, this series of videos is focusing on best practices, that will help you to maximize your chances of success in application development for business automation. Or if you want to look at it in a more negative light, they will minimize your chances of failure.

I’m going to split the series into three parts.

So we’re going to be covering quite a bit of territory and looking at a number of things.

Now, in each of these videos, I’m going to discuss a couple of things you should be doing (those best practices) and also some of the indicators that you might be failing: anti-patterns that you can look for as you’re going through your projects, to figure out if something’s going off the rails, maybe before you have a serious failure. So stay tuned for the coming videos in this series.

That’s it for today. You can find more of my writing and videos on the Trisotech blog or on my own blog at column2.com. See you next time.


Sandy Kemsley’s Vlog - Composability: leveraging best of breed
Vlog

Composability: Low-code versus model-driven

By Sandy Kemsley

Video Time: 7 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here today for the Trisotech blog with a few more thoughts on composable applications. In my previous two videos, I talked about the general definition of composability, and why you need to consider a best-of-breed approach to components. Today I’m going to look at how model-driven fits into the landscape.

Now, although it’s not strictly necessary, most platforms that fall into the composable category are also low-code, meaning that you don’t write a lot of technical code in a language like Java, but instead graphically assemble applications using building blocks in a way that’s more easily understood by less technical citizen developers and business analysts. Some low-code environments use predefined application templates that are customized by modifying parameters, so they’re even easier to use. Others allow the developer to create some simple models, like a linear workflow and the data structure of the parameters, but that’s not necessarily fully model-driven.

Now, let’s look at these three concepts together and see how they fit. They are somewhat independent, but there are overlaps.

As I discussed in a previous video, application development environments can be composable without being either low-code or model-driven. The concept of composability is almost as old as programming itself and it includes things like calling external libraries from traditional coding languages.

Now, low-code, on the other hand, does not imply that the environment is either composable or model-driven. It could be a walled garden for creating applications based on modifying parameters of predefined templates, and might not even allow any external components to be added.

And model-driven doesn’t necessarily enforce composability, since it could just be for modeling and executing applications that don’t even call external services. However, model-driven is usually considered to be, if not low-code, at least not full-on technical coding, although it’s usually quite a bit more technical than most low-code environments.

Confused yet?

Let’s go back to the definition of model-driven engineering. It’s about creating conceptual domain models of everything related to your business: processes, decisions, data, and much more. Then, instead of having to write code that explicitly defines these entities, applications are created from these abstract representations by putting the models together. Business process management systems, or BPMSs, are a great example of model-driven application development, since they allow you to create a graphical process model using a standardized notation, and then that model actually executes: it is your running application.

You need a bit more than just the process model for an entire application, of course, but a lot of the business logic can be represented in a process model using BPMN, which is the modeling standard. It’s the same with case management and decision management. We have these three related standards: BPMN, CMMN for case management, and DMN for decision management. And these provide graphical representations for modeling, as well as the underlying execution semantics that allow these models to be directly executable. So, they’re not just pretty pictures that represent what you want your application to do, they are actually part of the application.

Now, one argument against the wider use of these modeling standards — BPMN, CMMN and DMN — is that they’re just a bit too technical for the audience that’s targeted by low-code systems. In other words, they are low-code-ish: definitely less code than a traditional coding language, but more of a visual coding language than most low-code environments. And in fact, you can call these executable models as components from a traditional coding language. So, even though the models might be created graphically, in a model-driven and relatively low-code environment, and then used as components in a composable environment, the whole end-to-end application would not be considered low-code, because it would be assembled from a more traditional coding language like Java.

So, let’s sum it up then.

So, that means that instead of the overlapping Venn diagram that I showed you earlier, which is technically correct, the way that these systems are practically created and used today falls into a simpler nested pattern.

So, that puts model-driven as a subset of low-code, and low-code as a subset of composable. So, you need to look at your developer audience to know which of these things are going to be important to you.

Are you looking at just a composable environment?

Always. You definitely want to have a composable environment, because there are too many third-party services and components out there that you might need to put together (see my previous video on best-of-breed to understand more about that).

Do you need to have low-code?

Yes, if you have less technical developers; even your technical developers can use it, either for prototyping or for applications that don’t need the type of optimization or hardening that would usually be done with more complex written code.

What about model-driven?

I would say, if you’re going low-code, go model-driven. Go with something that allows you to create models, such as data models and flow models, even if they’re fairly simple, rather than the limited configurable-template sort of paradigm. Technical complexity can obviously vary a lot with model-driven, from very simple models to full BPMN and DMN, but you’ll find something in there that meets application developer needs across whatever the spectrum might be.

That’s it for today. You can find more of my writing and videos on the Trisotech blog or on my own blog at column2.com. See you next time.


Sandy Kemsley’s Vlog - Composability: leveraging best-of-breed
Vlog

Composability: Leveraging best of breed

By Sandy Kemsley

Video Time: 5 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here for the Trisotech blog with a few more thoughts on composable applications.

In my last video, I went through the basics of composability and some of the benefits of composing applications by assembling components or services. Having the ability to reuse components in multiple applications is definitely one of the big benefits, as well as being able to stand on the shoulders of other developers by calling their components from your application rather than developing everything yourself. I also spoke briefly about interoperability, which means that any service or component written by anyone, using any programming language, can be used by anyone else creating a new application in potentially some other programming language, as long as they both use standard interfaces for interoperability.

Now, we’ve been through a few generations of these interfaces. Some of you must remember CORBA and SOAP, and the current gold standard is REST. As long as the component has a RESTful API and the application knows how to make a REST call, then they can work together. Now, all of this interoperability is important, because it allows organizations to take a best-of-breed approach to composing applications.
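As a minimal illustration of what "the application knows how to make a REST call" means in practice, here is a hedged Python sketch using only the standard library. The endpoint URL, resource path, and field names are invented for illustration; the point is that the caller depends only on the URL and the JSON contract, not on how the component is implemented.

```python
import json
import urllib.request

def build_rest_call(base_url: str, resource: str, payload: dict) -> urllib.request.Request:
    """Assemble a plain HTTP POST request for a hypothetical REST component.

    Nothing about the component's implementation language or internals
    leaks through -- only the URL and the JSON body matter to the caller.
    """
    return urllib.request.Request(
        f"{base_url}/{resource}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Actually sending it is one line: urllib.request.urlopen(build_rest_call(...))
req = build_rest_call("https://components.example.com", "claims/assess",
                      {"claim_id": "C-1001", "amount": 2500})
```

Any REST-capable environment, low-code or otherwise, does essentially this under the covers, which is why the component on the other end can be written in anything.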

Why would you use the substandard functions that your monolithic application development environment provides when you could directly add in a better component from a best-of-breed vendor? Now, we see this a lot in the business process management world, with this push towards iBPMS (all-in-one application development environments that bundle process engines, decision engines, user interfaces and everything else). Now, there are a lot of vendors that have really great process engines, and then they’ve bundled in some not-so-great versions of other capabilities, just so that they can tick all of the iBPMS boxes. Now, you can still use best-of-breed external components in your applications, but many organizations won’t because they’re already paying for the monolithic vendor’s walled garden. Plus, the vendors would have you believe that things just won’t work quite as well if you don’t use their components. Now, can you really trust a vendor that wants to sell you an environment for creating composable applications, but doesn’t want you to use anyone else’s components?

The whole idea of composability is independent building blocks, after all, so you can plug and play any type of functionality into your application. If a third-party component won’t work as well, according to the vendor, you should be asking them: why not? Are they doing something non-standard behind the scenes that would become obvious if you used someone else’s components? Are they denying external access to some shared infrastructure, like an analytics data store? Or are all of their components actually one big inseparable monolithic engine under the covers?

Now, you’re going to need some capabilities as well that just aren’t offered by your platform vendor, and this is really where best of breed becomes important. These are usually things that are specific to your industry. So let’s say you found a great AI-driven chatbot for insurance claims that you want to integrate into your claims application. Or a SCADA component that integrates with industry processes and physical machinery for running your manufacturing plant. You want to be able to locate any of these components in service directories and marketplaces, then plug them right into your applications. Now, I really believe that creating enterprise-strength applications that are still lightweight and agile relies on avoiding monolithic environments. Select the best low-code environment for your application builders, select the best analytics platform, the best process automation, the best decisioning, the best artificial intelligence and machine learning, and any other components for your particular needs. Then use that low-code environment to compose them into applications. As you build those applications, figure out what other capabilities you need, and add those. If a component no longer serves your needs, then replace it with a more appropriate one, possibly from a competitor. And with cloud-based environments, you don’t even need to worry about where the component exists, only that it performs when you call it, using that standard REST interface.

I’m going to continue on the composability topic for at least one more video. In the next video I’ll talk about the difference between low-code and model-driven for composability, because there’s a difference, and there’s a lot of confusion over that. That’s it for today. You can find more of my writing and videos on the Trisotech blog or on my own blog at column2.com. See you next time.


Sandy Kemsley’s Vlog - Composability: Old Wine in New Bottles?
Sandy Kemsley Photo
Vlog

Composability: Old Wine in New Bottles?

By Sandy Kemsley

Video Time: 5 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here for the Trisotech blog with some thoughts on composable applications.

Composability, in its most general form, is the ability to assemble components into a variety of different configurations. Think of Lego, where each building block is a component, that can be assembled into an unlimited number of larger structures, from rocket ships to castles. Now, composability in software development is the ability to create applications by piecing together software components. Not really that different from the Lego example. Now, funnily enough, it’s being promoted as a “hot new capability”, and I even saw an otherwise quite reputable source describe composability as a unique characteristic of cloud computing. It is, however, neither new nor unique to cloud computing, although there are some modern twists on it, that I’ll go into in a little bit more detail.

Now, for you old-timers in software development, the idea of reusable components dates back to the 1940s, when subroutines were first described. A subroutine is a reusable, self-contained set of functionality that can be called from other applications, which is pretty much the same high-level definition as a function, a service, or even a microservice. Yes, there are technological distinctions between all of these, but they all exist for the same purpose: being able to quickly assemble capabilities into applications without having to write all that code yourself every time, and then being able to rearrange the capabilities just by rearranging the order in which the services are called. In short, composability makes it faster to create applications and then to change them, while allowing you to include functionality that you didn’t have time, or maybe didn’t have the skills, to create yourself. So speed, reuse, flexibility, and enhanced functionality are the things that we get out of composability.
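The "rearrange the order in which the services are called" idea can be sketched in a few lines. This is a hedged, minimal Python illustration, not any particular platform's API: each function is a tiny component, and two different "applications" are just different assemblies of the same parts. All names and the tax rate are invented.

```python
# Each function below is a tiny reusable component (a modern "subroutine").

def validate(order: dict) -> dict:
    """Mark the order valid if it has a positive amount."""
    order["valid"] = order.get("amount", 0) > 0
    return order

def enrich(order: dict) -> dict:
    """Add a computed tax field (illustrative 10% rate)."""
    order["tax"] = round(order["amount"] * 0.1, 2)
    return order

def total(order: dict) -> dict:
    """Sum amount plus any tax into a total."""
    order["total"] = order["amount"] + order.get("tax", 0)
    return order

def compose(*steps):
    """Build an 'application' as a pipeline of components, in call order."""
    def app(order):
        for step in steps:
            order = step(order)
        return order
    return app

quote_app = compose(validate, enrich, total)   # one assembly of the components
simple_app = compose(validate, total)          # same parts, different application
```

Swapping, reordering, or dropping a step changes the application without touching any component, which is the speed-and-flexibility payoff described above.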

Components that your organization creates internally could share engines and data stores. For example, services that access processes in a business process management system likely share the same underlying BPM engine, and probably even the process state data. Services that access your legacy systems are sharing the same legacy data store in the back end, which makes them inherently stateful as well. This is really the modern version of our old subroutines, where we were developing reusable ways to work with our own methods and systems.

But many of the services that you may want to include in your application don’t exist within your organization, so you’re going to be calling external services when you’re composing applications. External services are more likely to be completely self-contained and stateless: they’ll have their own engines, they’ll have their own data stores, and all of that is going to be invisible to you as the calling party. You see the service interface, but you don’t need to know how they perform the operations, as long as they do the right activity for you.

Things do start to get more interesting in composability with cloud architectures.

Now, again for the old-timers, think of these more like those third-party software libraries that you used to include in your projects to perform specific functions that you didn’t want to develop yourself, or that had functionality based on someone else’s intellectual property built into them. A key enabler of today’s composability is the interoperability provided by standardized calling interfaces. And that’s where cloud architectures have had a big influence.

So these days, we’re all using REST and similar API formats to add new components to our applications regardless of where they are, whether these are internal or external services. The components could be running on the same server as the calling application, or they could be in a different container within the same organization, or they could be on the other side of the world and managed by a different company. And it doesn’t matter: we’ll use the same interface to compose them into our applications regardless.

These standards also allow for language interoperability.

So a component created in Java, or COBOL for that matter, could be called from a low-code composition platform, or from a JavaScript web app, and neither side, the calling side or the service side, has to know anything about the other’s technologies. We have that interoperable layer in between. And that’s definitely an improvement over the old days, when my Fortran programs could only call Fortran subroutines.

I have more to say on composability, particularly on best of breed component approaches, and the difference between low code and model driven. Check out my next video next month to hear more about this.

Follow Sandy on her personal blog Column 2.


Sandy Kemsley's Vlog - Treating Your Employees Like Customers
Sandy Kemsley Photo
Vlog

Treating Your Employees Like Customers

By Sandy Kemsley

Video Time: 6 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here for the Trisotech blog with some practical tips for internal facing processes. I spend most of my time with clients looking at core line of business processes that form part of the customer journey in some way. But there are also significant benefits to making your internal processes just as good as your customer facing ones.

Let’s look at a process that most employees go through: onboarding.

Once you’ve decided to hire someone and made them an offer, you probably send them a whole raft of forms to fill out: employee information for your HR system, tax information for your payroll system, background information to be checked for security clearance, employee benefits forms for your benefits insurer, computer and other equipment requisitions, relocation, visa, other work requirements… You get the picture. Lots and lots of forms. And a lot of them ask for the same information over and over again: name, address, birth date, social security number or other government ID… Some of them will be ancient PDFs that require the employee to print them, hand-write the information, sign the forms, and scan them before submitting them on paper or emailing them back in. All of this is a colossal waste of time and effort: the new employee’s time, obviously, spent filling out the same information in different formats on multiple forms; and also the time spent by HR, IT, and admin people retyping that information into other systems. And then, of course, there’s the time spent correcting data, since you can be sure that somewhere along the line someone is going to retype something incorrectly and it will have to be fixed.

Now, in addition to people’s time, this type of process also increases the end-to-end time of the entire onboarding cycle, since there are manual steps along the way. So, your new employee might show up on their first day of work, but they don’t have a desk or computer because their paperwork is still stuck somewhere. Funny story: that actually happened to me once.

Customer onboarding processes get a lot of attention because customers don’t have much loyalty towards companies. If you handed a new customer 10 paper forms, all asking for different variations of the same information, and told them to fill them out by hand and scan them in before you even open their account, they’re going to go somewhere else! Now, your new employee probably isn’t going to walk out just because you gave them a reasonable amount of onboarding paperwork to do, but it’s going to leave them with a bad impression of how well the company is run. And they may not be wrong about that, because you’re showing just how little you value your employees’ time, and how bad you are at finding efficiencies within your operations.

So, how do we fix this?

The big trick is to start thinking of your employees as your customers! If you’re in a department that serves employees rather than external customers (HR, internal IT services, admin), then employees actually are your customers. Their presence justifies your department’s existence, and if you don’t do a great job, your department might be targeted for outsourcing. Since internally facing services are rarely part of a company’s competitive advantage, it’s important that the services are performed correctly and efficiently, but they don’t need to be done internally in order to be done well.

So, what if you redesign your onboarding process around the new employee journey, much like a customer journey, rather than having a collection of forms that you’ve accumulated over the years and just dump onto a new employee? What would this look like?

First of all, the onboarding process needs to be integrated and online. This means one unified place where the new employee can enter any information that you require. This means that you have to stop using those kludgy old PDF forms. Your portal is not a place for them to download the same old forms to be hand-filled, signed, and scanned in, but rather a data-driven interface for directly capturing information and then pushing it into the required systems behind the scenes.

The second thing: never ask the new employee for the same information twice. You already have that information; the portal needs to be able to integrate directly with all the information repositories: the HR systems, payroll systems, and so on. Any information that you’ve already captured from them should be there in one of those systems. You want to be able to pull that back, have them augment it with whatever might be missing, and then initiate processes in other systems, such as IT service management, or even with external agencies, like for doing reference checks or security clearances.
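The "never ask twice" rule can be sketched as a simple merge: combine what the existing systems already know, and compute only the fields that still need to appear on the new employee's form. This is a hedged Python illustration; the system names, field list, and sample records are all invented.

```python
# Fields the onboarding portal ultimately needs (illustrative list).
REQUIRED_FIELDS = {"name", "address", "birth_date", "tax_id", "bank_account"}

def prefill(*system_records: dict) -> dict:
    """Merge records already captured in existing systems (HR, payroll, ...).

    Earlier systems take precedence when the same field appears twice.
    """
    known = {}
    for record in system_records:
        for field, value in record.items():
            known.setdefault(field, value)
    return known

def missing_fields(known: dict) -> list:
    """Only these fields should appear on the new employee's form."""
    return sorted(REQUIRED_FIELDS - set(known))

# Illustrative records pulled from two back-end systems.
hr = {"name": "A. Nguyen", "address": "12 Main St", "birth_date": "1990-05-01"}
payroll = {"name": "A. Nguyen", "tax_id": "123-45-678"}
```

With records like these, the portal would present a form asking for the one missing field instead of re-asking for everything.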

The third thing is to give the new employee visibility into the process, so that they understand what still needs to be completed to finish their onboarding, and whether that prevents them from starting their job. They need to see the cause of any delays, in case there’s something that they need to do to move things along.

Now, when you look at your current onboarding process through the new employee’s eyes, what do you see? Is it some mess of mismatched paperwork and manual processes that consumes hours of their time and requires a lot of duplicated information and effort? Or is it a streamlined, integrated experience that asks for and offers the right information at the right time? Most importantly, what does your onboarding process tell your new employee about your company, and about how you value their time?



Sandy Kemsley's Vlog - Process latency - not always a bad thing
Sandy Kemsley Photo
Vlog

Process latency – not always a bad thing

By Sandy Kemsley

Video Time: 8 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here today for the Trisotech blog with some thoughts on process latency.

On the face of it, process latency sounds like a bad thing. Basically, it means that the process pauses for some reason, usually for input from a person, and that means it takes longer for the process to complete. Longer cycle times often result in lower customer satisfaction, and unnecessary human input can increase the cost while adding to the latency.

Now, there was a very funny XKCD comic a couple of weeks ago about process latency, where we see automated steps that take a few hundred milliseconds bracketing a much longer period of several minutes while someone copies and pastes data between applications. And this doesn’t even consider additional latency caused by queuing times, since the workers doing that step in the middle may not be available 24 hours a day, or they might have a long list of other things to do first. Now, if you’re an avid reader of XKCD like I am, you know that each comic has an extra text pop-up, sort of an easter egg, if you float your cursor over it, and the text for this one states what we process professionals know from long experience: each copy-and-paste activity in the process increases the probability that the process won’t complete until at least the next business day.

What I said earlier, however, is that unnecessary human input increases latency, not just human input. Definitely, the type of input that we see in the XKCD comic, where somebody is copying and pasting between applications, is probably unnecessary. If this is a well-understood activity, then we should be looking at how to automate that step. The best way is through API integration of the applications that are being copied and pasted between. If the APIs are not available, then RPA (robotic process automation) can be used to mimic the worker’s actions by doing the copying and pasting directly as unattended screen commands. In either case, you might use a business process management (BPM) system to orchestrate the API calls and RPA bots, so that you’re automating the whole thing end to end.
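The API-integration option amounts to replacing the human copy-and-paste step with a small orchestration: read from the source system's API, transform the fields, and write to the target system. This is a hedged Python sketch with in-memory stand-ins for the two real API clients; the class names, record fields, and record ID are all invented.

```python
class SourceSystem:
    """Stand-in for the application the worker was copying data FROM."""
    def fetch_record(self, record_id: str) -> dict:
        # A real client would make an API call here.
        return {"id": record_id, "customer": "Acme", "amount": "1250.00"}

class TargetSystem:
    """Stand-in for the application the worker was pasting data INTO."""
    def __init__(self):
        self.rows = []
    def create_entry(self, entry: dict) -> None:
        self.rows.append(entry)

def orchestrate(source: SourceSystem, target: TargetSystem, record_id: str) -> None:
    """Automate what the worker did by hand: copy fields between applications.

    This is the step a BPM system would coordinate; note the field renaming
    and type conversion that humans often get wrong when retyping.
    """
    record = source.fetch_record(record_id)
    target.create_entry({
        "reference": record["id"],
        "client_name": record["customer"],
        "value": float(record["amount"]),
    })
```

The same orchestration shape applies whether the adapters wrap real APIs or RPA bots driving the screens; only the adapter internals change.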

But what if the human input is a bit more complex than just copying and pasting, and it actually requires decision making? Well, we have technologies to deal with this: artificial intelligence, machine learning, decision management. These can be used to automate decisions that previously had to be made by human operators. So the automated activities get a bit smarter and can replace the human activity, reducing process latency: faster cycle times, usually lower costs, and more satisfied customers because they’re getting the same result faster.
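Automating a routine decision can be as simple as making the rules explicit. Here is a hedged sketch of that idea: a very small stand-in for a decision-management model, where straightforward cases are decided automatically and only the hard ones reach a person. The thresholds, field names, and outcome labels are invented for illustration.

```python
def route_claim(claim: dict) -> str:
    """Decide automatically where a claim goes; only hard cases reach a person.

    Explicit rules like these are what a decision model captures -- the
    routine decisions are automated, reducing latency, while judgment
    calls stay with human adjudicators.
    """
    if claim["flagged"]:
        return "fraud-review"          # still a human task
    if claim["amount"] <= 1000:
        return "auto-approve"          # routine case, no human latency
    return "adjudicator"               # knowledge work stays with people
```

In a real deployment these rules would live in a decision engine rather than application code, so the business can change thresholds without redeploying.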

Now, from all of this automation talk you might think that I’m trying to automate everyone out of a job. Definitely, some jobs are being replaced by automation. Take a look over at willrobotstakemyjob.com to see if you might be in the crosshairs. If your job is just data entry, copying and pasting between applications without adding any value, then that’s being automated exactly as I’ve discussed. But if you’re doing some sort of knowledge work that can’t be automated all that well, especially in processes that directly impact the customer journey, then your manual input may be the secret sauce behind your company’s success, and you don’t want that to be automated away for the sake of efficiencies. There are a lot of processes that need the human touch. There will be automation applied to parts of these, because a skilled knowledge worker doesn’t need to also be doing the boring copy-and-paste part of their work. I’d routinely see this sort of thing at the desks of, for example, insurance claims managers who have to manually create letters to clients by copying and pasting data from green screens into a Word document, when they could be spending their time applying human judgment where it’s required to adjudicate claims. Now, a key part of process design is understanding that distinction: knowing when and how to use people within a process to the best advantage, and understanding how technology can help those people do the boring bits of their job faster, or even replace the boring bits of their job completely with automation.

Many years ago, I remember working with a banking client who had selected a very automation-focused tool, actually an integration broker, as their process automation tool. Then I chatted with them about how we were going to automate some of the back-office loan approval workflows that still required people in the process. And the person I was speaking to in their architecture group somewhat dismissively referred to these as “human interrupted processes”. To him, the only good process was one that had no people left in it, and was therefore optimized to reduce latency. To me, the best process is one that uses the skills of automation and the unique skills of people together to optimize the customer experience. Faster isn’t always better!

That’s it for today. You can find more of my writing and videos on the Trisotech blog or on my own blog at column2.com. See you next time.



