Executable Workflow Engines Part 1 - Are they for everyone?

by Joel Grenon, published on 2021-03-18

Workflow engines offer an appealing value proposition: orchestrate your services and make things more visual, simple and robust. Let's explore their impact and whether they fit every use case and budget.

There's a big trend in cloud-native development toward creating and deploying finer and finer-grained logic components: independent binaries, each with its own data context and a clear, documented API. That is more or less the definition of micro-services, but I would say we're getting closer to nano-services as we realize the value of segmenting builds, tests and features, and of being able to upgrade them independently of each other. Well, that's the theory at least! In real life, APIs aren't always perfectly documented and designed, and maintaining a separate data store for each service quickly makes data modeling, versioning and validation challenging. Inter-dependencies, if not properly planned (which is rarely the case), become a nightmare (insert spaghetti image here!).

So, to improve on this (potential) mess, we hear more and more about workflow engines being adopted to orchestrate and put some order into this ecosystem of small services. The idea is interesting: visually design event/signal-driven workflows, maintain the state of many parallel instances separately, each deciding which service to call depending on its internal state. The perfect structure to assemble a number of service calls, at the right time, in the right sequence and in the right context. Add resilient error management and failsafe processes and you get an ecosystem of executable workflow instances supported by a myriad of isolated, low-level, independent services. What could go wrong?
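To make the idea concrete, here is a minimal sketch of such an event-driven workflow instance. All names here (states, signals, the service stubs) are hypothetical, not tied to any particular engine: each instance carries its own state and decides which isolated service to call when a signal arrives.

```python
# Minimal sketch of an event-driven workflow instance. Each instance keeps
# its own state; incoming signals drive transitions and service calls.
# All states, signals and service names are hypothetical.

class OrderWorkflow:
    # state -> {signal: (next_state, service_to_call)}
    TRANSITIONS = {
        "created": {"payment_received": ("paid", "reserve_inventory")},
        "paid":    {"inventory_reserved": ("ready", "schedule_shipment")},
        "ready":   {"shipped": ("done", None)},
    }

    def __init__(self, instance_id):
        self.instance_id = instance_id
        self.state = "created"

    def signal(self, event, services):
        """Advance this instance if the event is valid in the current state."""
        transition = self.TRANSITIONS.get(self.state, {}).get(event)
        if transition is None:
            raise ValueError(f"{event!r} not allowed in state {self.state!r}")
        next_state, call = transition
        if call is not None:
            services[call](self.instance_id)  # delegate to an isolated service
        self.state = next_state
        return self.state


# Each parallel instance carries its own state independently.
services = {
    "reserve_inventory": lambda oid: print(f"reserving inventory for {oid}"),
    "schedule_shipment": lambda oid: print(f"scheduling shipment for {oid}"),
}
wf = OrderWorkflow("order-42")
wf.signal("payment_received", services)  # -> "paid"
```

A real engine adds persistence, retries and compensation on top of this core loop, but the essence is the same: a state table deciding which service to call, when, and in which context.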

While workflow engines offer much better visibility into complex logic, providing a visual representation and the capacity to monitor each instance's success or failure, they are based on the assumption that processes have defined steps: a simple or complex procedure that must be followed when certain conditions are met. So their capacity to properly orchestrate services is closely related to the quality of the modeling of these steps. This is where I see a potential problem. This logic looks like business logic, smells like business logic, but acts like programming logic. So the person responsible for identifying the business flow and conditions is not the same as the one actually designing the workflow. They are two distinct skill sets in my mind, and this is a critical activity in your software development process that will cause a lot of pain if not properly handled. The capacities and constraints of the underlying service ecosystem have critical impacts on workflow design, and they are, most of the time, outside the understanding of business people. This gap is where you'll spend a lot of your effort.

Another aspect to consider is that procedural workflow engines (nearly all of them, including BPMN products like Camunda) force you to create a snapshot of your business process, describing all steps in detail before they can be implemented. This level of understanding of one's business is not available to all organizations. It's expensive to achieve and, once achieved, it becomes a barrier to change, which causes the system either to limit users in what they can do, to force them to adjust manually to a new, emerging or exceptional process, or to require a continuous-improvement team responsible for constantly adjusting these workflows. This impact is, evidently, inversely proportional to the stability of your business processes. But at today's fast-moving pace, you have to plan for change, to be able to adapt quickly, and to minimize the cost of any acquisition down the road. Workflow engines are good, but they affect your capacity to evolve as an organization. They may be easier to maintain than complex logic inside Java classes, but I think modern organizations should look beyond procedural workflows to solve their nano-service orchestration complexity.

That being said, you should assess these points before jumping on the workflow-based micro-service orchestration bandwagon:

  • Who will design these processes in my organization? Internally? Consulting?
  • Are our micro-services capacity and constraints documented enough to ease the flow design?
  • How will we manage process changes? Do we model only stable ones? What do we do with the more unstable or changing ones?
  • Do we plan on acquiring companies or technologies that would affect our process designs? How will we handle this?
  • How do we model our process state to minimize data conversion and evolution issues? Process states inject additional data domains into your system; you need to treat them the same way you would your micro-services' data.
  • Think about securing the workflow engine from day one. Execution permissions can become very tricky.
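On the process-state point above, one practical consequence is that persisted instance state needs explicit schema versioning and migration, just like a micro-service's own store. A hypothetical sketch (field names and versions are purely illustrative):

```python
# Hypothetical sketch: treat workflow instance state as a versioned data
# domain. Old persisted records are upgraded explicitly on read, the same
# way you would migrate a micro-service's data store.
from dataclasses import dataclass, field

SCHEMA_VERSION = 2

@dataclass
class ProcessState:
    instance_id: str
    current_step: str
    schema_version: int = SCHEMA_VERSION
    variables: dict = field(default_factory=dict)

def migrate(raw: dict) -> ProcessState:
    """Upgrade a persisted state record to the current schema version."""
    if raw.get("schema_version", 1) < 2:
        # Illustrative v1 -> v2 change: the step used to be stored as "step".
        raw["current_step"] = raw.pop("step", "start")
        raw["schema_version"] = 2
    return ProcessState(**raw)

# A v1 record read back from storage gets upgraded transparently.
state = migrate({"instance_id": "wf-1", "step": "review"})
```

Without this discipline, long-running instances started under an old process definition become unreadable the moment the workflow evolves.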

In my next article, I will study in more detail how Netflix moved from their procedural workflow engine (Conductor) to Plato, their new forward-chaining rules engine, which provides a more declarative and flexible take on managing complex flows.
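To preview the contrast, here is a toy forward-chaining loop. This is purely illustrative and not Plato's actual API: instead of a fixed step sequence, rules fire whenever their conditions match the accumulated facts, and the flow emerges from the rules.

```python
# Toy forward-chaining loop: rules fire whenever their condition matches the
# accumulated facts; no step sequence is modelled up front.
# Entirely illustrative; not Netflix Plato's actual API.

def run_rules(facts, rules):
    """Apply rules until no rule adds any new fact (a fixpoint)."""
    fired = True
    while fired:
        fired = False
        for condition, action in rules:
            new = action(facts) if condition(facts) else None
            if new and not new <= facts:  # fire only if it adds something
                facts |= new
                fired = True
    return facts

# Each rule: (condition over facts, facts it derives). Names are made up.
rules = [
    (lambda f: "payment_received" in f, lambda f: {"inventory_reserved"}),
    (lambda f: "inventory_reserved" in f, lambda f: {"shipment_scheduled"}),
]
facts = run_rules({"payment_received"}, rules)
# facts -> {"payment_received", "inventory_reserved", "shipment_scheduled"}
```

Note what is absent: no predefined snapshot of the whole process. Adding a rule changes behavior without redrawing a diagram, which is exactly the flexibility argument against procedural engines made above.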
