
How Do You Measure a Workflow?

I’m Ines, and I work as an AI Engineer at AppliedAI.

Here is how I would describe our product, Opus: Opus is our proprietary large work model. Given any natural-language query, it generates an agent-based workflow, complete with the steps needed to produce the final business outcome.

In other words, Opus creates a step-by-step model that both visually represents and executes a business process.

As AI Engineers for Opus, our main focus is building and refining AI features. We constantly face the question of the quality of Opus's AI functionalities, especially as the product grows and pushes its limits further.

Last summer, we decided to dig into the problem of measuring the quality of a workflow. While this can seem very Opus-specific, it is really about measuring the quality of work, a deeply human challenge that is far from new. We can go back at least to Taylorism, at the end of the 19th century, to see that the measurement of work was a key concern of the capitalist era.

Taylorism, developed by the American mechanical engineer Frederick Taylor, breaks jobs down into simple, repetitive, standardized tasks, maximizing industrial efficiency by analyzing and optimizing workflows. Taylor's scientific management aimed to turn labor efficiency into a measurable science.

We are not Frederick Taylor, and we work in a very different period, but the same seed fed the main question of our latest paper: how can we build a scientific framework to evaluate our workflows and their quality, going back to the roots of what makes work good?

Even in the era of process mining tools, business process management, and constant monitoring of work, we could not find a reliable mathematical model to measure whether a workflow was efficient. Our research helped us gather many pieces of the puzzle, but no solid backbone on which to build a proper evaluation. So we decided to tackle the problem ourselves and propose our own method.

Our model relies mainly on two principles: single responsibility and information hygiene. In business process management, as in the "clean code" fundamentals of software engineering, we found traces of the need for a task to be atomic for the workflow to be efficient. We also encountered the idea of minimizing the visibility of sensitive data: a task should expose only the relevant data, no less, no more. All of this depends on the context of the workflow, so we described the relationship between a task and its workflow context with levels of atomicity, making the measure relative rather than absolute. You can find the full methodology in our paper.
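To give an intuition of the two principles, here is a toy scoring sketch. Everything in it is invented for illustration: the `Task` structure, the per-task scores, and the way they are averaged are not the actual model from our paper, just one simple way such an evaluation could be shaped.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    actions: list        # discrete operations the task performs
    fields_visible: set  # data fields the task can see
    fields_needed: set   # data fields the task actually uses

def atomicity_score(task: Task) -> float:
    """Single responsibility: one action per task is ideal (score 1.0);
    the score decays as a task bundles more actions together."""
    return 1.0 / max(len(task.actions), 1)

def hygiene_score(task: Task) -> float:
    """Information hygiene: penalise both over-exposure (visible but
    unused fields) and under-exposure (needed but hidden fields)."""
    if not task.fields_needed:
        return 1.0 if not task.fields_visible else 0.0
    overlap = len(task.fields_visible & task.fields_needed)
    union = len(task.fields_visible | task.fields_needed)
    return overlap / union  # 1.0 means exactly the right data, no more

def workflow_score(tasks: list) -> float:
    """Average the two per-task scores over the whole workflow."""
    per_task = [(atomicity_score(t) + hygiene_score(t)) / 2 for t in tasks]
    return sum(per_task) / len(per_task)
```

A task that performs a single action and sees exactly the fields it needs scores 1.0; a task that bundles several actions, or that can see sensitive fields it never uses, drags the workflow score down. The real model is relative to context, as described above, whereas this sketch is deliberately absolute to stay short.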

From this research, we built a new AI feature in Opus called "workflow insights". It puts our paper's results to work by evaluating your workflow directly in Opus.

The whole pipeline has the shape of a fully developed project: from our research and reading, to building the model and writing the paper, to the insights feature in Opus. Each stage was necessary, and the result is tangible. Now we know we can always come back to our framework for evaluation; that is an achievement as well as a relief.
