Flow and the Future of Software Development

Think about the ways in which the software development process resembles the flow of water running downhill. Ideas are the tiny drops of potential value that gather into features and architectures, which then feed backlogs. From there, developers create code and configuration—thus adding value—which then merges with other flows. Work done in the combined flow gains more and more value until it is presented to those who convert all that value into desired outcomes: profit, or perhaps mission success.

OK, it's a bit of a corny analogy, but it represents my mental model of the challenges that developers face while creating any significant enterprise or commercial software system. First, the challenge is to convert potential energy (a worthy idea or demonstrated need) into kinetic energy (the increasing value of the solution to the end user) via a metaphorical force of gravity (the desire for the aforementioned profit or mission success).

The other thing I picture is that water running through a landscape—say, down a beach toward a body of water—will constantly seek to optimize its flow. It will erode obstacles if it can, and route around them if it can't. Every opportunity the water has to reduce the drag between itself and its final destination is taken without hesitation.

This, in my opinion, is analogous to the role of software platforms and toolchains in software development. They play the role of the landscape. Thus, they also create the opportunity to identify where and how the flow of software to production can be improved. The process morphs and recombines, and toil, risk, or both are eroded—or avoided altogether.

Why DevOps is a Flow

Whenever I see data flow like this—from a source to a destination without stopping—I immediately think of event-driven approaches to route, temporarily store (aka queue), and process that data. Timeliness is the critical factor. When your process calls for activity to move as quickly as possible through the necessary steps, it is clearly a use case for a "push" data model, such as events and publish-and-subscribe messaging.
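
To make the push model concrete, here's a minimal sketch in Python of a publish-and-subscribe broker. All the names are hypothetical, and real brokers add durability, queuing, and retries on top of this basic shape; the point is simply that subscribers are notified the moment an event arrives, rather than polling for it.

```python
from collections import defaultdict
from typing import Callable

class Broker:
    """A toy in-memory pub/sub broker (illustrative only)."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # "Push": the producer doesn't wait for anyone to poll; every
        # subscriber is invoked as soon as the event is published.
        for handler in self._subscribers[topic]:
            handler(event)

broker = Broker()
broker.subscribe("builds", lambda e: print(f"deploying {e['artifact']}"))
broker.publish("builds", {"artifact": "app-v1.2.3"})
```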

What our industry has discovered over the last 15-20 years is that timeliness matters in software development, deployment, and operations. DevOps Research and Assessment (DORA) scientist Nicole Forsgren et al.'s foundational study measuring the effectiveness of software approaches, Accelerate: Building and Scaling High Performing Technology Organizations, clearly demonstrates the power of getting quality software experiences to end users quickly. Of the four measurements they identified as correlating strongly with better business performance (as measured by market capitalization and revenue growth), two require timely action for optimal execution: Lead Time and Time to Restore Service.

The one we are interested in here is Lead Time, defined as "the time it takes to go from a customer making a request to the request being satisfied" (p. 14). In other words, reducing the time it takes for a software need to go from being identified to being in production is a huge competitive advantage. This isn't marketing BS—DORA's science bears it out.

"But wait," you might be thinking to yourself. "Isn't flow about cross-organization integration via event-driven architectures? What's inter-organizational about developer tool chains?"

I have two answers to that. The first is that in any medium or large enterprise, software delivery is *always* inter-organizational. The way line-of-business software teams interact with IT teams (such as platform or infrastructure teams) is very much like the dynamics of two businesses engaged in a transaction. Service needs to be offered (or negotiated), and interfaces must be provided to enable consumption of that service.

The second answer is that it seems clear now that any development pipeline and operational environment built in the public cloud (including a pipeline that manages serverless functions) is inherently multi-organizational. At the very least, a development team is exchanging events with services provided by the cloud provider. That may be the deployment environment (e.g. AWS), the pipeline manager (e.g. CloudBees), or the source code or container repositories (e.g. GitHub or Azure Container Registry).

This means that development and operations platforms and toolchains are increasingly going to depend on well-defined interfaces and protocols by which they interoperate. I argue that the conditions are ripe for adopting, and even influencing, a common standard for all cross-organizational event streaming. I believe that DevOps will be one of the first and most influential catalysts for an eventual World Wide Flow.

It's Already Happening

In fact, if you take a look at the new DevOps tooling coming out of the Kubernetes community, you'll see that several projects now support a CNCF standard, CloudEvents. CloudEvents has the strong inside track on being the standard metadata protocol for events in the World Wide Flow. It is a simple representation of useful metadata for identifying, evaluating, and routing events over a number of different connection protocols (like HTTP, MQTT, and AMQP). It is really well thought out, and is the default metadata standard for the Knative toolset.
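
To give a sense of just how simple that representation is, here is what a CloudEvent (spec v1.0) looks like in its structured JSON form, sketched as a Python dict. The first four attributes are required by the spec; the rest are optional, and the event type, source, and payload values here are invented for illustration.

```python
import json

event = {
    "specversion": "1.0",                   # required: CloudEvents spec version
    "id": "8a2f1c9e-0001",                  # required: unique per source
    "source": "/pipelines/build/42",        # required: URI-reference identifying the producer
    "type": "com.example.build.finished",   # required: reverse-DNS event type
    "time": "2021-03-15T12:00:00Z",         # optional: event timestamp
    "datacontenttype": "application/json",  # optional: describes "data"
    "data": {"artifact": "registry/app:1.2.3", "status": "passed"},
}
print(json.dumps(event, indent=2))
```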

It is also being adopted by other tools, like Tekton (a CI/CD framework) and Argo Events (an event-driven dependency manager for Kubernetes). By standardizing on CloudEvents, projects allow their clients to use the curated libraries available for a number of programming languages and connection protocols, eliminating significant toil and risk for developers.
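
As a sketch of what those libraries buy you, here's how an event might be published over HTTP using the CNCF's Python SDK (the `cloudevents` package on PyPI), following its documented pattern. The broker URL and event attributes are hypothetical; the SDK fills in the required `id`, `specversion`, and `time` attributes for you.

```python
from cloudevents.http import CloudEvent, to_binary
import requests

attributes = {
    "type": "com.example.build.finished",
    "source": "/pipelines/build/42",
}
data = {"artifact": "registry/app:1.2.3", "status": "passed"}
event = CloudEvent(attributes, data)

# In HTTP "binary" mode, the metadata travels as ce-* headers and the
# payload stays in the request body.
headers, body = to_binary(event)
requests.post("https://broker.example.com/events", data=body, headers=headers)
```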

I would expect to see more DevOps tooling adopt the CloudEvents standard and its libraries over the next 2-3 years, as well as intentional methods for integrating the tools at the payload format level. The result will be a set of services you can select from and string together much like Linux commands can be connected by pipes (the "|" symbol on the command line).
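
To illustrate the pipe analogy, here is a hypothetical sketch of that kind of composition: each stage consumes an event and emits one for the next stage, just as each command in a shell pipeline consumes and produces a text stream.

```python
from functools import reduce
from typing import Callable

Stage = Callable[[dict], dict]

def pipe(*stages: Stage) -> Stage:
    """Compose stages left to right, like `lint | test | deploy`."""
    return lambda event: reduce(lambda e, stage: stage(e), stages, event)

def lint(event: dict) -> dict:
    return {**event, "linted": True}

def test(event: dict) -> dict:
    return {**event, "tests": "passed"}

def deploy(event: dict) -> dict:
    print(f"deploying {event['artifact']}")
    return event

pipeline = pipe(lint, test, deploy)
pipeline({"artifact": "app-v1.2.3"})
```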

Composable build environments will adapt much more readily to new programming architectures and paradigms, such as serverless functions, AI-driven process development, or even things we have yet to conceive. At least until Internet-based applications are replaced by something new…

Understand Flow to Understand New DevOps Models

If all of this is a little hard to picture today, my book (#flowarchbook) is available to shed some light on the subject. I like to think I give a pretty good overview of the types of use cases and programming models that will enable flow, and that flow will further enable in the future. Please consider purchasing a copy if you are interested.

As always, I write to learn. Please feel free to ask questions or leave feedback in the comments below, or on Twitter where I am @jamesurquhart.
