We are all too aware that teams building software products carry a hidden overhead that they must manage. Everyone involved understands that you are building a machine to solve your customers' problem, and that this machine is an asset that must be invested in and maintained in order to maximize its value. Far fewer people recognize that to do this successfully, you must also own a second machine, whose only purpose is to build the first.
The first machine, your product, is highly visible and commands all the attention and investment.
The second machine, which is critical to maintaining the value of the first, often sits in darkness, neglected and poorly understood, despite being essential to building all the value in your business.
When you design your product, you can choose from an almost infinite palette of reusable, modular libraries and frameworks. When you design your build system, however, your choices are far more limited: you are often constrained to accepting whatever features your current cloud vendor offers, or to building something from first principles that must then be maintained with scant resources.
Although many tools exist in the CI/CD space, until now they have had no standardized way to talk to each other. The Continuous Delivery Foundation’s CDEvents project was created to solve this interoperability problem, marking a new evolution in CI/CD capabilities.
CDEvents builds upon the CDF’s work on Continuous Delivery best practices to define a common language for events in the CI/CD ecosystem, permitting the decoupling of pipeline descriptions from their physical implementations. A decoupled CI/CD architecture is easier to scale and makes pipelines more resilient to failure, which is critical as end-to-end software production and delivery pipelines grow more and more complex, not least in a microservices architecture with thousands of independent pipelines.
Using CDEvents makes it simple to connect workflows from different systems, greatly increasing the pace at which you can migrate to a fully automated Continuous Delivery process for your organization.
Having standardized events everywhere makes it much easier to improve the observability and auditability of your workflows across disparate technology platforms. A common descriptive language makes it much simpler for your staff to understand all of your workflows, regardless of which team or platform they support, and has the added benefit of making it easy to switch tooling or infrastructure vendors, should you need to do so.
The CDEvents project’s mission is to standardize an event protocol specification that caters to technology-agnostic machine-to-machine communication in CI/CD systems. This specification is published, reviewed, and agreed upon between relevant Linux Foundation projects/members.
CDEvents are declarative events. By “declarative”, we refer to events through which the producer sends information about an occurrence, without needing knowledge of how the event will be used on the receiving side, or even who will receive it.
In contrast, imperative events are sent with the intent of triggering a specific reaction, like “start a pipeline” or “deploy an application”. Imperative events create coupling between producer and consumer, as they typically require some form of acknowledgement to be sent back to the producer by the consumer of the original event.
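The distinction can be sketched with a minimal in-memory publish/subscribe loop: the producer announces a fact about something that already happened, without knowing which consumers, if any, will react, and no acknowledgement flows back. The event type string below loosely imitates the CDEvents `dev.cdevents.<subject>.<predicate>` naming scheme and is illustrative rather than copied from the specification.

```python
from collections import defaultdict
from typing import Callable

# Illustrative in-memory event bus: type name -> list of handlers.
_subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def subscribe(event_type: str, handler: Callable[[dict], None]) -> None:
    _subscribers[event_type].append(handler)

def publish(event: dict) -> None:
    # Declarative, fire-and-forget: the producer gets no acknowledgement
    # and has no knowledge of who (if anyone) is listening.
    for handler in _subscribers[event["type"]]:
        handler(event)

# A CI tool announces that a pipeline run finished; a second, unrelated
# tool happens to be listening. Names are hypothetical.
received = []
subscribe("dev.cdevents.pipelinerun.finished",
          lambda e: received.append(e["subject"]))

publish({"type": "dev.cdevents.pipelinerun.finished",
         "subject": "my-build-pipeline/run-42"})
```

An imperative design would instead call the consumer directly ("start a pipeline") and wait for its reply, coupling the two systems together.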
Use cases are key to understanding CDEvents. When defining CDEvents and their attributes, we must know what minimal set of information is needed to satisfy a particular use case.
The primary use case is interoperability, making it possible for one CI/CD tool to consume events produced by another without the need for static, imperative definitions. This use case focuses on how to make CI/CD tools work together in a more automated, streamlined manner.
The secondary use case is observability and metrics. Essential to improving the Continuous Delivery workflow is the ability to collect events from different CI/CD tools. Common events, reported by all involved tools, permit a unified view of an end-to-end process that spans many tools.
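To illustrate the observability use case, here is a hedged sketch of a collector that derives a duration metric from start/finish events, regardless of which tool emitted them. The event field names (`type`, `subject`, `timestamp`) are simplifying assumptions, not the exact CDEvents schema.

```python
# Because every tool reports the same event shapes, one collector can
# compute cross-tool metrics. Field names here are illustrative.
starts: dict[str, float] = {}
durations: dict[str, float] = {}

def collect(event: dict) -> None:
    subject, ts = event["subject"], event["timestamp"]
    if event["type"].endswith(".started"):
        starts[subject] = ts
    elif event["type"].endswith(".finished"):
        durations[subject] = ts - starts[subject]

# Two events, possibly from two different CI/CD systems, one vocabulary:
collect({"type": "dev.cdevents.pipelinerun.started",
         "subject": "run-1", "timestamp": 100.0})
collect({"type": "dev.cdevents.pipelinerun.finished",
         "subject": "run-1", "timestamp": 160.0})
```

The same pattern extends to lead time, failure rate, or any metric that can be computed by correlating events on a shared subject identifier.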
The goal of the CDEvents specification is to define interoperability between systems that allow services to produce or consume events, where the producer and consumer can be developed and deployed independently, in isolation. A producer can generate events before a consumer is listening, and a consumer can express an interest in an event or class of events that is not yet being produced.
The CDEvents specification extends the CloudEvents specification, providing a vocabulary appropriate to CI/CD events. This vocabulary is built upon the terminology created and maintained within the CDF interoperability special interest group.
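As a rough sketch of what "extends CloudEvents" means in practice, the example below wraps a CI/CD occurrence in a standard CloudEvents envelope. The envelope attributes (`specversion`, `id`, `source`, `type`) come from the CloudEvents specification; the payload layout under `data` is a simplified assumption for illustration, not the actual CDEvents schema.

```python
import json

# A CDEvents-style event riding inside a CloudEvents JSON envelope.
event = {
    "specversion": "1.0",                         # CloudEvents spec version
    "id": "271069a8-fc18-44f1-b38f-9d70a1695819", # unique event id
    "source": "/staging/pipelines",               # producing system (hypothetical)
    "type": "dev.cdevents.pipelinerun.finished",  # CDEvents-style vocabulary
    "datacontenttype": "application/json",
    "data": {
        # Simplified, assumed payload shape:
        "subject": {"id": "my-pipeline/run-42",
                    "content": {"outcome": "success"}},
    },
}

serialized = json.dumps(event)  # ready to send over any CloudEvents transport
```

Because the envelope is plain CloudEvents, any existing CloudEvents-aware broker or sink can carry CDEvents without modification.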
The intention is to provide an Open framework in which you can describe your desired Continuous Delivery processes, in line with best practices and expressed in a familiar form that is aligned to your other Cloud-based event-driven processes.
CDEvents introduces a set of common abstractions to aid in this process.
- Source Code Control Events: represent events associated with the management of source code assets
- Continuous Integration Events: represent events related to building, testing, packaging, and releasing software artifacts
- Continuous Deployment Events: represent events related to environments where the artifacts produced by the integration pipelines are intended to run
- Core Events: represent shared abstractions such as Task Runs and Pipeline Runs
There is interest in implementing CDEvents in many popular CI/CD tools, including ArgoCD, Flux, Jenkins, Jenkins-X, Keptn, Knative, Ortelius, Shipwright, Spinnaker, and Tekton, and some of them have already started proof-of-concept implementations.