Click here to read part 1 of the series
Click here to read part 2 of the series
Click here to read part 3 of the series
Click here to read part 4 of the series
Get everything and more in the Data-Driven DevOps eBook
In many ways, data-driven DevOps allows us to have an intimate look into the way a software business operates. Think of this type of strategy as an industrial catwalk over a manufacturing plant. Data provides remarkable visibility by spanning the entire software delivery life cycle, from idea all the way to the clients. This visibility is extremely beneficial. Not only does it allow us to know what is happening within an organization – it provides us the opportunity to reduce re-work, lower costs, and provide governance to the business to help mitigate risk.
It is important to understand that data-driven DevOps and Value Stream Management are not a “nice to have” set of functionalities. They are a “need to have” for businesses that want to get serious about solving the following issues.
Let’s start with a few questions. Are the features that will delight your customers and improve revenue actively being worked on? Are they sitting in the backlog? Are they ahead of schedule or behind schedule? When will you be able to deliver the value you committed to the client?
Put simply, business alignment is all about making sure that the development work that is being done by the engineering teams is as aligned as possible with the overall business goals. Companies with alignment between development and the business goals deliver better quality software, faster. Why? Because it is very clear what the business’s main priorities are. Individual Contributors are not forced to make “should I, or shouldn’t I” decisions several times a day.
Many companies are no longer mandating what development tools or practices engineering teams use. The hope is that if the development organization chooses the tools they want to use daily, they will be happier and more productive. While there is truth to this, it creates a real challenge for the business. Gaining visibility into what actual work is being done is already a very challenging problem. Trying to aggregate that data across hundreds of development teams, each using their own set of processes and tools, is an even more daunting manual task.
Data-driven DevOps provides unprecedented capabilities into business alignment. By visualizing the work flowing through a value stream, development managers, product owners, release engineers, executives – whoever needs to see it – can find out what is currently being worked on. Imagine being able to search against all the work that is both in-progress and yet to be developed (your backlog) and getting clear reports into where your actual business value sits.
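As a minimal sketch of what such a report could look like, imagine work items pulled from several tracking tools into one normalized list. All field names, initiative names, and point values below are illustrative assumptions, not any specific product's schema:

```python
from collections import defaultdict

# Hypothetical work items aggregated from several tracking tools.
# Fields and values are illustrative, not a real vendor's API.
work_items = [
    {"id": 1, "state": "in_progress", "initiative": "checkout-redesign", "points": 8},
    {"id": 2, "state": "backlog",     "initiative": "checkout-redesign", "points": 13},
    {"id": 3, "state": "in_progress", "initiative": "search-speedup",    "points": 5},
    {"id": 4, "state": "backlog",     "initiative": "search-speedup",    "points": 3},
]

def value_report(items):
    """Summarize where planned business value sits: active work vs. the backlog."""
    report = defaultdict(lambda: {"in_progress": 0, "backlog": 0})
    for item in items:
        report[item["initiative"]][item["state"]] += item["points"]
    return dict(report)

print(value_report(work_items))
```

A report like this makes it immediately visible that most of the value for one initiative is still sitting in the backlog, which is exactly the kind of alignment question the business needs answered.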
In my experience, development teams want to do the right thing. They want to be focused on the features that are going to provide value because they are often the most fun and challenging problems to solve. Often, our best individual contributors are pulled in several directions, taking them away from the work that is closest to delighting our customers and delivering real business value. This work is sometimes even more challenging because it is unplanned. Unplanned work can suffocate productivity, and heavily tax an organization’s ability to deliver business value on time.
While we cannot completely wipe out unplanned work, it is extremely important that we are able to manage it. Unplanned work presents a number of challenges:
- It takes focus away from planned, prioritized work
- The effort to complete the work is unknown
- There is often no clear definition for success
- It opens the door for “scope creep”
How do companies allocate resources and time for something they did not plan? More importantly, how do they take a proactive approach to actively reducing the amount of unplanned work that comes their way? The answer, as you may have guessed, exists within the data of the tools we are using every day. In the same way that we use code commits and work items to track planned work, we can use the data coming from our support and work item management tools to start to visualize unplanned work and the effect it has on our ability to deliver business value.
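One way to sketch this, assuming effort records have been merged from a work-item tracker (planned work) and a support tool (unplanned work), is a simple per-team ratio. The record fields and team names here are hypothetical:

```python
# Hypothetical effort records merged from a work-item tracker ("tracker",
# planned work) and a support tool ("support", unplanned work).
records = [
    {"source": "tracker", "team": "payments", "hours": 30},
    {"source": "support", "team": "payments", "hours": 20},
    {"source": "tracker", "team": "search",   "hours": 45},
    {"source": "support", "team": "search",   "hours": 5},
]

def unplanned_ratio(records, team):
    """Fraction of a team's total effort consumed by unplanned (support-driven) work."""
    planned = sum(r["hours"] for r in records
                  if r["team"] == team and r["source"] == "tracker")
    unplanned = sum(r["hours"] for r in records
                    if r["team"] == team and r["source"] == "support")
    total = planned + unplanned
    return unplanned / total if total else 0.0

print(unplanned_ratio(records, "payments"))  # 40% of effort is unplanned
```

Tracked over time, a ratio like this turns a vague sense of "we keep getting interrupted" into a trend the business can act on.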
An organization’s best individual contributors are usually the ones who suffer most from unplanned work. They are the most knowledgeable, are considered the best equipped to handle critical support issues, and have the most experience helping clients through challenging scenarios. The problem with relying only on top performers is that those same individual contributors are most likely the ones shouldering the majority of the work on high-value, very important business deliverables. Therefore, if your default move is to always bring in your most skilled employees to put out the fire, planned work suffers.
We can improve this situation by having visibility into which individual contributors have spare capacity. Identifying individuals who can pick up the planned work that is driving business value while the team temporarily shifts focus also gives other team members an opportunity to grow in their roles.
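A minimal sketch of that visibility, assuming we know the hours already assigned to each contributor for the current iteration (the names, hours, and the 40-hour capacity are all illustrative assumptions):

```python
def available_capacity(assignments, capacity_per_person=40):
    """Return (contributor, spare hours) pairs, most available first."""
    spare = {person: capacity_per_person - hours
             for person, hours in assignments.items()
             if hours < capacity_per_person}
    return sorted(spare.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical assigned hours per contributor for the current iteration.
assignments = {"alice": 40, "bob": 25, "carol": 32}
print(available_capacity(assignments))  # bob has 15 spare hours, carol has 8
```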
Unplanned work is unexpected, it is difficult, it takes effort away from the main business priorities, and perhaps more than anything, it is a drain on the culture of an organization. Unplanned work can lead to unsustainable work practices and an unhealthy culture. You can’t eradicate unplanned work, but you can use data to diminish its effects and preserve your organization’s culture and production value.
Validate Quality and Security Initiatives
How do we prevent security and quality initiatives from dying on the vine, failing too early, or never coming to fruition? Managing them might not be daunting for an individual team or business unit, but doing so across an entire organization is a huge undertaking.
As mentioned earlier, many companies are letting development organizations pick and choose the tools they feel most comfortable with in the hopes that it will improve productivity. While there is some benefit to this, it also creates an entirely new set of challenges. How is an organization supposed to track and enforce security and quality initiatives across dozens of tool sets and decades of technologies? By aggregating the quality and security results across all an organization’s applications and teams, businesses can start to see which initiatives are healthy and which ones could use additional nurturing.
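A sketch of that aggregation might classify each team's initiative from normalized scan results. The fields, teams, and thresholds below are illustrative assumptions rather than any particular scanner's output:

```python
from statistics import mean

# Hypothetical scan results per team, normalized from many different tools.
scan_results = [
    {"team": "payments", "coverage": 0.85, "critical_vulns": 0},
    {"team": "payments", "coverage": 0.80, "critical_vulns": 1},
    {"team": "search",   "coverage": 0.55, "critical_vulns": 4},
]

def initiative_health(results, min_coverage=0.75, max_critical=1):
    """Classify each team's quality/security initiative against shared thresholds."""
    health = {}
    for team in {r["team"] for r in results}:
        rows = [r for r in results if r["team"] == team]
        avg_coverage = mean(r["coverage"] for r in rows)
        total_vulns = sum(r["critical_vulns"] for r in rows)
        ok = avg_coverage >= min_coverage and total_vulns <= max_critical
        health[team] = "healthy" if ok else "needs attention"
    return health
```

The point is less the specific thresholds than that every team is measured against the same yardstick, regardless of which tools produced the underlying data.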
Automated Governance to Mitigate Risk
Organizations spend a significant amount of time in meetings planning for, and validating, the “readiness” of a particular release. Release engineers and Change Advisory Board (CAB) members all provide a healthy dose of checks and balances, but this is largely a long, manual process. If we have learned anything from a decade of DevOps best practices, it is that long, arduous, manual efforts are simply begging to be automated. Manual processes, we know from experience, are error prone.
Providing visibility into quality and security metrics is only the first step. The next step is to use the data to protect the business and our clients. The best way to do that is to set up intelligent gating mechanisms that can manage the steady stream of live data coming from the value stream, and make intelligent decisions based on whether or not a certain build, deployment, release, component, feature, or set of business initiatives has been tested and scanned.
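At its core, such a gate is a simple rule: block the release unless every required check has passed. The check names and the release-candidate shape below are hypothetical, chosen only to illustrate the pattern:

```python
def release_gate(candidate, required_checks=("unit_tests", "security_scan", "license_scan")):
    """Automated gate: allow a release only if every required check passed.

    Returns (allowed, blockers), where blockers lists the failing or missing checks.
    Check names here are illustrative assumptions, not a standard.
    """
    blockers = [name for name in required_checks
                if candidate.get("checks", {}).get(name) != "passed"]
    return (len(blockers) == 0, blockers)

# A hypothetical release candidate assembled from value-stream data.
build = {"version": "2.4.1",
         "checks": {"unit_tests": "passed",
                    "security_scan": "failed",
                    "license_scan": "passed"}}

allowed, blockers = release_gate(build)
# allowed is False; blockers names the failed security scan
```

A gate like this replaces a round of CAB review with an automated, auditable decision, while still leaving humans free to override it deliberately rather than by accident.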
Think of the cost savings for your Chief Information Security Officer (CISO) when they can simply look at all the value streams within their organization and know who is following the best security practices put in place by the company! And think of the benefits for release engineers when they can easily identify quality risks that could negatively impact their clients!
The best way to mitigate risk and to consistently improve security and quality is to make sure they are always a business priority. Value Stream Management helps clients keep these initiatives at the forefront of their daily tasks so they are never out of sight and out of mind.
Identify Best Practices
Identifying best practices is an important skill. Best practices help us communicate the most effective way to solve a problem. By looking at all the data across an organization’s value streams, companies can start to identify what is truly a best practice rather than just guessing. Team leads and other stakeholders finally have an opportunity to answer key performance questions. Why is one value stream better at quality and security (or some other technical aspect) compared to the others? Is it because of a lack of training? Are they not using the right tooling? Asking and answering these questions helps an organization get to the root cause of what is working within their company so it can be replicated across the organization, and what needs improvement so it can be addressed quickly.
The other major benefit of using data to identify best practices is that it makes high-performing value streams very apparent. High-performing teams are the result of good, well-defined DevOps practices. Once high-performing teams have been identified within an organization, it is culturally important that we celebrate their work. Their efforts should be held up, applauded, and used as the benchmark for other teams moving forward.
Get the Data-Driven DevOps eBook
Keep learning about the relationship between data and DevOps in my new eBook. Download it here.