The world of software development welcomes new methodological approaches and solutions on a frequent basis. These quickly become buzzwords, and one of the latest to come into vogue is “Value Stream Management”.
As with anything new, it is not difficult to find materials online explaining why adopting a value stream management solution is important and why now is the time to evaluate it seriously for your organization. It is not my intention to join the chorus, but in the next few lines I would like to share some considerations drawn from my experience with customers.
I must say that, in general, I am reluctant to believe that technology alone can solve the problems of those who are called upon to develop and manage software production chains. Clearly, not everyone thinks this way, which leads, for example, to a generalized and perhaps not very pragmatic ‘injection’ of ‘Artificial Intelligence’… everywhere.
Instead, I find many examples where the wise, pragmatic adoption of good technology, combined with the right methodological choice, has made the difference. Companies like Uber, Flixbus, and Airbnb, to name the most frequently cited ones that have truly ridden the wave of digital disruption, certainly succeeded thanks to innovative ideas and great entrepreneurial spirit, but they could not have achieved their innovative enterprises without a balanced use of technology and methodology.
When introducing the topic of value stream management, it is important to position it properly as a “solution” in which technology and methodological approach bring tangible value. In which areas? Why now? Even a superficial glance reveals a number of considerations:
- How much would I want, and above all how much would it be worth, to bring my production chains “under control” by means of appropriate metrics?
- What would I give for a “visual” (i.e., immediate) view of the performance of my processes? For “at a glance” control over what goes wrong? To make my processes more efficient by eliminating wasted time, or to improve release times by up to 90%?
These considerations can irritate those who have already been doing very well for years, using the right methods and wisely adopting the necessary technologies: the underlying assumption would seem to be that, until now, we have been working badly. In reality, what drives the adoption of new approaches is, as always, a changed context. Nowadays many organizations work ‘Agile’: some at full scale, some have just started, and others are considering the option (one that, beyond a certain scale, cannot be postponed for long). The ‘Agile’ approach, although not trivial to realize fully, opens up the possibility of full governance, at the level of ‘business metrics’, over the factory floor.
This opens up the possibility of “instrumenting” the software factory with the same logic and vision with which, years ago, car manufacturers implemented control systems that make it possible to produce 200 customizations of the same car on the same line… while maintaining profitability. The only question now is how to implement this model in software factories without hurting ROI and overall productivity. What is the starting point?
Certainly, to date, many investments have been made in the DevOps area and elsewhere, both in vendor solutions and in open source (learning along the way that open source is not entirely free), as well as in consulting activities at various levels. The first point, therefore, is to respect and safeguard previous investments. The challenge is to build, from this starting point, a mechanism that collects and makes available all, and only, the information useful for making business (investment) decisions based on objective data. The current economic situation means that many companies are reducing their investments without interrupting existing development, while others need to invest now and get results quickly. In both scenarios, there is a need to move.
Tools that can confirm, or prompt a review of, initial decisions made on one’s own analytical capacity, backed by objective data, are more necessary than ever to reduce investment risk. Afterward, however, you need to understand what value the investment has actually brought to your business. The same tool that helped you make the decision can quickly and objectively assess the value delivered. This is certainly one of the pluses of adopting a value stream management solution.
Some customers with value stream management solutions are taking a step-by-step approach to investment, with the aim of reducing investment risk. How? By starting with a small investment in a pilot project (little money, spent quickly), evaluating the results, and making the right decisions. Those decisions can be:
- Abandoning the project because it proves not to add value to the business; the investment made and the time spent were contained, and we have applied a core principle of DevOps and Agile: fail early, fail fast.
- Confirming the value of the project; with data that help me review certain decisions and a more accurate idea of the time needed to adopt the solution, I can now invest with the certainty of having reduced the risks.
But there is a transformation taking place that is in fact changing the world of application development, and it can be summarized as a transition from Just-in-Case (I size for the worst case) to Just-in-Time (I adapt quickly as I go). This change of approach is not painless, but if properly implemented it can certainly lead to significant competitive advantages. The shift from Just-in-Case to Just-in-Time is already happening with the advent of the cloud, which grew by leveraging the simplicity of provisioning hardware and software, paying only for the resources actually used, and coping with peaks in processing demand by paying for the extra capacity only for the time strictly necessary.
Problems? The cost of cloud solutions can increase significantly for companies in a short time. Why? The drive to cut costs quickly leads many to acquire tools, analysts, and developers at the lowest possible price, while still targeting the time, cost, and quality of what is released. The result, setting aside quality for a moment, can only be that applications developed under these constraints require more and more resources to run.
A cloud that scales quickly and transparently, both horizontally and vertically, without problems is itself a problem, because the developer has no perception of the cost. At higher levels, however, this cost increase is starting to be addressed, and many articles now propose adding performance tests to the pipeline in the same way as API and functional tests.
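As a sketch of what such a pipeline gate could look like, the hypothetical snippet below times a stand-in request handler and fails when the 95th-percentile latency exceeds a budget. The function names, sample counts, and thresholds are illustrative assumptions, not any specific tool’s API.

```python
import time

# Hypothetical function under test; in a real pipeline this would
# exercise the service being released, not a local stub.
def handle_request(payload: dict) -> dict:
    return {"ok": True, "echo": payload}

def check_latency_budget(budget_ms: float = 50.0, samples: int = 100) -> float:
    """Fail the pipeline if p95 latency exceeds the budget (in ms)."""
    timings = []
    for i in range(samples):
        start = time.perf_counter()
        handle_request({"id": i})
        timings.append((time.perf_counter() - start) * 1000.0)
    timings.sort()
    # Nearest-rank style p95 over the sorted sample.
    p95 = timings[int(0.95 * (len(timings) - 1))]
    assert p95 <= budget_ms, f"p95 latency {p95:.2f} ms exceeds {budget_ms} ms budget"
    return p95

p95 = check_latency_budget()
print(f"p95 latency: {p95:.4f} ms")
```

Run as one more stage next to API and functional tests, a failing assertion blocks the release just like a failing functional check would.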
It is now accepted by many that software development processes should be approached not in deterministic terms, easily measurable and predictable, but in probabilistic terms that involve greater variance in their progression. So the Gantt chart that used to be reviewed weekly in project meetings is no longer acceptable, because more and more decisions must be taken as soon as a difficulty arises. This can only be done if I have made my processes visible and if I have the metrics I need to analyze the progress of the project in real time. That is why value stream management has become a necessity: the flexibility to evaluate individual processes and obtain metrics suited to my role, with a Just-in-Time approach, is a winner.
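To make the idea of real-time, role-appropriate metrics concrete, here is a minimal sketch, assuming hypothetical work-item records exported from a tracking tool, that computes two classic value stream metrics: lead time (request to delivery) and throughput (items delivered per week). Field names and dates are invented for illustration.

```python
from datetime import datetime

# Hypothetical work-item records; the schema is an illustrative
# assumption, not any specific product's export format.
items = [
    {"id": "A-1", "created": datetime(2023, 3, 1), "released": datetime(2023, 3, 8)},
    {"id": "A-2", "created": datetime(2023, 3, 2), "released": datetime(2023, 3, 5)},
    {"id": "A-3", "created": datetime(2023, 3, 4), "released": datetime(2023, 3, 15)},
]

def lead_times_days(items):
    """Lead time per item: elapsed days from 'created' to 'released'."""
    return [(it["released"] - it["created"]).days for it in items]

def throughput_per_week(items, start, end):
    """Throughput: items released per week within the observation window."""
    weeks = max((end - start).days / 7.0, 1e-9)
    done = sum(1 for it in items if start <= it["released"] <= end)
    return done / weeks

lt = lead_times_days(items)
avg = sum(lt) / len(lt)
tp = throughput_per_week(items, datetime(2023, 3, 1), datetime(2023, 3, 15))
print(f"average lead time: {avg:.1f} days")
print(f"throughput: {tp:.2f} items/week")
```

Fed continuously from the tools already in use, metrics like these are what turn the weekly Gantt review into an at-a-glance, real-time view of the flow.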