5 Groups of Indicators to Measure R&D Effectiveness
Today, He Mian, a senior technical expert in Alibaba's R&D efficiency department, shares reflections accumulated over many years, in the hope that they will inspire you.
This article gives a clear definition of R&D effectiveness and provides five groups of indicators to measure it, setting the goals for effectiveness improvement and a way to gauge its results. It is also the first in a series of articles on R&D efficiency improvement and product delivery methods, and it establishes the standard against which the delivery methods introduced later will be judged.
Efficiency silos are the biggest problem for R&D efficiency improvement
Product delivery requires collaboration between upstream and downstream functions (e.g., product, development, and testing) and between parallel departments (e.g., front end, back end, and algorithms). The traditional approach focuses on the independent improvement of individual functions and departments. Excessive local optimization, however, often creates efficiency silos that damage overall effectiveness.
What is an efficiency silo? The figure above depicts the common dilemma product delivery faces under traditional development methods: local optimization by each function and department brings a series of problems, such as:
Work prioritization based on local information causes departments and functions to wait on one another, blocking the smooth flow of requirements. For example, when the front, middle, and back offices prioritize work differently, their progress cannot be aligned, and requirements that have already been started cannot be delivered on time.
Batch-style handover of work brings further waiting. To maximize the efficiency of a single link, each function tends to accept and hand over work in batches, such as batch integration and batch handover to testing. This creates further backlogs of requirements and more waiting in the process.
Cross-departmental issues are often not dealt with promptly and effectively. Maintaining shared environments is a typical problem that hinders the smooth delivery of user requirements; clarifying requirements across departments, aligning interfaces, and troubleshooting are other common shared problems that prevent requirements from progressing smoothly.
The above are only some of the problems, and they compound one another. The result is that each department feels busy and "efficient" from its own perspective, yet from a global and business perspective, the system's response to the outside world is very slow. This is the so-called efficiency silo.
Efficiency silo: caused by local optimization, and manifested as every link and department being busy and "efficient" while overall efficiency and response speed remain very low. It is the common crux of R&D efficiency improvement.
The polyline in the figure above traces the delivery process of a single requirement under an efficiency silo: green segments indicate the requirement is being worked on, and red segments indicate it is waiting. The actual workload is not large, yet the delivery cycle is very long, because the requirement spends most of its time waiting. Every link is busy, yet the outside world complains again and again; I believe many readers will recognize this situation.
"The ability to continuously and rapidly deliver value" is the core goal of effectiveness improvement
To improve R&D effectiveness, we must break out of efficiency silos. To do so, the organization must shift its improvement focus from individual resource links to the system as a whole.
The figure above shows the key to effectiveness improvement: shifting the core from local resource efficiency to value flow efficiency.
Resource efficiency refers to the utilization and output of each individual link, such as resource availability, utilization rate, code output, and test execution speed. Flow efficiency refers to how quickly user value flows through the system, such as the time from user demand to delivery (the shorter the better) or the proportion of that time spent waiting (the smaller the better).
The flow of user value is the best thread for linking the entire system and driving overall optimization. To improve the flow efficiency of value, an organization must pay attention to the end-to-end flow of user value through the system and improve the system as a whole, not just individual links. On this basis, the goal of effectiveness improvement is the ability to continuously and rapidly deliver value. This is also the basic definition of R&D effectiveness.
The ability to continuously and rapidly deliver value is the core definition of R&D effectiveness. To achieve it, we must shift the focus of improvement from local resource efficiency to value flow efficiency, ensuring global, system-level optimization.
Measurement of R&D Efficiency—Five Groups of Indicators Answer the Fundamental Question of R&D Efficiency
The qualitative definition above tells us what R&D effectiveness is. As Drucker, the father of modern management, said: "If you can't measure it, you can't improve it." Metrics help us understand R&D effectiveness more deeply, set directions for improvement, and gauge the effect of improvements.
A lot of data is generated during product development, but data is not measurement. A good measurement system tells a complete story that answers a fundamental question. The fundamental question that effectiveness measurement seeks to answer is: what is the organization's "ability to continuously and rapidly deliver value"?
What kind of complete story answers this question? Based on continuous practice and exploration in Tmall New Retail, Xianyu, Youku, Ali Health, the R&D middle platform, Alibaba Cloud, and other departments, we have developed and validated a systematic set of R&D effectiveness indicators. As shown in the figure above, it consists of five groups of indicators:
First: continuous release capability. Specifically, it includes two sub-indicators:
Release frequency. A team's external response speed will never exceed its release frequency; release frequency constrains the team's external responsiveness and the flow of value. It is measured as the number of valid releases per unit of time.
Release lead time (also known as change lead time), which is the time from code submission to feature launch, reflects the team's basic release capability. If this overhead is large, it is impractical to increase the release frequency.
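As a minimal sketch (with hypothetical data and field layout, not Cloud Effect's actual implementation), both sub-indicators can be computed from a release log that records the commit time and release time of each valid release:

```python
from datetime import datetime

# Hypothetical release records: (commit_time, release_time) per valid release.
releases = [
    (datetime(2023, 5, 1, 9, 0),  datetime(2023, 5, 1, 10, 30)),
    (datetime(2023, 5, 3, 14, 0), datetime(2023, 5, 3, 14, 45)),
    (datetime(2023, 5, 8, 11, 0), datetime(2023, 5, 8, 13, 0)),
]

def release_frequency(releases, window_days):
    """Valid releases per week over the observation window."""
    return len(releases) / (window_days / 7)

def avg_release_lead_time(releases):
    """Average hours from code submission to feature launch."""
    hours = [(done - commit).total_seconds() / 3600 for commit, done in releases]
    return sum(hours) / len(hours)

weekly_frequency = release_frequency(releases, window_days=14)
lead_time_hours = avg_release_lead_time(releases)
```

In practice, one would also decide what counts as a "valid" release (e.g., excluding rollbacks), which this sketch leaves out.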
Second: the requirement response cycle. Specifically, it includes two sub-indicators:
Delivery cycle time. This refers to the average time from when a user requirement is confirmed to when it goes live. It reflects how quickly the whole team (including business, development, operations, etc.) responds to customer problems or business opportunities.
Development cycle time. This refers to the average time from when the development team understands a requirement to when the requirement is ready to go live. It reflects the responsiveness of the technical team.
Distinguishing the delivery cycle from the development cycle decouples and clarifies the problems so that targeted improvements can be made. Of the two, the delivery cycle is the ultimate goal and yardstick.
Third: delivery throughput. This refers to the number of requirements delivered per unit of time. A common question is whether a simple count accurately reflects delivery efficiency. It is indeed imperfect. We therefore emphasize before-and-after comparisons of a single team's requirement throughput, which is sufficient to reveal trends and problems in a statistical sense.
Fourth: delivery process quality. It includes two sub-indicators:
The time distribution of defect creation and repair during development. We expect defects to be discovered continuously and promptly, and to be fixed as soon as they are discovered;
Defect inventory. We want the defect inventory to be kept under control throughout development, so that the product is always close to a releasable state, laying the foundation for continuous delivery.
The core of delivery process quality is built-in quality, that is, quality ensured throughout the whole process and at all times, rather than relying on a specific phase (such as the testing phase) or a specific period (such as the late stage of a project). Built-in quality is the foundation of continuous delivery; a concrete example of how to measure it is given below.
Fifth: external delivery quality. It includes two sub-indicators: 1) the number of faults (online problems) per unit of time; and 2) the average fault recovery time. Together they determine the availability of the system.
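How the two sub-indicators combine into availability can be shown with a small sketch (hypothetical fault log; the window length and field names are assumptions):

```python
# Hypothetical fault log: repair duration in minutes for each online fault
# observed over a 30-day window.
fault_repair_minutes = [42, 15, 90]

window_minutes = 30 * 24 * 60
fault_count = len(fault_repair_minutes)         # faults in the window
mttr = sum(fault_repair_minutes) / fault_count  # mean time to repair, minutes

# Availability: fraction of the window during which the system was not down.
availability = 1 - sum(fault_repair_minutes) / window_minutes
```

The same availability can be reached either by having fewer faults or by recovering from each fault faster, which is why the article tracks both numbers.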
As shown in the figure above, these five groups of metrics tell a complete story from three aspects (flow efficiency, resource efficiency, and quality), answering the core question of an organization's ability to continuously and rapidly deliver value. Continuous release capability and the requirement response cycle reflect the flow efficiency of value; throughput reflects resource efficiency; and delivery process quality and external delivery quality together reflect the quality level.
A Metric Example: Defect Trend Graph
For these indicators, Cloud Effect provides a rich set of measurement charts, and the Cloud Effect product team will continue to organize them around usage scenarios to improve their usability. I will follow up with a dedicated article introducing Cloud Effect's complete measurement scheme. Here, let me first introduce one example: a measurement chart for process quality.
The "defect trend graph" is a newly designed metric chart in Cloud Effect. It reflects the time distribution of defect discovery and removal during the delivery process, as well as the trend of the defect inventory.
As shown in the figure above, the horizontal axis is the date; the red bars above the axis show the number of defects found each day; the green bars below the axis show the number of defects resolved each day; and the orange curve shows the defect inventory. The left and right halves of the figure compare two delivery modes.
In the left half, the team follows a mini-waterfall development mode. Early in the "iteration", the team focuses on design and coding, introducing defects but not integrating or verifying in real time. Defects stay hidden in the system until late in the project, when the team begins to integrate and test and the defects erupt all at once.
In this mini-waterfall mode, process quality is poor, resulting in a large amount of rework, delays, and delivery quality problems. The product's delivery time depends on when the defects can finally be removed, so continuous delivery is naturally impossible, and the team cannot respond quickly to external needs and changes. Moreover, this mode usually leads to last-minute rushes that plant hidden risks in delivery quality.
In the right half, the team begins to evolve toward a continuous delivery mode. Throughout the iteration, the team develops small-grained requirements, continuously integrates and tests them, and finds and resolves problems as it goes. The defect inventory is kept under control, and the system stays close to a releasable state. This mode is closer to continuous release, and the team's external responsiveness increases accordingly.
The defect trend graph reflects the team's development and delivery mode from one angle. It guides the team to discover defects continuously and early and to remove them promptly, keeping the defect inventory under control and the system close to a releasable state at all times, which safeguards continuous delivery and external responsiveness.
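The three series in the chart (daily found, daily resolved, running inventory) can be derived from simple defect records. This is a sketch of the underlying arithmetic with made-up data, not Cloud Effect's implementation:

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical defects: (found_date, resolved_date); None means still open.
defects = [
    (date(2023, 5, 1), date(2023, 5, 2)),
    (date(2023, 5, 1), date(2023, 5, 4)),
    (date(2023, 5, 3), None),
]

found = Counter(f for f, _ in defects)                    # red bars: found per day
resolved = Counter(r for _, r in defects if r is not None)  # green bars: resolved per day

# Orange curve: running inventory = defects found so far minus defects resolved so far.
inventory, cumulative = {}, 0
day, end = date(2023, 5, 1), date(2023, 5, 4)
while day <= end:
    cumulative += found[day] - resolved[day]
    inventory[day] = cumulative
    day += timedelta(days=1)
```

In the mini-waterfall pattern the inventory curve climbs for most of the iteration and only falls near the end; in the continuous delivery pattern it stays low throughout.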
The defect trend graph is one of Cloud Effect's R&D effectiveness measurement charts. Later, I will systematically explain the use of these charts in a dedicated article.
Goal Setting for Effectiveness Improvement: The "2-1-1" Vision of Some Teams
Above, we introduced the measurement of R&D effectiveness. Given such a measurement system, what goals should be set? Through implementations across multiple teams, we have gradually distilled a reference goal system that can be summarized in three numbers: "2-1-1".
"2-1-1" originated with Tmall New Retail and was later refined and adopted by Xianyu, the R&D middle platform, Alibaba Cloud, and other teams. What is "2-1-1"?
"2" refers to a 2-week delivery cycle: more than 85% of requirements can be delivered within 2 weeks;
The first "1" refers to a 1-week development cycle: more than 85% of requirements can be developed within 1 week;
The second "1" refers to a 1-hour release lead time: a release can be completed within 1 hour of code submission.
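Because the first two targets are stated as percentiles ("more than 85% of requirements within..."), checking them is a matter of counting how many cycle-time samples fall under the limit. A minimal sketch with hypothetical cycle times:

```python
# Hypothetical cycle times, in days, for recently delivered requirements.
delivery_days = [5, 9, 12, 14, 20, 8, 13]     # confirmed -> live
development_days = [3, 6, 7, 4, 9, 5, 6]      # dev start -> ready to go live

def pct_within(samples, limit_days):
    """Percentage of requirements completed within the limit."""
    return 100 * sum(1 for d in samples if d <= limit_days) / len(samples)

meets_2 = pct_within(delivery_days, 14) >= 85       # "2": 85% delivered within 2 weeks
meets_1 = pct_within(development_days, 7) >= 85     # first "1": 85% developed within 1 week
```

Using an 85th-percentile threshold rather than an average keeps the goal robust to a few outlier requirements while still exposing a systematically slow tail.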
Achieving the "2-1-1" vision is not easy. A 1-hour release lead time requires capabilities such as a continuous delivery pipeline, a suitable product architecture, automated testing, and automated deployment. A 1-week development cycle involves even more capabilities and practices, such as requirement splitting and management, development team organization and collaboration modes, and continuous integration and continuous testing. The most difficult is the 2-week delivery cycle: it builds on the other two indicators, and it also requires coherent, close collaboration among functions and departments across the organization.
The "2-1-1" goals are all about flow efficiency. You might ask: what about resource efficiency and quality? We focus on flow efficiency because it is the lever that triggers deep, systemic improvements in organizational effectiveness. As analyzed above, achieving the "2-1-1" goals requires comprehensive upgrades in technology, management, collaboration, and more, and implementing these practices inevitably improves resource efficiency and quality as well, which will show up in the corresponding metrics.
Of course, "2-1-1" came from specific teams, and not all teams should use the same values; for teams doing hardware development, for example, a two-week delivery cycle is often too challenging. Organizations should set goals appropriate to their own context; what matters most is that the goals indicate the direction of improvement.
This article has defined R&D effectiveness as an organization's ability to continuously and rapidly deliver value, measurable in terms of flow efficiency, resource efficiency, and quality. Among these, flow efficiency is the core of improving R&D effectiveness, as it drives systematic, overall improvement.
As shown in the figure above, R&D effectiveness ultimately serves organizational effectiveness and must be reflected in organizational outcomes such as profit, growth, and customer satisfaction; at the same time, improvements in R&D effectiveness can only happen when they are implemented through concrete technical and management practices.
Definition and measurement are the basis for improving R&D effectiveness. I believe you care even more about the concrete practices and methods for improvement. In follow-up articles, He Mian will draw on the practice of multiple teams to introduce an actionable practice system and its implementation method. Please continue to follow the "Ali Technology" public account; we will publish them as soon as possible.
Author of this article: He Mian
Knowledge Base Team