Metrics are an important part of continuous improvement. Once a process has been selected for improvement, data needs to be collected about that process. When you can measure what a process does and express it in numbers, you can understand it in concrete terms; when you cannot measure a process or express it in numbers, your understanding of it remains vague. Care needs to be taken when reading metrics, however, as false positives and false negatives can occur.
Graphs are an important part of visualizing information and, if utilized correctly, allow the end user to better observe information over time. Care should be taken when reading graphs, however. Generally, the data in graphs is what is called “summary data”. You have to compare against the full set of data, not just the average, which can obscure the full context and give you skewed insights.
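To illustrate the point above, here is a minimal sketch (with made-up numbers, not actual BIT Development data) of how two data sets with identical averages can tell very different stories:

```python
# Hypothetical monthly defect counts for two teams.
# A summary graph of averages would make them look identical.
team_a = [10, 10, 10, 10, 10, 10]
team_b = [0, 0, 0, 0, 0, 60]

mean_a = sum(team_a) / len(team_a)
mean_b = sum(team_b) / len(team_b)
print(mean_a, mean_b)  # both 10.0 -- the averages match

# Comparing against the full data reveals the difference:
# team_a is perfectly steady, team_b has one large spike.
print(max(team_a) - min(team_a))  # 0
print(max(team_b) - min(team_b))  # 60
```

The team names and values here are purely illustrative; the takeaway is that the spread of the underlying data matters as much as its average.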
Sometimes you just don’t have historical data. For example, with BIT Development’s Initial Estimate Kaizen Event, one of the key metrics identified as necessary for measuring progress was the accuracy of our cost estimates. Historically, we never tracked this information; therefore, there was no method in place for collecting it. Following the event, we altered some systems to collect this data and centralize it in one place. Now, we collect data monthly to track how accurate our high-level estimates are and whether they’re improving.
Cost Slippage is the variation between the total amount of hours billed on a project and the estimated hours. Our goal is to be within 10% of our original estimate.
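The Cost Slippage definition above can be sketched as a simple calculation. This is an illustrative implementation, not BIT Development’s actual tooling; the function name and sample figures are assumptions for the example.

```python
def cost_slippage_pct(estimated_hours: float, billed_hours: float) -> float:
    """Percent variation between hours billed and the original estimate."""
    return (billed_hours - estimated_hours) / estimated_hours * 100

# Hypothetical project: estimated at 200 hours, actually billed 230 hours.
slippage = cost_slippage_pct(200, 230)
print(f"{slippage:.1f}%")            # 15.0%
print(abs(slippage) <= 10)           # False -- outside the 10% goal
```

A project that billed 230 hours against a 200-hour estimate slipped by 15%, missing the within-10% target; one that billed 210 hours (5%) would meet it.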
Once the Takt Board was in place, BIT Development was able to track and visualize how agency projects were moving through the Software Development Lifecycle (SDLC) and derive metrics on performance and flow. Some of the metrics we are now collecting include the following:
Projects Completed is the total number of projects completed per fiscal year and the number of projects completed by project size (graph).