I've never encountered cycle time recommended as a metric for evaluating individual developer productivity, making the central premise of this article rather misguided.
The primary value of measuring cycle time is precisely that it captures end-to-end process inefficiencies, variability, and bottlenecks, rather than individual effort. This systemic perspective is fundamental in Kanban methodology, where cycle time and its variance are commonly used to forecast delivery timelines.
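For instance, here's a minimal sketch of what that forecasting looks like in practice, assuming you have a list of historical cycle times in days (the numbers and the nearest-rank percentile helper below are made up for illustration, not taken from the article):

```python
# Hypothetical historical cycle times in days (ticket creation -> completion),
# purely illustrative numbers.
historical_cycle_times = sorted([3, 5, 8, 4, 12, 6, 9, 21, 7, 5, 14, 6])

def percentile(samples, p):
    """Nearest-rank percentile of an already-sorted sample."""
    idx = max(0, min(len(samples) - 1, int(round(p * len(samples))) - 1))
    return samples[idx]

# "A new ticket will most likely finish within N days" style forecasts,
# using the spread of past cycle times rather than any per-person number.
for p in (0.50, 0.85, 0.95):
    print(f"{int(p * 100)}th percentile cycle time: "
          f"{percentile(historical_cycle_times, p)} days")
```

The point being that the variance across the whole system is what drives the forecast, not how fast any individual developer is.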
> We analyze cycle time, a widely-used metric measuring time from ticket creation to completion, using a dataset of over 55,000 observations across 216 organizations. [...] We find precise but modest associations between cycle time and factors including coding days per week, number of merged pull requests, and degree of collaboration. However, these effects are set against considerable unexplained variation both between and within individuals.
Cycle time is important, but there are three problems with it. First, it (like many other factors) is just a proxy variable in the total cost equation. Second, cycle time is a lagging indicator, so it gives you limited foresight into the systemic control levers at your disposal. And third, queue size plays a larger causal role in downstream economic problems with products. This is why you should always consider your queue size before your cycle time.
I didn't see these talked about much in the paper at a glance. Highly recommend Reinertsen's The Principles of Product Development Flow here instead.
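For what it's worth, the queue-size point falls out of Little's Law (average cycle time = average WIP / average throughput): the queue is the lever you can act on now, while cycle time only reports the consequences later. A quick illustrative calculation, with made-up numbers:

```python
# Little's Law: avg_cycle_time = avg_wip / avg_throughput
# Numbers below are illustrative only.

avg_throughput = 5            # tickets completed per week
for avg_wip in (10, 25, 50):  # tickets in flight or queued
    avg_cycle_time = avg_wip / avg_throughput
    print(f"WIP={avg_wip:>3} tickets at {avg_throughput}/week "
          f"-> expected cycle time ~{avg_cycle_time:.1f} weeks")

# Cycle time shows up in the data after the damage is done (lagging indicator);
# the queue size tells you today where cycle time is headed.
```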
My current org can have a cycle time on the order of a year. Embedded dev work on a limited release cadence, where the Jira (et al.) workflow is sub-optimal and tickets don't get reassigned, only tested, destroys metrics of this nature.
If this research is aimed at web dev, sure, I get it. I only read the intro. But software happens outside of web dev a lot, like a whole lot.
A thank you to HN who told me to multiply my estimates by Pi.
To be serious with the recipient, I actually multiply by 3.
What I can't understand is why my intuitive guess is always wrong. Even when I break down the parts, the GUI is 3 hours, the algorithm is 20 hours, getting some important value is 5 hours... why does it end up taking 75 hours?
Sometimes I finish within ~1.5x my original intuitive time, but that is rare.
I even had a large project which I threw around the 3x number, not entirely being serious that it would take that long... and it did.
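Funnily enough, a back-of-the-envelope check of the numbers in the comment above lands right in that territory; the task breakdown and the 75-hour actual are from the comment, the rest is just division:

```python
import math

# Estimated parts from the comment above (hours).
estimates = {"GUI": 3, "algorithm": 20, "getting some important value": 5}
estimated_total = sum(estimates.values())  # 28 hours
actual_total = 75                          # what it actually took

multiplier = actual_total / estimated_total
print(f"estimated {estimated_total}h, actual {actual_total}h, "
      f"multiplier ~ {multiplier:.2f} (3 and pi ~ {math.pi:.2f} for comparison)")
```

So the "multiply by 3" (or by pi) rule is roughly what the gap works out to.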
> Comments per PR [...] served as a measure to gauge the depth of collaboration exhibited during the development and review process.
That sounds like a particularly poor measure; it might even be negatively correlated. I've worked on teams that are highly aligned on principles, style, and understanding of the problem domain, and that got there through deep collaboration, yet have few comments on PRs. I've also seen junior devs go without support and then face a deluge of feedback come review time.