In this post, I’d like to tackle a widespread belief in software development circles: the magical creatures known as 10x developers.
If you’ve been working in a software development team, even for a short period of time, you’ve probably concluded that some developers contribute more than others. I reached this conclusion years ago: I noticed that some developers get their tasks done more quickly, others less quickly. Common sense also agrees that the very same developer gets more productive with experience. Hence, not all developers have the same productivity.
But why ten times? Where does this figure come from?
The foundation of the 10x developers figure
- The original study on developer productivity is from Sackman et al., 1968
- The task used to evaluate productivity was debugging
- The sample size was 27(!)
- The study concluded with a productivity ratio of 28 to 1.
But later researchers disputed this ratio because some measurements were taken in different environments (batch vs. time-shared). They concluded that the apples-to-apples ratio was closer to 5 to 1.
This terse summary alone suggests the study is flawed: in particular, how can it prove anything with such a small sample? If you’re not convinced, please refer to the original chapter of the book, which contains the whole study.
The problems of evaluating productivity
Ultimately, other studies suffer from issues similar to the original study’s. Those include:
- Low sample size
- Quality of the population e.g. pros or not
- Task type: project or something else
- Measurement: from time to completion, to LOC, to manager’s evaluation
The measurement criterion alone is enough to make you cough: by the LOC metric, Java developers would rank among the most productive simply because Java is a verbose language, in direct opposition to a commonly held belief.
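A contrived sketch makes the flaw concrete. The two functions below solve the same hypothetical task and are functionally identical, yet the verbose one is several times longer; by a pure LOC metric, its author would appear several times more productive:

```python
# Two functionally identical implementations of the same task:
# summing the squares of the even numbers in a list.
# By a LOC metric, the verbose author looks far more "productive".

def sum_even_squares_verbose(numbers):
    result = 0
    for number in numbers:
        remainder = number % 2
        if remainder == 0:
            square = number * number
            result = result + square
    return result

def sum_even_squares_concise(numbers):
    return sum(n * n for n in numbers if n % 2 == 0)

data = [1, 2, 3, 4, 5, 6]
# Both return 56 (4 + 16 + 36): same value delivered, wildly different LOC
assert sum_even_squares_verbose(data) == sum_even_squares_concise(data) == 56
```

Any metric that rewards the first version over the second is measuring typing, not value delivered.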
I think there’s only one way to conduct a study with meaningful conclusions:
- We should gather a representative sample of professional developers, say at least 1,000
- Make them code the same real-world project
- And measure the time it takes to complete it
Interestingly enough, this only opens another can of worms: a scientific comparison assumes "all other things being equal". Aye, there’s the rub. Because a lot of things cannot be equal:
- Programming language of choice
- Usage of libraries vs. usage of low level APIs
- Usage and choice of IDE
- Degree of configuration of said IDE
- Type of project e.g. CRUD vs. math problem vs. something else
Hence, it’s quite hard to reliably evaluate productivity, and even harder to compare it across developers.
Worse than that, the complexity of developers' jobs has only increased with time. Gone are the days when one could code an entire operating system in one’s garage: programming has become a social activity, involving at least other programmers - if not many more roles.
No proof of A doesn’t mean !A
Or, in other terms: just because you cannot prove something doesn’t mean it doesn’t exist. While it’s no proof either, I’ve witnessed first-hand a developer with negative productivity: someone whose code was so far from solving the problem at hand that someone else rewrote it every evening.
Of course, the vast majority of developers have positive productivity; otherwise, no project would ever find its way to production. It follows that not all developers have the same productivity: most contribute positively, a handful negatively. From this point, the next step is to realize that on both sides - negative and positive - not everyone contributes the same value in the same amount of time.
From the above, only three points can be made for sure:
- There are productivity differences between developers
- As productivity cannot be reliably evaluated, the ratio between the most productive and the least productive cannot be evaluated either
- The value added to a project is much larger than the code written. What about a developer who writes no code in order to help a more junior one?
All in all, projects are hardly a one-developer effort; it’s much better to evaluate the team’s overall productivity. Mature organizations have stopped evaluating individual contributions: it makes no sense to promote collaboration, but then evaluate and incentivize only individual contributions. The deeply held belief that the sum of all local optimums results in a global optimum is flawed - but that is a great subject for a future rant.