So you were tasked with preparing your next software project. The overall architecture is designed, and some code-related parts are already in place: the build file, the packages, perhaps a sample use case. Then, after a few months, the project is finished, and the final result has largely diverged from what you imagined: inconsistent package naming and organization, completely different low-level implementations (e.g. XML dependency injection, auto-wiring and Java configuration all in the same codebase), etc.
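To make that last point concrete, here is a minimal sketch of what such low-level inconsistency looks like in practice. The class names (`InvoiceService`, `OrderService`, `TaxCalculator`) are hypothetical, and plain Java stands in for any real DI framework: two services in the same codebase obtain the very same dependency through two different styles, constructor injection and setter injection.

```java
// Hypothetical example: two injection styles coexisting in one codebase.

// A functional interface standing in for some shared dependency.
interface TaxCalculator {
    double taxFor(double amount);
}

// Style 1: constructor injection. The dependency is explicit and final,
// so an InvoiceService can never exist without its calculator.
class InvoiceService {
    private final TaxCalculator calculator;

    InvoiceService(TaxCalculator calculator) {
        this.calculator = calculator;
    }

    double total(double amount) {
        return amount + calculator.taxFor(amount);
    }
}

// Style 2: setter injection. The dependency is mutable and may be
// missing at runtime if nobody remembered to call the setter.
class OrderService {
    private TaxCalculator calculator;

    void setCalculator(TaxCalculator calculator) {
        this.calculator = calculator;
    }

    double total(double amount) {
        return amount + calculator.taxFor(amount);
    }
}

public class Main {
    public static void main(String[] args) {
        TaxCalculator flat = amount -> amount * 0.2;

        InvoiceService invoices = new InvoiceService(flat);
        OrderService orders = new OrderService();
        orders.setCalculator(flat);

        System.out.println(invoices.total(100.0)); // prints 120.0
        System.out.println(orders.total(100.0));   // prints 120.0
    }
}
```

Both services compute the same result, but a maintainer now has to know two wiring conventions instead of one; multiply this by every pattern in the application and the maintenance cost grows accordingly.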
This is not only unsatisfying from a personal point of view; it also has a very negative impact on the maintenance of the application. How could that happen? Here are some reasons I’ve witnessed first-hand.
New requirements

Your upfront architecture was designed with a set of requirements in mind. Then, during the course of the project, new requirements creep in. The cause may be a bad business analysis, forgotten requirements or plain brand-new requirements from the business. Those new requirements turn your architecture upside-down, and of course there’s no time for a global refactoring.
Unfinished refactoring

There comes a good idea: a refactoring that makes sense because it would improve the design, allow code to be removed, whatever… Everything is fine until the planning changes: something takes priority and the refactoring has to be stopped. Now some parts of the application are refactored while others still use the old design.
New team member(s)
Everything is going fine, but development is lagging a little behind, so the higher-ups decide to bring in one or more new team members (based on the widespread but wrong assumption that if a woman gives birth to a child in nine months, nine women will take only one month to do the same). There is no time for the current team to brief the newcomer on design details and existing practices, and he has to develop a user story right now. So he does it the way he knows how, since he has no time to learn how things are done on the project.
Cowboy mentality

Whatever the team, chances are there’s always a team member who’s not happy with the current design. Possible reasons include:
- the current design being really less than optimal
- the cowboy wants to prove himself to the others
- he just wants to keep control over everything
In all cases, parts of the application will definitely bear his mark. In the best of situations, it can lead to a global refactoring… which has a very high chance of never being finished, as described above.
Technology

Let’s face it: there are plenty of technologies, languages and frameworks out there to help us create applications. When the right combination is chosen, they can definitely decrease development and maintenance costs; when not… This point especially highlights the 80/20 rule, when the technology makes developing 80% of the application a breeze while requiring most of the effort to be spent on the remaining 20%.
Changing situations such as New requirements above are particularly bad, since a technology well adapted to a specific set of requirements may become the wrong choice as soon as a new one creeps in.
Also, combined with the Cowboy mentality above, you can end up with said cowboy’s Golden Hammer in your application, even though it is completely unsuited to the requirements. Another twist is the cowboy trying out the latest hyped framework he never could use before, e.g. forcing Node.js onto your Java development team.
The myth of self-emerging design
I guess the most important reason for incoherent design in applications is also the most pervasive one. Some time ago, a dedicated architect (or a team of them, for big applications) designed the application upfront. He probably was senior and, as such, didn’t code anymore, so he was very far from real life, and the development team a) bitched about the design and b) did as they wanted anyway. This is the well-known Ivory Tower architect effect. That was the previous situation at one end of the spectrum, but the current situation at the opposite end is no better.
With books like The Cathedral and the Bazaar, people think upfront design is wrong and that the right design will somehow reveal itself magically. My experience has taught me it unfortunately doesn’t work like that: teams just don’t naturally converge on one design. Such teams probably exist, but it takes a rare combination of technical skills, soft skills (communication among them) and chance to assemble one. Without some kind of central direction, most real-life projects will end up like Frankenstein’s monster.
In order for maintenance costs to stay as close to linear as possible, design must be consistent throughout the application. There are many reasons for an application to become heterogeneous: technological, organizational, human, you name it.
Among them, assuming that design will self-emerge is a sure way to end up with an ugly hybrid beast. To cope with this, tip the scales back toward the center: make one (or more) development team member responsible for the design and empower him to enforce decisions. Worst case, the design won’t be optimal, but at least it will be consistent.
Save yourself sweat, toil and tears: the latest meme is not always good for you, your projects or your organization. In this case, having some sort of hierarchy in development teams is not a bad thing per se.