Why We Want Solutions That Work For All Problems

In software development (and I am sure this applies to other fields as well, I am just not knowledgeable enough to say which ones), there is a culture of solutions that work for all problems. Let me first elaborate on what I mean. For some, this means being sold on a particular technology or methodology - or not sold, exactly, but so excited about it that they come to see it as the solution to all their problems. For others, it means that when facing any given problem, they are likely to first seek a solution that solves the entire class of problems the problem belongs to, and only after that, work on the problem at hand.

These are perhaps two different problems, and so this is probably two blog posts jammed into one. Is there a connection between the two? I am sure there is, but in what way? That is what I will try to explore a bit later.

Of course, I would be the first to point out that these practices are bad: that you shouldn't base an architecture on a single technology - for example, building an XSLT-centric reporting engine even though your data isn't XML to begin with, or making your domain model anemic just so that it would be more web-services-friendly. And that you shouldn't build your framework before you build your application, but rather build your application first - perhaps a couple of them - and then see what useful pieces you can extract along the way or afterwards. These are things I have learned over time through experience, and they are what I believe to be true now, though that hasn't always been the case. But instead of making these criticisms and risking beating a dead horse, I will try to explore the psychology of why we take these approaches.

First, I will try to tackle the second phenomenon. Why is it that good software engineers tend to overgeneralize? One word that comes to mind is reuse. Reuse is widely accepted in the industry as good practice. Almost all software developers, at one point or another - most of them early in their careers - have come to the realization, quite possibly in the form of a rude awakening (as an aside, I think many developers learn important lessons this way; these lessons cut deep, and that is why they voice their opinions oh so strongly), that copy-and-paste code reuse doesn't cut it, and have therefore come to advocate real code reuse - in the form of shared functions, objects, modules, or other abstractions.

To dwell a bit more on that rude awakening: they very probably ended up with code that was brittle and painful to maintain. Bad experiences have a lasting effect. After that point in their careers, developers become more cautious, not wanting to repeat the mistake once made. Those who were especially affected watch their every step when writing code to stay true to code reuse, and code reuse, at its core, is all about generalization. As a consequence, all good programmers value generalization. In fact, the GoF book, Design Patterns, is a book of recipes for how to generalize your code. Being able to generalize your code has thus become synonymous with being a good programmer. Good programmers want to show that they are good by making their code very general, sometimes to the point of anticipating things that don't exist yet or situations that aren't possible yet, and sometimes making the code more complex than necessary because it has to account for many more cases.
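To make this concrete, here is a small hypothetical sketch (the scenario and all the names are mine, purely for illustration): a simple requirement - format one line of a report - next to an overgeneralized version that anticipates currencies, precisions, and output channels nobody has asked for yet.

```python
from typing import Callable, Dict, Optional


def format_report_line(name: str, amount: float) -> str:
    # What the problem at hand actually needs: one line, one format.
    return f"{name}: ${amount:,.2f}"


class ReportLineFormatter:
    """The overgeneralized version: a pluggable mini-framework for a
    whole class of reporting problems nobody has asked us to solve."""

    def __init__(
        self,
        currency_symbol: str = "$",
        decimal_places: int = 2,
        renderers: Optional[Dict[str, Callable[[str], str]]] = None,
    ) -> None:
        self.currency_symbol = currency_symbol
        self.decimal_places = decimal_places
        # Hooks for output channels (HTML? XML? PDF?) that don't exist yet.
        self.renderers = renderers or {"plain": lambda s: s}

    def format(self, name: str, amount: float, channel: str = "plain") -> str:
        text = f"{name}: {self.currency_symbol}{amount:,.{self.decimal_places}f}"
        return self.renderers[channel](text)


# Both produce exactly the same line today:
print(format_report_line("Widgets", 1234.5))            # Widgets: $1,234.50
print(ReportLineFormatter().format("Widgets", 1234.5))  # Widgets: $1,234.50
```

The second version isn't wrong, exactly - it just solves a class of problems we don't have yet, at the cost of being harder to read, test, and maintain. That trade is precisely the habit I am describing.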

So there we have it: I have painted a picture of why programmers tend to overgeneralize. As for the first phenomenon, I have thought about it some more and decided it is better suited to a separate post.
