[Cheap] Good Practice is Unusually Hard to Create
The most common complaint about software is that it is "too buggy". The question is, "What does too buggy mean?" People making this complaint are often holding software to absurdly high standards, even when making comparisons to other engineering disciplines. In fact, bridges do fall down. Architects fail; often the failure can be spotted and the design corrected or the structure maintained before catastrophic collapse, but it happens. Software is no more likely to be absolutely perfect than any other human endeavor.
Software is an engineering concern, and one of the things that means is that you can't have anything for free. If faced with the choice between a $100 piece of buggy or incomplete software and a $50,000 piece of production-quality, bullet-proof, highly-tested software, it's unfair to complain that the $100 piece of software is buggy and incomplete.
Because software exists as an amorphous collection of numbers, and is mostly concerned with the manipulation of other amorphous numbers, when it fails it is on average not as big a problem as when other engineering artifacts fail. Software generally can't kill someone. (To the extent that it can, more care needs to be taken.) Thus, given a choice between a program that occasionally sort of eats your data but mostly works for $50, or a solid program that never ever eats your data but costs X*$50, people will generally take the former. Even if it's a bad idea. Even if the program will end up eating more than (X-1)*$50 worth of data. I'm not saying it's rational, I'm just saying that's how people are. The more expensive, higher-quality program often won't even get made because nobody will buy it.
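To make that break-even point concrete, here is a minimal sketch of the arithmetic; the $50 price comes from the example above, while the value of X and the dollar value of the eaten data are made up purely for illustration:

    # A minimal sketch of the break-even arithmetic above. The $50 price is
    # from the example; X and the value of the eaten data are assumptions.
    cheap_price = 50
    x = 10                          # the solid program costs X * $50, here $500
    solid_price = x * cheap_price

    eaten_data_value = 600          # what the cheap program eventually destroys

    # The cheap program is the worse deal whenever the data it eats is worth
    # more than (X - 1) * $50 -- here, more than $450.
    cheap_total = cheap_price + eaten_data_value    # $650
    print(cheap_total > solid_price)                # True: the "bargain" costs more overall

People rarely run this calculation at purchase time, of course, which is exactly the point.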
How many of you out there in the audience have complained about Microsoft's OS products? How many of you have even seriously considered spending many thousands of dollars more on robust UNIX-based systems? A few hands, yes, but not many. (Note that the quoted price includes some estimated training costs and such.) How many of you would actually shell out $2000 for a hypothetical version of Windows that never crashed, but didn't actually have any more features than your current Windows OS? Not many, I see. What about during the Windows 3.1 days, back when Windows itself crashed more often? Ah, that's a few more, but most of you are still picking the cheap-but-crashy software. Don't lie, I can see it in your spending patterns.
Here lies the core problem with finding good practice for software engineering. We can adapt the same basic processes used in other engineering disciplines. We have the examples from NASA and select other applications to show that software can be created with extremely high reliability. However, in the "real world" people simply aren't willing to spend the money necessary to create software with these heavyweight good practices, because thanks to the previously mentioned unique aspects of software (the number of interacting parts, mathematical chaos), this sort of software is extremely expensive. People want cheaper software. This is perfectly rational; often the thing that costs $X and does 90% of what you need is honestly the better choice than the thing that costs $100*X and does everything you need perfectly; it all comes down to a complicated and situation-dependent set of calculations for each choice.
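One crude way to frame that situation-dependent calculation is cost per unit of need actually covered; every number in this sketch is invented:

    # Invented numbers: a tool that covers 90% of the requirement at price X,
    # versus one that covers 100% at 100 * X.
    x = 1000                                   # dollars, assumed
    good_enough = {"price": x,       "coverage": 0.90}
    gold_plated = {"price": 100 * x, "coverage": 1.00}

    for name, tool in (("good enough", good_enough), ("gold plated", gold_plated)):
        per_point = tool["price"] / (tool["coverage"] * 100)
        print(f"{name}: ${per_point:.2f} per percentage point covered")
    # Roughly $11 per point versus $1000 per point -- unless that last 10%
    # is life-or-death, the cheaper tool wins the calculation.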
The other problem is that it's not necessarily clear what the best practice actually is after all. Non-software developers often accuse software developers as a whole of not caring about process, but the truth is almost the exact opposite: software engineering as a whole is nearly obsessed with process. From the Agile Methodology proponents, to those pushing UML, to any number of management methodologies ranging from the heavy to the light and everything in between, everything has been tried at one point or another. Metrics? Tried 'em, from the simple ("lines of code") to the obscure and mathematical ("cyclomatic complexity"). None of them have proved worthwhile. Testing methodologies all fail in the face of exponential state space. Design methodologies have experienced some ups and downs, but there's still nothing like a "one true answer". It's not that software engineers haven't tried to produce good process, it's that it's really hard to create a good process that meets all the constraints placed on us by customers.
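To give a feel for what "exponential state space" means for testing, here is a toy sketch; the flag count and the testing rate are made-up numbers:

    # Toy sketch: a program with n independent boolean options has 2**n
    # distinct configurations, before you even consider its input data.
    # Both n and the testing rate below are assumptions for illustration.
    n = 30
    configurations = 2 ** n                        # 1,073,741,824

    tests_per_second = 1000
    days = configurations / tests_per_second / 86400
    print(f"{configurations:,} configurations, roughly {days:.0f} days to test exhaustively")
    # Add another fifteen flags and those days become more than a thousand
    # years; no testing methodology can brute-force its way through that.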
Research into better methodologies is ongoing. Progress is slow due to the near impossibility of doing true scientific research on the topic, but some progress is being made. It's actually an amazing accomplishment for a 2007 program to have the same number of apparent bugs as a 1987 program: those bugs are spread out over a much larger code base, which means the defect density, the number of bugs per line of code, has dropped, and that code bases are in fact improving in quality. This quality improvement happens as we improve our libraries, as we refine our methodologies slowly but surely, and as we tune our tools and libraries for these improved methodologies.
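A quick back-of-the-envelope illustration of that point; the bug counts and code sizes below are invented, and only the ratio between them matters:

    # Invented figures: the same apparent bug count, spread over a code base
    # that has grown tenfold between 1987 and 2007.
    bugs_1987, kloc_1987 = 200, 100        # 200 bugs in 100,000 lines
    bugs_2007, kloc_2007 = 200, 1000       # 200 bugs in 1,000,000 lines

    density_1987 = bugs_1987 / kloc_1987   # 2.0 bugs per thousand lines
    density_2007 = bugs_2007 / kloc_2007   # 0.2 bugs per thousand lines

    print(density_1987, density_2007)
    # A tenfold drop in defect density, even though the user-visible bug
    # count never changed.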
"Cheap, good, soon - pick two." In engineering terms, we are in fact learning how to make things cheaply and well, just as critics want, but it's at the cost of "soon". It's an extremely hard problem, so it's taking a long time. There's a long way yet to go. The way people want software to be all of "cheap, good, and soon" isn't really unique, but the degree which software is affected by these pressures is unusal... and as far as I can tell, the sanctimonious pronouncements about how we should do our job "better" from non-programmers do seem to be unique.
(One note: Throughout this section, when I talk about the costs of software, I am mostly talking about production costs, not actually the cost to the user. Thus, "free" software is not an issue here, because there is no such thing as software that is free to produce. "Free" or "open source" software simply pays for production costs in ways other than directly charging users; the mechanisms of such production are way out of scope of this book.)