Familinearity is when a person makes decisions in a new situation based on some other familiar situation while assuming that the relation between them is linear. This is also how software development planning is done in most companies.
The idea for this term came to me last Wednesday, when I attended the FogBugz World Tour in Tel Aviv to finally see Joel in person and to learn a bit about version control and bug management. Both the organization of the event and the presentations were excellent and enjoyable, completely making up for the apparent dryness of the subject. Even though I do not use FogBugz myself, I got the impression that it is an excellent and very powerful product, well integrated with version control systems and easy enough to use in both medium-scale and large projects.
One feature in FogBugz that really caught my attention, however, was Evidence Based Scheduling (EBS). The idea behind it is to match the actual time each task took against the estimated time, for each developer over some period, and then process the data statistically to derive meaningful information about project planning and scheduling. Such information could, for example, include developer timelines and velocity, as well as completion estimates that would let you determine a completion date for any subset of the project with a definite probability.
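To make the mechanics a bit more concrete, here is a minimal sketch of how such an evidence-based simulation could work. This is purely my own illustration, not FogBugz's actual algorithm: the developer names, hour figures and velocity ratios are all made up. The idea is to collect, per developer, the historical ratios of actual to estimated time, then run a Monte Carlo simulation that scales the current estimates by randomly sampled ratios to obtain a distribution of possible completion times.

```python
import random

# Historical evidence per developer: ratio = actual time / estimated time.
# These numbers are invented for illustration only.
history = {
    "alice": [1.0, 1.3, 0.9, 2.1, 1.1],
    "bob":   [1.5, 2.0, 1.2, 3.0, 1.8],
}

# Remaining tasks: (developer, estimated hours).
tasks = [("alice", 8), ("alice", 16), ("bob", 12), ("bob", 4)]

def simulate_total_hours():
    """One Monte Carlo trial: scale each estimate by a randomly
    sampled historical ratio for the developer who owns the task."""
    return sum(est * random.choice(history[dev]) for dev, est in tasks)

def completion_distribution(trials=10_000):
    """Run many trials and return the sorted totals, from which
    percentiles (e.g. a 95%-confidence completion time) can be read."""
    return sorted(simulate_total_hours() for _ in range(trials))

totals = completion_distribution()
print("50% chance of finishing within", totals[len(totals) // 2], "hours")
print("95% chance of finishing within", totals[int(len(totals) * 0.95)], "hours")
```

The output is not a single date but a probability curve, which is exactly what makes it possible to say that a given subset of the project has, say, a 2% chance of shipping on the promised day.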
A priori, I am extremely skeptical about the overall precision and usefulness of such methods. That is not to say that the evidence-based approach in general is incorrect; it is certainly in fashion and gaining momentum in the medical community while also making inroads into other areas. It is other considerations, of both a practical and a theoretical nature, that cause my doubts, and I will explain them briefly.
The practical concerns about the applicability of this tool are too long for this post, probably too long for an entire book. Just a quick look at the article called "How to estimate software tasks" from the FogBugz documentation gives a great example of the problems we face in this area. One paragraph towards the end of the article claims that giving people unrealistic schedules and hoping they will be "motivated" by them is "brain-dead", a statement with which I completely agree. It is still a fact, however, that there are entire companies that live by this principle and plenty of managers who have not read "The Mythical Man-Month" even though it is, as the article claims, a requirement. Joel Spolsky himself, of course, knows more than most people about these problems and has written numerous articles on the subject. These companies, by the way, can at times be very successful, and you will have a hard time trying to convince them that their management practices are flawed.
My point, in this case, is that when a tool like EBS is deployed in an organization with that kind of management culture, and one of the team leaders later comes to his boss to claim that, based on previous experience, his estimate has only a 2% chance of being met, the result is more likely to be fireworks than the manager amending his ways. Another problematic situation might occur when the manager comes to that same team leader and asks why the feature that was 99% on time a week ago is now late. All that assumes the organization really deployed EBS and paid all the additional costs of constantly updating estimates and breaking down tasks, without which the method will not work at all at best, or will produce highly inaccurate results at worst.
But even if we leave the practicalities aside, the theoretical basis of using bootstrapping-based statistical methods seems a bit problematic. As Joel himself would tell you, in software you rarely do the same thing twice. It is design, and design is similar to exploring a highly fluid, highly complex terrain. Using methods that linearly extrapolate from the accuracy of your previous estimates to produce precise answers seems problematic at best, and might raise objections similar to the ones Nassim Taleb raised against the use of the VaR formula in economics.
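A tiny sketch, again entirely my own and with made-up numbers, may show where the objection bites. A simulation that only resamples from past evidence can never predict an outcome worse than anything already in the history, so the unprecedented blow-up that dominates a real schedule sits entirely outside the model:

```python
import random

# Ratios of actual/estimated time observed so far -- nothing worse than 2x.
observed_ratios = [0.9, 1.1, 1.3, 1.7, 2.0]

def bootstrap_worst_case(estimate_hours, trials=10_000):
    """Resample only from past evidence; the worst simulated outcome
    is bounded by the worst ratio ever observed."""
    return max(estimate_hours * random.choice(observed_ratios) for _ in range(trials))

# For a 10-hour estimate the model never predicts more than 20 hours,
# no matter how many trials are run. A single unprecedented task
# (say, a forced rewrite taking ten times the estimate) falls entirely
# outside what this history-based distribution considers possible.
print(bootstrap_worst_case(10))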
I am not saying that this tool is not useful. It might, for example, reveal the "epistemic arrogance" of management by exposing the poor precision of their previous estimates, and thus benefit tired and demotivated programmers. It might provide a good lower bound for the prediction errors. But without taking into account the complexity and non-linearity of the domain at hand, it can hardly offset the existing "familinearity" of the software development planning process.