Archive for January, 2008

Back in the late ’90s, I dreamed of building an immersive mapping application that would let people travel through time to any place in the past and see what it was like. It was an impractical idea at the time, but things have changed recently, and the result is Concharto. Last June, I alluded to the project when I noted that Leo Tolstoy, author of “War and Peace,” proposed applying the scientific method to history, asserting that a complete understanding of an event could be obtained by slicing that event into smaller and smaller pieces, in exactly the same way that a math student performs integral calculus.

While not actually creating a calculus of history, Concharto does attempt to slice history into smaller pieces. There are three recent technological advances that make this possible:

  1. Advanced database software and cheap server hardware have made it easy to search huge repositories of information.
  2. Geographic web services have simplified the task of placing events in a spatial context.
  3. Wikipedia has demonstrated the awesome power of mass collaboration.

Hopefully, Concharto will one day be a comprehensive repository of thin slices of notable events from every place and time.

How can that happen? Concharto is a Geographic Wiki. It looks like Google Maps and works like Wikipedia. It has all of the illustrative power of Google Maps and all of the strengths and weaknesses of Wikipedia.

Unlike virtually all other mapping sites on the internet today, Concharto is not about places – it is about events. Unlike Wikipedia, it is about small discrete bits of information, rather than comprehensive articles.

You can read more about it on the community wiki and in the Concharto blog.



Expansion of the Inca Empire of South America.

(Updated 4/29/08 to reflect our name change from Time Space Map to Concharto)


This is a follow-up to an earlier post, Platform Peril.

I once worked for an ambitious company that aimed to create a new type of web service. We ultimately succeeded, despite the tale I am about to tell.

Book the First: Reusable Code

Our company had developed some core capabilities in an obscure vertical market which showed some promise to investors. My group made money by applying our special skills to mostly fixed price software development contracts. It was decided that we should roll up all of our capabilities into a software platform of libraries and specialized data processing tools which I will call Platform LG. The project goals for LG were:

  1. Speed development of all projects
  2. License LG to our biggest customers for their own internal and external projects
  3. Reduce operational costs by standardizing our internal support infrastructure

These are the standard reasons to invest in reusable code projects. Unfortunately, the project suffered from the standard reasons that such projects get into trouble:

  1. Behind schedule. Big, ambitious reusable code projects are notoriously hard to manage, especially in a dynamic environment where requirements are uncertain. Worse still, the schedule slips are very expensive, because ongoing projects are affected. A lot of people relied on LG.
  2. Last year’s requirements. LG failed to meet the needs of our newest, biggest project. The platform was slow, ran on the wrong operating system and wasn’t sufficiently customizable to support the new requirements.
  3. Productization overhead. Packaging the platform for use by our big customers exacerbated the other problems, because the effort necessary to productize the code made it harder to respond to new requirements.

Thus, LG failed to meet two of its three goals. As a result, our biggest project, which I will call Operation Bandwagon, decided to abandon the platform, instead resurrecting some old code and creating custom capabilities tailored exactly to its own needs.

Lesson 1. Platforms are expensive and slow to adapt to new requirements. They are best used in situations where requirements are well understood and relatively constant.

Book the Second: Operation Bandwagon

Operation Bandwagon focused on building an application, not a platform, and so was able to deliver many new and innovative features very quickly with a small team. To some people in management, it made LG look bad. This should have been no surprise, however, since Bandwagon developers weren’t nearly as constrained as the LG team.

Lesson 2. It is easier and cheaper to build custom code than reusable code.

Book the Third: Platform Redux

Within a year, Bandwagon was a smash hit. During that time, the software development organization split into two competing and antagonistic groups. The Bandwagon core code began to take on the petrified feel of a platform. Unlike LG, however, Bandwagon was originally conceived as a custom application and then retrofitted to act like a platform – and it showed.

Meanwhile, LG had finally grown into its own as a stable and capable base on which to build applications. The company now had two competing platforms, complete with release and support organizations. We were paying through the nose for the original schism.

Lesson 3. Pay attention! People love to build platforms, but there should be only one.

Book the Fourth: Unification

It was left to a small band of brave developers to reunite the two warring platforms, a process that took years to accomplish. I wasn’t there to witness the effort, but I’ve heard stories. Some claim that good ultimately won out over evil. Others say that the two were synergistically merged into a new platform that was better than either one alone.


Dickens’ A Tale of Two Cities is a tragic story centered on the French Revolution. Like its namesake, our story has a bittersweet ending. It was the best of times: we built some truly remarkable software. But it was the worst of times too: if we had been more careful, we could have accomplished much, much more.


History seems doomed to repeat itself. Revolutions come and go and so do tech bubbles. Two years later, I found myself enmeshed in platform peril that was weirdly similar to Bandwagon vs LG.


Most software developers understand the relationship between complexity and cost: more complex = more expensive. Unfortunately, many equate code complexity with lines of code, slavishly following design patterns that reduce line counts while actually increasing complexity.

Value Engineering

For years, engineers have noted that the overall cost of a manufactured device is roughly proportional to the number of parts it has. A whole discipline, known as Value Engineering, is dedicated to assigning a cost to each function of a product so that designers and manufacturers can make wise choices about which functions to improve and which to throw away. A classic value engineering exercise is to take a common item like a circuit breaker, tear it apart, and then redesign it with fewer parts. Removing one part from the design can yield dramatic cost savings and big improvements in reliability.


Figure 1 – Cost and complexity for two competing solutions

In Figure 1, two companies are designing competing products with similar market requirements. The company that solves the problem with the fewest parts (Team A) is the big winner. Note that cost often increases exponentially with the complexity of a particular solution.

In many respects, software engineering bears little resemblance to other engineering disciplines, but in this case, there are real parallels. Just as in circuit breakers, more parts means more $, both in initial cost and ongoing maintenance and operations.

Virtual Parts

The analogy is useful because code reduction techniques often increase the number of virtual “parts” while decreasing lines of code. My favorite example is indirection. Many popular design patterns aim to reduce code duplication by introducing levels of indirection (for example, adapter or dependency injection patterns). If you think of each indirection as a new part, it is easy to see how some designs can have less code yet more parts. I use adapters and injection all of the time, but I also acknowledge that they complicate the design, development, testing and maintenance of the code.
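To make the part-counting concrete, here is a minimal Python sketch (all names hypothetical) of the same behavior written twice: once directly, and once behind a strategy interface with an injected dependency. The second version removes duplication once several policies exist, but it has three parts where the first has one.

```python
# Direct approach: one function, one "part".
def total_with_tax(subtotal):
    return round(subtotal * 1.08, 2)

# Indirected approach: the same arithmetic hidden behind a strategy
# interface plus an injected dependency -- three "parts" instead of one.
class TaxPolicy:
    """Abstract role: the level of indirection itself (part 1)."""
    def apply(self, subtotal):
        raise NotImplementedError

class FlatTax(TaxPolicy):
    """Concrete implementation of the role (part 2)."""
    def __init__(self, rate):
        self.rate = rate

    def apply(self, subtotal):
        return round(subtotal * (1 + self.rate), 2)

class Checkout:
    """Consumer with the policy injected (part 3)."""
    def __init__(self, policy):
        self.policy = policy

    def total(self, subtotal):
        return self.policy.apply(subtotal)
```

Both compute the same total, but following the second version through a debugger now takes three hops instead of one.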

Let’s say you are adding a new feature to an existing project and there are some similarities to other parts of your code. You must decide whether to build a new common module and refactor your existing code to use it, or ignore the existing code and build the new piece without regard to the existing stuff. Many developers, especially dogmatic adherents to agile development methodologies, would blindly choose the former approach without regard to the costs involved. A better strategy is to choose based on a reasoned trade-off between cost and benefit. And there are many hidden costs to certain complex design patterns:

  • Clarity. Some code is just too hard to understand. For example, XML-based configuration files can make your code easy to configure and impossible to understand.
  • Unit testing complexity. Multiple levels of indirection require multiple levels of testing. This usually means more support classes, including utility DAOs, mocks, etc.
  • Debugging time. You would think that designers would avoid any architecture that hinders efficient debugging, but many architecture decisions are made without even considering the effect on debugging and deployment.
  • Operational costs. If it is hard to understand the code, then it will almost always be hard to keep running.
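The unit-testing point above can be sketched with a hypothetical Python example: once a dependency is injected, the test suite grows its own support "part," a stub that records calls in place of the real collaborator.

```python
# Hypothetical service with an injected store (all names illustrative).
class Service:
    def __init__(self, store):
        self.store = store

    def register(self, name):
        # Normalize the input, then delegate to whatever store was injected.
        self.store.save(name.strip().lower())

# An extra "part" that exists only for testing: a stub standing in for
# the real store so the test never touches a database or the filesystem.
class RecordingStore:
    def __init__(self):
        self.saved = []

    def save(self, item):
        self.saved.append(item)

store = RecordingStore()
Service(store).register("  Alice ")
```

The stub is small, but it is one more class to write, read, and keep in sync with the real store's interface.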

Too Many Notes

In the play Amadeus, the Austrian Emperor tells Wolfgang Amadeus Mozart that his opera is too complicated, that it has too many notes, and that he should “take some away.” Mozart, who feels that his opera is perfect, asks the Emperor which notes he would like to take away. It is a moment full of meaning for any creative person. Unfortunately, many creative software developers empathize too much with Mozart, favoring the ornamentation and flash of 18th-century classical music. I believe we should all take the tone-deaf Emperor’s advice and take some away.
