Monthly Archives: April 2016

Software Design and the Programmer

How important are software design skills to a programmer? In the traditional, and perhaps most widespread, view of the software development process, the answer is: not very.

The job of the programmer, after all, is to write code. Code is viewed as a “construction” activity, and everyone knows you have to complete the design before beginning construction. The real design work is performed by specialized software designers. Designers create the designs and hand them off to programmers, who turn them into code according to the designer’s specifications. In this view, then, the programmer only needs enough design skills to understand the designs given to her. The programmer’s main job is to master the tools of her trade.

This view, of course, only tells one story, since there is great variety among software development projects. Let’s consider a spectrum of software development “realities.” At one end of the spectrum we have the situation described above. This hand-off based scenario occurs especially on larger, more complex projects, and especially within organizations that have a longstanding traditional software engineering culture. Specialization of function is a key component on these kinds of projects. Analysts specialize in gathering and analyzing requirements, which are handed off to designers who specialize in producing design specifications, which are handed off to programmers who specialize in producing code.

At the opposite end of the spectrum, best represented by the example of Extreme Programming (XP), there are no designers, just programmers; the programmers themselves are responsible for the design of the system. In this situation, there is no room for specialization. According to Pete McBreen, in his excellent analysis of the Extreme Programming methodology and phenomenon, Questioning Extreme Programming, “The choice that XP makes is to keep as many as possible of the design-related activities concentrated in one role—the programmer.” [McBreen, 2003, p. 97] This reality is also well represented in a less formal sense by the millions of one- or two-person software development shops in which the same people do just about everything—requirements, design, construction, testing, deployment, documentation, training, and support.

Many other realities fall somewhere in between the two poles: a) pure, traditional, segmented software engineering, where highly detailed “complete designs” are handed off to programmers, and b) Extreme Programming and micro-size development teams, where programmers are the stars of the show. In the “middle realities” between these poles there are designers, lead programmers, or “architects” who create a design (in isolation or in collaboration with some or all of the programmers), but the design itself is (intentionally or unintentionally) not a complete design. Furthermore, the documentation of the design will have wide disparities in formality and format from one reality to another. In these situations, either explicitly or implicitly, the programmers are responsible for some portion of the design, but not all of it. The programmer’s job is to fill in the blanks in the design as she writes the code.

There is one thing that all of the points along this spectrum have in common: even in the “programmers just write the code” software engineering view, all programmers are also software designers. That bears repeating: all programmers are also software designers. Unfortunately, this fact is not often enough recognized or acknowledged, which leads to misconceptions about the nature of software development, the role of the programmer, and the skills that programmers need to have. (Programmers, when was the last time you were tested on, or even asked about, your design skills in a job interview?)

In an article for IEEE Software magazine called “Software Engineering Is Not Enough,” James A. Whittaker and Steve Atkin do an excellent job of skewering the idea that code construction is a rote activity. The picture they paint is a vivid one, so I will quote more than a little from the article:

Imagine that you know nothing about software development. So, to learn about it, you pick up a book with “Software Engineering,” or something similar, in the title. Certainly, you might expect that software engineering texts would be about engineering software. Can you imagine drawing the conclusion that writing code is simple—that code is just a translation of a design into a language that the computer can understand? Well, this conclusion might not seem so far-fetched when it has support from an authority:

The only design decisions made at the coding level address the small implementation details that enable the procedural design to be coded. [Pressman, 1997, p. 346]

Really? How many times does the design of a nontrivial system translate into a programming language without some trouble? The reason we call them designs in the first place is that they are not programs. The nature of designs is that they abstract many details that must eventually be coded. [Whittaker, 2002, p.108]

The scary part is that the software engineering texts that Whittaker and Atkin so skillfully deride are the standard texts used in college software development courses. Whittaker and Atkin continue with this criticism two pages later:

Finally, you decide that you simply read the wrong section of the software engineering book, so you try to find the sections that cover coding. A glance at the table of contents, however, shows few other places to look. For example, Software Engineering: A Practitioner’s Approach, McGraw-Hill’s best-selling software engineering text, does not have a single program listing. Neither does it have a design that is translated into a program. Instead, the book is replete with project management, cost estimation, and design concepts. Software Engineering: Theory and Practice, Prentice Hall’s bestseller, does dedicate 22 pages to coding. However, this is only slightly more than four percent of the book’s 543 pages. [Whittaker, 2002, p. 110]

(I recommend seeking out this article as the passages I have quoted are only a launching point for a terrific discussion of specific issues to consider before, during, and after code construction.)

Given a world where “coding is trivial” seems to be the prevailing viewpoint, it is no wonder that many working software professionals sought a new way of thinking about the relationship between and nature of design and construction. One approach that has arisen as an alternative to the software engineering approach is the craft-based approach, which de-emphasizes complex processes, specialization, and hand-offs.1 Extreme Programming is an example of a craft-centric methodology. There are many others as well.

Extreme Programming, and related techniques such as refactoring and “test first design,” arose from the work Smalltalk developers Kent Beck and Ward Cunningham did together. The ideas Beck and Cunningham were working with were part of a burgeoning object-oriented movement, in which the Smalltalk language and community played a critical role. According to Pete McBreen in Questioning Extreme Programming, “The idea that the source code is the design was widespread in the Smalltalk community of the 1980s.” [McBreen, 2003, p. 100]

Extreme Programming has at its core the idea that the code is the design and that the best way to simultaneously achieve the best design and the highest quality code is to keep the design and coding activities tightly coupled, so much so that they are performed by the same people—programmers. Refactoring, a key XP concept, codifies a set of methods for incrementally altering, in a controlled manner, the design embodied in code, further leveraging the programmer’s role as designer. Two other key XP concepts, “test first design” and automated unit testing, are based on the idea that, not only is the code the design, but the design is not complete unless it can be verified through testing. It is, of course, the programmer’s job to verify the design through unit testing.
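To make the refactoring idea concrete, here is a minimal sketch, in Python with purely illustrative names, of refactoring as controlled, incremental design change: a discount rule buried inside a calculation is extracted into a function of its own, with a behavior-preserving check standing guard, in the spirit of the automated tests XP relies on.

```python
# A minimal sketch of refactoring as incremental, controlled design change.
# The price calculation starts as one tangled function; the refactored
# version extracts the discount rule into a function of its own without
# changing behavior. All names here are illustrative, not from any real system.

def total_before(quantity, unit_price):
    # Original: the discount logic is buried inline.
    price = quantity * unit_price
    if quantity >= 10:
        price = price * 0.9
    return price

def discount_rate(quantity):
    """Extracted rule: bulk orders (10 or more) get 10% off."""
    return 0.9 if quantity >= 10 else 1.0

def total_after(quantity, unit_price):
    # Refactored: the design decision now has a name of its own.
    return quantity * unit_price * discount_rate(quantity)

# The safety net: behavior must be identical before and after the refactoring.
for q, p in [(1, 5.0), (10, 5.0), (25, 2.0)]:
    assert total_before(q, p) == total_after(q, p)
```

The point is not the discount rule itself but the discipline: each small step changes the design embodied in the code while the checks guarantee the behavior stays put.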

It is not much of a stretch to conclude that one of the reasons Extreme Programming (and the Agile Methodology movement in general) have become so popular, especially with people who love to write code, is that they recognize (explicitly or implicitly) that programmers have a critical role to play in software design—even when they are not given the responsibility to create or alter the “design.” Academics and practitioners who champion the traditional software engineering point of view often lament that the results of their research and publications do not trickle down fast enough to practicing software developers.

Software Maintenance: A Solution, Not a Problem

The standard rationale for that standard answer is this: look how much of the budget we’re putting into software maintenance.

If you had just built the software better in the first place, the rationale goes, then you wouldn’t have to waste all that money on maintenance.

Well, I want to take the position that this standard answer is wrong. It’s wrong, I want to say, because the standard rationale is wrong.

The fact of the matter is, software maintenance isn’t a problem, it’s a solution!

What we are missing in the traditional view of software as a problem is the special significance of two pieces of information:

  1. The software product is “soft” (easily changed) compared to the products of other, “harder” disciplines.
  2. Software maintenance is far less devoted to fixing errors (17 percent) than to making improvements (60 percent).

In other words, software maintenance is a solution instead of a problem because in software maintenance we can do something that no one else can do as well, and because when we do it we are usually building new solutions, not just painting over old problems. If software maintenance is seen as a solution and not as a problem, does that give us some new insight into how to do maintenance better?

I take the position that it indeed does.

The traditional, problem-oriented view of maintenance says that our chief goal in maintenance should be to reduce costs. Well, once again, I think that’s the wrong emphasis. If maintenance is a solution instead of a problem, we can quickly see that what we really want to do is more of it, not less of it. And the emphasis, when we do it, should be on maximizing effectiveness, and not on minimizing cost.

New vistas are open to us from this new line of thinking. Once we take our mindset off reducing costs and place it on maximizing effectiveness, what can we do with this new insight?

The best way to maximize effectiveness is to utilize the best possible people. There is a lot of data that supports that conclusion. Much of it is in the “individual differences” literature, where we can see, for example, that some people are significantly better than others at doing software things:

  • Debugging: some people are 28 times better than others.
  • Error detection: some people are 7 times better than others.
  • Productivity: some people are 5 times better than others.
  • Efficiency: some people are 11 times better than others.

The bottom line of these snapshot views of the individual differences literature is that there is enormous variance between people, and the best way to get the best job done is to get the best people to do it.

This leads us to two follow-on questions:

  1. Does the maintenance problem warrant the use of the best people?
  2. Do we currently use the best people for doing maintenance?

The first question is probably harder to answer than the second. My answer to that first question is “Yes, maintenance is one of the toughest tasks in the software business.” Let me explain why I feel that way.

Several years ago I coauthored a book on software maintenance. In the reviewing process, an anonymous reviewer made this comment about maintenance, which I have remembered to this day:

Maintenance is:

  • intellectually complex (it requires innovation while placing severe constraints on the innovator)
  • technically difficult (the maintainer must be able to work with a concept and a design and its code all at the same time)
  • unfair (the maintainer never gets all the things the maintainer needs. Take good maintenance documentation, for example)
  • no-win (the maintainer only sees people who have problems)
  • dirty work (the maintainer must work at the grubby level of detailed coding)
  • living in the past (the code was probably written by someone else before they got good at it)
  • conservative (the going motto for maintenance is “if it ain’t broke, don’t fix it”)

My bottom line, and the bottom line of this reviewer, is that software maintenance is pretty complex, challenging stuff.

Now, back to the question of who currently does maintenance. In most computing installations, the people who do maintenance tend to be those who are new on the job or not very good at development. There’s a reason for that. Most people would rather do original development than maintenance because maintenance is too constraining to the creative juices for most people to enjoy doing it. And so by default, the least capable and the least in demand are the ones who most often do maintenance.

If you have been following my line of reasoning here, it should be obvious by now that the status quo is all wrong. Maintenance is a significant intellectual challenge as well as a solution and not a problem. If we want to maximize our effectiveness at doing it, then we need to significantly change the way in which we assign people to it.

I have specific suggestions for what needs to be done. They are not pie-in-the-sky theoretical solutions. They are very achievable, if management decides that it wants to do them:

  1. Make maintenance a magnet. Find ways to attract people to the maintenance task. Some companies do this by paying a premium to maintainers. Some do this by making maintenance a required stepping stone to upper management. Some do this by pointing out that the best way to a well-rounded grasp of the institution’s software world is to understand the existing software inventory.
  2. Link maintenance to quality assurance. (We saw this in the previous essay.)
  3. Plan for improved maintenance technology. There are now many tools and techniques for doing software maintenance better. (This has changed dramatically in the last couple of years.) Training and tools selection and procurement should be high on the concerned maintenance manager’s list of tasks.
  4. Emphasize “responsible programming.” The maintainer typically works alone. The best way to maximize the effectiveness of this kind of worker is to make them feel responsible for the quality of what they do. Note that this is the opposite of the now-popular belief in “egoless programming,” where we try to divest the programmer’s personal involvement in the final software product in favor of a team involvement. It is vital that the individual maintainer be invested in the quality of the software product if that product is to continue to be of high quality.

There they are…four simple steps to better software maintenance. But note that each of those steps involves changing a traditional software mindset. The transition is technically easy, but it may not be socially or politically quite so easy. Most people are heavily invested in their traditional way of looking at things.

The Human Impact of Software

My first job in the computer software business was as an entry-level help desk technician. I had been a computer user for many years by then, ever since my family brought home its first Tandy Color Computer.

I joined this tiny software firm on the cusp of the 1.0 release of their first application. If I remember correctly, when I came on board they were in the process of running the floppy disk duplicator day and night, printing out address labels, and packaging up the user documentation. As I pitched in to help get this release out the door, little did I know that I was about to learn a lesson about software development that I will never forget.

The shipments all went out (about a thousand of them I think, all advance orders), and we braced ourselves for the phone to start ringing. In the meantime, I was poring over Peter Norton’s MS-DOS 5.0 book, which was to become my best friend in the coming months. We knew the software had hit the streets when the phone started ringing off the hook. It was insane. The phone would not stop ringing. Long story short, the release was a disaster.

Many people could not even get it installed, and those people who could were probably less happy than the ones who could not. The software was riddled with bugs. Financial calculations were wrong; line items from one order would mysteriously end up on another order; orders would disappear altogether; the reports would not print; indexes were corrupted; the menus were out of whack; cryptic error messages were popping up everywhere; complete crashes were commonplace; tons of people did not have enough memory to even run the application. It was brutal. Welcome, Dan, to the exciting world of software.

Eventually, we just turned off the phones and let everyone go to voice mail. The mailbox would fill up completely about once an hour, and we would just keep emptying it. We could not answer the phones fast enough, and when we did, people were just screaming and ranting. One guy was so mad that several nights in a row he faxed us page after page after page of solid blackness, killing all the paper and ink in our fax machine.

It took us months to dig ourselves out of this hole. We put out several maintenance releases, all free of charge to our customers. We worked through the night many times, and I slept on the floor of the office more than once. Really the only thing that saved us was our own tenacity and the fact that our customers did not have any other place to go. Our software was essentially one of a kind.

It was obvious to everyone in our company what caused this disaster: bad code. The company had hired a contract developer to write the software from scratch, and, with some help from a couple of his colleagues, this guy wrote some of the worst code I have ever seen. (Thom, if by some slim chance you’re reading this, I’m sorry man, but it was bad). It was total spaghetti. As I learned over the years about cohesion, coupling, commenting, naming, layout, clarity, and the rest, it was always immediately apparent to me why these practices would be beneficial. Wading through that code had prepared me to receive this knowledge openly.

I stayed with the company for three years, and we eventually turned the product into something I am still proud of. It was never easy, though. I swear I packed ten years of experience into those three years. My time working with that software, that company, and the people there who mentored me have shaped all of my software development philosophies, standards, and practices ever since.

When I got some distance from the situation, I was able to articulate to myself and others the biggest lesson I learned there: software can have a huge impact on the lives of real people. Software is not just an abstraction that exists in isolation. When I write code, it’s not just about me, the code, the operating system, and the database. The impact of what I do when I develop software reaches far beyond those things and into people’s lives. Based on my decisions, standards, and commitment to quality (or lack of it), I can have a positive impact or a negative one. Here is a list of all of the people who were negatively affected by that one man’s bad code:

  • Hundreds of customers, whose businesses were depending on our software to work, and who went through hell because of it.
  • The families of those customers, who were deprived of fathers and mothers who had to stay up all night re-entering corrupted data and simply trying to get our software to work at all. (I know, because I was on the phone with them at three in the morning.)
  • The employees of these customers who had to go through the same horrible mess.
  • The owner of our company (who was not involved in the day-to-day operations), whose reputation and standing was seriously damaged by this disaster, and whose bank account was steadily depleted in the aftermath.
  • The prominent business leaders in the vertical market who had blindly endorsed and recommended our software—their reputations were likewise damaged.
  • All of the employees of our company, for obvious reasons.
  • All of our families, significant others, etc.—again for obvious reasons.
  • All of the future employees of the company, who always had to explain and deal with the legacy of that bad code and that disastrous first release.
  • The programmer himself, who had to suffer our wrath, and who had to stay up all night for many, many nights trying to fix up his code.
  • The family of that programmer (he had several children) who hardly saw him for several weeks.
  • The other developers (including myself) who had to maintain and build on that code in the years to follow.

The Art in Computer Programming

What exactly is software development, and why is it so hard? This is a question that continues to engage our thoughts. Is software development an engineering discipline? Is it art? Is it more like a craft?

We think that it is all of these things, and none of them. Software is a uniquely human endeavor, because despite all of the technological trimmings, we’re manipulating little more than the thoughts in our heads. That’s pretty ephemeral stuff. Fred Brooks put it rather eloquently some 30-odd years ago [Bro95]:

“The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures. (As we shall see later, this very tractability has its own problems.)”

In a way, we programmers are quite lucky. We get the opportunity to create entire worlds out of nothing but thin air. Our very own worlds, complete with our own laws of physics. We may get those laws wrong of course, but it’s still fun.

This wonderful ability comes at a price, however. We continually face the most frightening sight known to a creative person: the blank page.

1. Writer’s Block

Writers face the blank page, painters face the empty canvas, and programmers face the empty editor buffer. Perhaps it’s not literally empty—an IDE may want us to specify a few things first. Here we haven’t even started the project yet, and already we’re forced to answer many questions: what will this thing be named, what directory will it be in, what type of module is it, how should it be compiled, and so on.

The completely empty editor buffer is even worse. Here we have an infinite number of choices of text with which to fill it.

So it seems we share some of the same problems with artists and writers:

  1. How to start
  2. When to stop
  3. Satisfying the person who commissioned the work

Writers have a name for difficulties in starting a piece: they call it Writer’s Block.

Sometimes writer’s block is borne of fear: Fear of going in the wrong direction, of getting too far down the wrong path. Sometimes it’s just a little voice in your head saying “don’t start yet”. Perhaps your subconscious is trying to tell you that you’re missing something important that you need before you can start.

How do other creative artists break this sort of logjam? Painters sketch; writers write a stream of consciousness. (Writers may also do lots of drugs and get drunk, but we’re not necessarily advocating that particular approach.)

What then, is the programming equivalent of sketching?

Software Sketches

Sometimes you need to try out ideas, just to see if something works. You’ll sketch it out roughly. If you’re not happy with it, you’ll do it again. And again. After all, it takes almost no time to do, and you can crumple it up and throw it away at the end.

For instance, there’s a pencil sketch by Leonardo da Vinci that he used as a study for the Trivulzio equestrian monument. The single fragment of paper contains several quick sketches of different views of the monument: a profile of the horse and rider by themselves, several views of the base with the figures, and so on. Even though the finished piece was to be cast in bronze, da Vinci’s sketches were simply done in pencil, on a nearly-scrap piece of paper. These scribblings were so unimportant that they didn’t even deserve a separate piece of paper! But they served their purpose nonetheless.[1]

Pencil sketches make fine prototypes for a sculpture or an oil painting. Post-It notes are fine prototypes for GUI layouts. Scripting languages can be used to try out algorithms before they’re recoded in something more demanding and lower level. This is what we’ve traditionally called prototyping: a quick, disposable exercise that concentrates on a particular aspect of the project.

In software development, we can prototype to get the details in a number of different areas:

  1. a new algorithm, or combination of algorithms
  2. a portion of an object model
  3. interactions and data flow between components
  4. any high-risk detail that needs exploration

A slightly different approach to sketching can be seen in da Vinci’s Study for the Composition of the Last Supper. In this sketch, you can see the beginnings of the placement of figures for that famous painting. The attention is not placed on any detail—the figures are crude and unfinished. Instead, da Vinci paid attention to focus, balance and flow. How do you arrange the figures, position the hands and arms in order to get the balance and flow of the entire piece to work out?

Sometimes you need to prototype various components of the whole to make sure that they work well together. Again, concentrate on the important aspects and discard unimportant details. Make it easy for yourself. Concentrate on learning, not doing.

As we say in The Pragmatic Programmer, you must firmly have in your head what you are doing before you do it. It’s not at all important to get it right the first time. It’s vitally important to get it right the last time.

Paint Over It

Sometimes the artist will sketch out a more finished looking piece, such as Rembrandt’s 1635 sketch for Abraham’s Sacrifice of Isaac. It’s a crude sketch that has all of the important elements of the final painting, all in roughly the right areas. It proved the composition, the balance of light and shadow, and so on. The sketch is accurate, but not precise. There are no fine details.

Media willing, you can start with such a sketch, where changes are quick and easy to make, and then paint right over top of it with the more permanent, less-forgiving media to form the final product.

To simulate that “paint over a sketch” technique in software, we use a Tracer Bullet development. If you haven’t read The Pragmatic Programmer yet, here’s a quick explanation of why we call it a Tracer Bullet.

 

Picture old-style artillery: to hit the target, the gunners first calculate the distance, the wind, the temperature, and the weight of the ammunition, then set the gun accordingly. By the time you’ve set up, checked and rechecked the numbers, and issued the orders to the grunts manning the machine, the target has long since moved.

In software, this kind of approach can be seen in any method that emphasizes planning and documenting over producing working software. Requirements are generally finalized before design begins. Design and architecture, detailed in exquisite UML diagrams, is firmly established before any code is written (presumably that would make coders analogous to the “grunts” who actually fire the weapon, oblivious to the target).

Don’t misunderstand: if you’re firing a really huge missile at a known, stable target (like a city), this works out just great and is the preferable way to go. If you’re shooting at something more maneuverable than a city, though, you need something that provides a bit more real-time feedback.

Tracer bullets.

With tracer bullets, you simply fill the magazine with phosphorus-tipped bullets spaced every so often. Now you’ve got streaks of light showing you the path to the target right next to the live ammunition.

For our software equivalent, we need a skeletally thin system that does next to nothing, but does it from end to end, encompassing areas such as the database, any middleware, the application logic or business rules, and so on. Because it is so thin, we can easily shift position as we try to track the target. By watching the tracer fire, we don’t have to calculate the effect of the wind, or precisely know the location of the target or the weight of the ammunition. We watch the dynamics of the entire system in motion, and adjust our aim to hit the target under actual conditions.
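What might such a skeletally thin system look like? Here is a sketch, assuming a simple three-layer application; every class and name is hypothetical, and each layer does next to nothing, but a single call travels from presentation through business rules down to storage and back.

```python
# A sketch of a "tracer bullet" skeleton for a hypothetical three-layer
# application. Each layer is a trivial stand-in, but the call path runs
# end to end, so the overall structure can be exercised and adjusted
# immediately. All names here are illustrative.

class Storage:
    """Stand-in for the database layer: returns canned data for now."""
    def fetch_order(self, order_id):
        return {"id": order_id, "items": []}

class OrderService:
    """Stand-in for the business rules layer."""
    def __init__(self, storage):
        self.storage = storage

    def order_summary(self, order_id):
        order = self.storage.fetch_order(order_id)
        return f"Order {order['id']}: {len(order['items'])} item(s)"

def render(summary):
    """Stand-in for the presentation layer."""
    return f"<p>{summary}</p>"

# The tracer round: one request travels through every layer.
print(render(OrderService(Storage()).order_summary(42)))
```

Because each layer is nearly empty, shifting aim (changing the database, the rules, or the presentation) is cheap; what the skeleton proves is that the layers connect and the shot lands where you expect.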

As with the paintings, the important thing isn’t the details, but the relationships, the responsibilities, the balance, and the flow. With a proven base—however thin it may be—you can proceed with greater confidence toward the final product.

Group Writer’s Block

Up till now, we’ve talked about writer’s block as it applies to you as an individual. What do you do when the entire team has a collective case of writer’s block? Teams that are just starting out can quickly become paralyzed in the initial confusion over roles, design goals, and requirements.

One effective way to get the ball rolling is to start the project off with a group-wide, tactile design session. Gather all of the developers in a room[2] and provide sets of Lego blocks, plenty of Post-It notes, whiteboards and markers. Using these, proceed to talk about the system you’ll be building and how you think you might want to build it.

Keep the atmosphere loose and flexible; this gets the team comfortable with the idea of change. Because this is low inertia design, anyone can contribute. It’s well within any participant’s skills to walk up to the whiteboard and move a Post-It note, or to grab a few Lego blocks and rearrange them. That’s not necessarily true of a CASE tool or drawing software: those tools do not lend themselves readily to rapid-feedback, group interaction.

Jim Highsmith offers us a most excellent piece of advice: The best way to get a project done faster is to start sooner. Blast through that writer’s block, and just start.

Just Start

Whether you’re using prototypes or tracer bullets, individually or with a group, you’re working—not panicking. You’re getting to know the subject, the medium, and the relationship between the two. You’re warmed up, and have started filling that blank canvas.

But we have one additional problem that the painters do not have. We face not one blank canvas per project, but hundreds. Thousands, maybe. One for every new module, every new class, every new source file. What can we do to tackle that multiplicity of blank canvases? The Extreme Programming[Bec00] notion of Test First Design can help.

The first test you are supposed to write—before you even write the code—is a painfully simple, nearly trivial one. It seems to do almost nothing. Maybe it only instantiates the new class, or simply calls the one routine you haven’t written yet. It sounds so simple, and so stupid, that you might be tempted not to do it.

The advantage to starting with such a trivial test is that it helps fill in the blank canvas without facing the distraction of trying to write production code. By just writing this very simple test, you have to get a certain level of infrastructure in place and answer the dozen or so typical startup questions: What do I call it? Where do I put it in the development tree? You have to add it to version control, and possibly to the build and/or release procedures. Suddenly, a very simple test doesn’t look so simple any more. So ignore the exquisite logic of the routine you are about to write, and get the one-line test to compile and work first. Once that test passes, you can now proceed to fill in the canvas—it’s not blank anymore. You’re not writing anything from scratch, you’re just adding a few routines. . . .
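For instance, a first test in the spirit described above might look like this in Python’s unittest (OrderImporter is a hypothetical class that exists only so the test can pass):

```python
# The "painfully simple" first test: it does almost nothing except
# instantiate a class that doesn't do anything yet. Writing it forces the
# startup decisions (name, location, build integration) to be made before
# any production logic exists. OrderImporter is a hypothetical name.
import unittest

class OrderImporter:
    """Empty for now; it exists only so the first test can pass."""

class FirstTest(unittest.TestCase):
    def test_can_create_importer(self):
        self.assertIsNotNone(OrderImporter())

# Run just this one test programmatically (rather than via unittest.main(),
# which would take over the whole process).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(FirstTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Once this trivial test compiles and passes, the canvas is no longer blank: the file exists, it is under version control, and every test after this one is just another routine added to an existing picture.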

2. When to Stop

We share another problem with painters: knowing when to stop. You don’t want to stop prematurely; the project won’t yet be finished.[3] But if you don’t stop in time, and keep adding to it unnecessarily, the painting becomes lost in the paint and is ruined.

We had a client once who seemed to have some difficulty with the definition of “done” as it applied to code. After toiling for weeks and weeks on a moderately complex piece of software, Matthew (not his real name) proudly announced the Code Was Done. He went on to explain that it didn’t always produce the correct output. Oh, and every now and again, the code would crash for no apparent reason. But it’s done. Unfortunately, wishful thinking alone doesn’t help us get working software out to users.

It’s easy to err on the other side of the fence too—have you ever seen a developer make a career of one little module? Have you ever done that? It can happen for any number of political reasons (“I’m still working on XYZ, so you can’t reassign me yet”), or maybe we just fall in love with some particularly elegant bit of code. But instead of making the code better and better, we actually run a huge risk of ruining it completely. Every line of code not written is correct—or at least, guaranteed not to fail. Every line of code we write, well, there are no guarantees. Each extra line carries some risk of failure, carries an additional cost to maintain, document, and teach a newcomer. When you multiply it out, any bit of code that isn’t absolutely necessary incurs a shockingly large cost. Maybe enough to kill the project.

How, then, can we tell when it’s time to stop?

Painting Murals

Knowing when to stop is especially hard when you can’t see the whole thing that you’re working on. Mural painting, for instance, takes a special eye. In corporate software development, you may only ever see the one little piece of detail that you’re working on. If you watch mural painters up close, it’s quite difficult to discern that the splash of paint they’re working on is someone’s hand or eyeball. If you can’t see the big picture, you won’t be able to see how you fit in.

The opposite problem is even worse—suppose you’re the lone developer on a project of this size. Most muralists are simply painting walls, but anyone who’s ever painted their house can tell you that ceilings are a lot harder than walls, especially when the ceiling in question covers 5,000 square feet and you have to lie on your back 20 meters above the floor to paint it. So what did Michelangelo do when planning to paint the Sistine Chapel? The same thing you should do when faced with a big task.

Michelangelo divided his mural into panels: separate, free-standing areas, each of which tells a story. But he did so fairly carefully, such that the panels exhibit these characteristics:

  • High cohesion
  • Low coupling
  • Conceptual integrity

These are things we can learn from.

Cohesion

What is cohesion? As used here, cohesion refers to the panel’s focus and clarity of purpose. In the Sistine Chapel ceiling, each panel tells a single Old Testament story—completely, but without any extraneous elements.

In software, the Unix command-line philosophy of small, sharp tools (“do one thing and do it well”) is one example. Each tool is narrowly focused on its primary task. Low cohesion occurs when you have giant “manager” classes that try to do too many disparate things at once.
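The contrast can be sketched in a few lines of Python (all names here are hypothetical, purely for illustration):

```python
# High cohesion: one clear purpose, in the spirit of "do one thing well".
class InvoiceFormatter:
    """Formats an invoice amount as a display string -- and nothing else."""
    def __init__(self, currency="$"):
        self.currency = currency

    def format(self, amount):
        return f"{self.currency}{amount:,.2f}"

# Low cohesion: a "manager" class cramming unrelated duties into one place.
class InvoiceManager:
    def format(self, amount): ...          # formatting
    def save_to_db(self, invoice): ...     # persistence
    def email_customer(self, invoice): ... # notification
    def audit_log(self, invoice): ...      # logging

print(InvoiceFormatter().format(1234.5))  # → $1,234.50
```

The formatter tells a single story, like a Sistine panel; the manager is four stories crowded into one frame.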

Coupling

Coupling is related to orthogonality[HT00]: unrelated things should remain, well, unrelated. Following the object-oriented principle of encapsulation helps to prevent unintended coupling, but there are still other ways to fall into the coupling trap. Michelangelo’s panels have low coupling; they are all self-contained; there are no instances of figures reaching from one panel into the next, for instance. Why is that important?

If you look closely at one of the panels that portrays angels gliding about the firmament of heaven, you’ll notice that one of the angels is turning his back to, and gliding away from, the other angels. You’ll also notice that said angel isn’t wearing any pants. He’s rather pointedly “mooning” the other angels.

There is surely a tale that explains the bare tail of the mooning angel, but for now let’s assume that the Pope discovered the mooning angel and demanded that it be replaced. If the panels weren’t independent, then the replacement of one panel would entail replacing some adjacent panels as well—and if you had to use different pigments because the originals weren’t available, maybe you’d have to replace the next set of panels that were indirectly affected. Let the nightmare begin. But as it stands, the panels are independent, so the offending angel (who was apparently on Spring Break) could have been easily replaced with a less caustic image and the rest of the project would remain unaffected.
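In code, the same independence comes from hiding each “panel” behind a narrow interface. A minimal Python sketch (names invented to match the anecdote):

```python
# Each "panel" is self-contained; the ceiling depends only on render().
class MooningAngelPanel:
    def render(self):
        return "angel, facing away"

class ReplacementAngelPanel:
    def render(self):
        return "angel, suitably robed"

class Ceiling:
    def __init__(self, panels):
        self.panels = panels  # low coupling: only render() is assumed

    def render_all(self):
        return [p.render() for p in self.panels]

ceiling = Ceiling([MooningAngelPanel()])
# The Pope objects: swap the one offending panel. Nothing else changes.
ceiling.panels[0] = ReplacementAngelPanel()
print(ceiling.render_all())  # → ['angel, suitably robed']
```

Because no panel reaches into its neighbor, the replacement touches exactly one class; the rest of the ceiling is untouched.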

Conceptual Integrity

But despite that independence, there is conceptual integrity—the style, the themes, and the mood tie it all together. In computer languages, Smalltalk has conceptual integrity; so does Ruby, and so does C. C++ doesn’t: it tries to be too many things at once, so you get an awkward marriage of concepts that don’t really fit together well.

The trick then is to divide up your work while maintaining a holistic integrity; each Sistine Chapel panel is a separate piece of art, complete unto itself, but together they tell a coherent story.

For our projects, there are several techniques we need to use inside the code, including modularity, decoupling, and orthogonality. At the project level, consider architecting the project as a collection of many small applications that work together. These interacting applications might communicate through something as simple as a network connection or flat files, or through a heavier-duty component technology such as Enterprise JavaBeans (EJB).
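Even the humble flat-file approach is enough to keep two small applications decoupled. A sketch in Python (the file name, record format, and amounts are made up for illustration; amounts are in cents to avoid floating-point noise):

```python
import json
import os
import tempfile

def producer(path):
    # "Application 1": writes its results as one JSON record per line.
    with open(path, "w") as f:
        for order in [{"id": 1, "total_cents": 999},
                      {"id": 2, "total_cents": 450}]:
            f.write(json.dumps(order) + "\n")

def consumer(path):
    # "Application 2": knows nothing about the producer except the
    # file format -- it just reads the flat file and summarizes it.
    with open(path) as f:
        return sum(json.loads(line)["total_cents"] for line in f)

path = os.path.join(tempfile.mkdtemp(), "orders.jsonl")
producer(path)
print(consumer(path))  # → 1449
```

The only coupling between the two “applications” is the agreed-upon record format—each side can be rewritten, rescheduled, or replaced independently, like a free-standing panel.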

Time

Up until now, we’ve concentrated on splitting up a project in space, but there is another very important dimension that we need to touch on briefly—time. In the time dimension, you split up a project by using iterations.

Generally speaking, you don’t want to go more than a few weeks without a genuine deliverable. Longer than that introduces too large a feedback gap—you can’t get the feedback quickly enough to act on it. Iterations need to be short and regular in order to provide the most beneficial feedback.

The other important thing about iterations is that there is no such thing as 80% done. You can’t get 80% pregnant—it’s a Boolean condition. We want to get to the position where we only ship what really works, and have the team agree on the meaning of words like “done”. If a feature isn’t done, save it for the next iteration. As the iterations are short, that’s not too far off.

In time or space, feedback is critical. For individual pieces of code, it is vital to have competent unit tests that will provide that feedback. Beware of excuses such as “oh, that code’s too complicated to test.” If it’s too complicated to test, then it logically follows that the code is too complicated to write! If the code seems to be too complicated to test, that’s a warning sign that you have a poor design. Refactor the code in order to make it easy to test, and you’ll not only improve the feedback loop (and the future extensibility and maintainability of the system), you’ll improve the design of the system itself.
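One common way to make “too complicated to test” code testable is to pull the logic out into a pure function and leave the I/O at the edges. A minimal Python sketch (the function names and the refactoring target are hypothetical):

```python
def parse_total(lines):
    """Pure logic: easy to test -- no files, no network, no globals.
    Sums every line that contains a plain non-negative integer."""
    return sum(int(line) for line in lines if line.strip().isdigit())

def report_total(path):
    # Thin I/O wrapper around the testable core; the "hard to test"
    # part is now only this one trivial line of file handling.
    with open(path) as f:
        return parse_total(f)

# The pure core can be exercised with plain in-memory data:
print(parse_total(["10", "20", "skip me", "5"]))  # → 35
```

The refactoring shrinks the untestable surface to a one-line wrapper, tightening the feedback loop exactly as the text suggests—and the design is better for it.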

3. Satisfying the Sponsor

Now comes the hard part. So far, we’ve talked about problems that have simple, straightforward answers. Organize your system this way; always have good unit tests; look for and apply feedback to improve the code and the process; etc. But now we’re headed into much more uncertain terrain—dealing with people. In particular, dealing with the sponsor: the person or persons who are paying to make this project happen. They have goals and expectations all their own, and probably do not understand the technology with which we create the work. They may not know exactly what they want, but they want the project to come out perfect in the end.

This must be the artist’s worst nightmare. The person paying for the portrait is also sitting for it, and says simply “Make me Look Good”. The fact that the sitter is royalty who commands a well-oiled guillotine doesn’t help. Sounds pretty close to the position we find ourselves in as we write software, doesn’t it?

Let’s look at it from the sitter’s point of view. You commission an artist to paint you. What do you get? Perhaps a traditional, if somewhat flat-looking, portrait such as da Vinci’s Ginevra de’ Benci, painted around 1474. Or maybe the realistic, haunting face of Vermeer’s Girl With a Pearl Earring. How about the primitive (and topless) look of Matisse’s Seated Figure, the wild and fractured Portrait of Picasso by Juan Gris, or the stick-figured jumble of Paul Klee’s Captive?

All of these are portraits, all interpretations of a commonplace thing—a human face. All of them correctly implement the requirements, but not all of them will satisfy the client.