The Twelve Days of Software Development
One of the problems with building a software product is that while you generally have a particular kind of target user in mind, you have no control over what kind of users actually end up using your product. If you build for one kind of user, the others will complain that your product isn't adequate for their purposes. Build it to suit the second group instead, and the first group will complain.
Some groups are more vocal than others, which often skews the apparent overall opinion. Technical end users, for instance, are very vocal on newsgroups and weblogs, so their opinions may seem more important than others', even though for a particular product they may be in the minority of users.
Thus, when building a product, you need to choose the right parts to focus on so that you'll satisfy your target users and the users you think you might end up with. There are a number of things that your application can choose to focus on. Here are twelve possibilities:
Pick two. That's all you'll be able to implement well enough for your first version. For each subsequent version, you might be able to add one more.
Unfortunately, different target groups of users will desire different items from the list above. Technical users often wish to have efficiency and perhaps customizability. Developers would like extensibility. Macintosh users tend to like usability. Your company would like marketability.
One possibility is to do a bit of each. That's possible, but it rarely happens. If you try, you'll fall on your face, since your application will end up really good at nothing.
I'm going to discuss each of the above in more detail over the next twelve days (or probably a little longer than that).
When Opera decided to create a browser, they focused very heavily on efficiency. Mozilla decided instead to focus on portability and customizability. For Mozilla Firebird, those features were already implemented, so the developers began to focus on efficiency and usability instead. (Note that they sacrificed some customizability and portability along the way.)
Web browsers have an unfortunate reputation for being slow. This likely stems from the dialup era, when people spent significant amounts of time waiting for content to download. As a result, many people believe that browsers need to be faster, and they spend time debating the speed of one browser over another. When a new browser is released, one of the main topics on the various news sites is how fast or slow it is compared to its competitors, often with a chart showing the results of a few simple tests. This only perpetuates the problem and convinces more people that efficiency is of critical importance to a browser. Many people will choose one browser over another simply because of a 10% speed benefit, even though the faster product is worse in all of the other eleven categories listed above.
In reality, performance doesn't matter much in many cases. The difference between a task taking three seconds or four is of trivial importance for many tasks. For instance, say an application takes seven seconds to start up, versus a competitor that takes only two. A user who launches the application once a day, five days a week, will save just over 20 minutes of waiting time per year with the faster application. Hardly worth fretting over.
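The arithmetic behind that 20-minute claim is easy to check; a quick sketch, assuming one launch per weekday and 52 weeks per year:

```python
# Startup times of the two hypothetical applications, in seconds.
slow_startup_s = 7
fast_startup_s = 2

launches_per_year = 5 * 52  # one launch per weekday, 52 weeks

saved_s = (slow_startup_s - fast_startup_s) * launches_per_year
print(saved_s / 60)  # minutes saved per year: roughly 21.7
```

Five seconds saved 260 times a year comes to about 21.7 minutes, confirming the figure above.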
As a developer, though, you'll likely end up with users who do fret over such things, so you may wish to focus on improving performance. Be aware that each successive improvement costs more to implement than the last. The first few changes are simple, but as tuning continues, you'll find that further gains require much more effort. At some point, it simply won't be worth it.
One problem with making performance improvements is that some systems simply can't be improved. Architectural decisions made in the first version, or in later ones, prevent significant gains from being made. Much of this comes down to programming style: some styles aren't designed for performance but for other benefits, such as modularization.
So whether to focus on efficiency depends on what your application does and whether you think your users will demand it.
There are two kinds of performance: actual performance and perceived performance. The former is the speed at which something actually runs, which is easily measured with various tools; perceived performance is how fast the user thinks it runs, which is much harder to measure. Which kind to focus on depends on the situation. If you're creating a database, actual performance is more important, since often nobody is actually watching it. If you're creating an end-user application, perceived performance is key.
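Actual performance is the easy half to quantify; any language's standard timing tools will do. A minimal Python sketch using the standard `timeit` module (the workload here is just a stand-in):

```python
import timeit

def workload():
    # Stand-in for whatever operation is being measured.
    return sum(i * i for i in range(1000))

# Time 1000 calls, repeated 5 times; taking the minimum of the repeats
# reduces noise from everything else the machine is doing.
best = min(timeit.repeat(workload, number=1000, repeat=5))
print(f"best of 5 runs: {best:.4f}s for 1000 calls")
```

Perceived performance has no equivalent one-liner, which is exactly why it gets neglected.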
Improving perceived performance can be difficult. You have to trick users into thinking they are getting a benefit without necessarily giving them one. A common method is to make sure it always looks like something is happening. Many applications that take a while to start use a splash screen to disguise the real startup time. The part of the user's brain that controls patience resets whenever something happens, such as an image appearing or a window changing. For greater effect, Photoshop and other products stream a list of the components they are loading onto the splash screen, so the user has something to look at.
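The streaming-splash-screen trick amounts to announcing each component before loading it, so the wait is broken into visible steps. A toy sketch; the component names and timings here are invented:

```python
import time

# Hypothetical startup components and their load times in seconds.
COMPONENTS = [
    ("Reading preferences", 0.05),
    ("Loading fonts", 0.05),
    ("Initializing plugins", 0.05),
]

def start_up():
    """Load each component, announcing it first so the user sees progress."""
    shown = []
    for name, seconds in COMPONENTS:
        print(f"{name}...")   # the announcement is the perceived-performance trick
        time.sleep(seconds)   # stand-in for the real loading work
        shown.append(name)
    return shown

loaded = start_up()
```

The total wait is unchanged, but the patience timer resets with every printed line.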
Another trick is to preload some components when the operating system starts, improving the speed at which the application launches later. Unfortunately, one group of users sees right through this and complains about the time and extra memory it requires. These are the 'Too Much Bloat' people.
It seems that every now and again some new word processor or competitor to MS Office comes along, promoting itself as small and compact, free of all the features that nobody uses. The particular 'features nobody uses' they choose to leave out are fairly arbitrary, though. They probably just left out the features that the application's designer, or the president of the company, doesn't use. Unfortunately, those people don't represent your target audience.
Different groups of users use different sets of 'features that nobody uses'. In fact, there really aren't any features that nobody uses, even the ones that seem truly obscure. After all, do you really think Microsoft would spend time implementing hundreds of new features in the next version of Word if nobody wanted them? Chances are that some CTO at a big company with 10 000 employees asks for a bizarre feature every day.
A great way to find out which features of an application are actually used is to not implement them, or to remove them from a later version. Users will either complain or stop using your product. If you produce a new Word competitor and advertise that it has none of the features that nobody uses, you'll only end up implementing those features anyway in your next release. You'll need to keep adding features or nobody will bother upgrading. Many people like upgrading just to see how different 2.0 is from 1.0.
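A less drastic way to learn which features are actually used, not suggested above but common in practice, is simple instrumentation: count invocations. A hypothetical sketch (the feature names are invented):

```python
from collections import Counter

feature_usage = Counter()

def record_use(feature_name):
    """Hypothetical instrumentation hook: count each feature invocation."""
    feature_usage[feature_name] += 1

# Simulated session: which "features nobody uses" actually get used?
for action in ["bold", "print", "bold", "mail-merge"]:
    record_use(action)

print(feature_usage.most_common())  # most-used features first
```

Even crude counts like these tend to show that the 'obscure' features have a stubborn handful of devoted users.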
One problem with promoting your product as small, lean and not 'bloated' is that you'll have to live with that reputation throughout the life of your product. I'm sure this puts some serious limitations on the way Opera implements their browser: having built a reputation for a small product, they now have to live with it forever. The design style of some programmers simply runs counter to this. It requires them to think about size constraints whenever they implement something, which generally isn't worth the extra time.
So be careful about promoting your new application as small, lean and free of unnecessary features, because you'll have a difficult time maintaining that rule. There's really only one reason for going small and lean anyway: to please the "Too Much Bloat" people.
These are an unusual group of people. They worry about application sizes, often to the point of obsessiveness. I used to be like that, until I realized it made no sense. Remember that this group doesn't represent all of your target users, likely only a very small portion. They are very vocal, though, and will post angry messages on newsgroups. Most people will use your product anyway; they won't even notice the size, since they just use whatever the IT department has installed on their systems.
It's possible that the "Too Much Bloat" people have been in the computer industry too long and remember when 64K was a lot of memory, so when they see a product that requires 32 megabytes, they panic. It's also possible that these people are simply very neat and need everything organized exactly the way they want it. In the non-computer world, there may be more reason to be concerned about unnecessary extras. If you buy a car, you may not wish to pay for extras like a sunroof you don't want. You would also be upset if you spent money on a new car and found it came with a pile of dirt on the back seat.
In a computer, though, the extra cost of those additional features is so minimal that it isn't worth worrying over. Besides, unlike the real dirt, you don't have to look at features you don't use; you can simply ignore them. Software takes up actual space, but it's hidden inside your computer. To a "Too Much Bloat" person, though, that extra "bloat" is like the closet full of ugly sweaters you keep just to prove to your grandmother that you haven't thrown them out. You have to keep them, but you desperately want to get rid of them.
One thing many people don't seem to understand is that the unused portion of your computer's memory or disk drive is, by definition, being wasted. A computer that doesn't say 'Zero Megabytes Free' is wasting resources. For example, I would prefer these kinds of browser cache settings.
Anyway, don't let this group of people confuse you into thinking that your application needs to be small to be popular. Very few heavily used applications are small.
When building a product, focus on your target users and build the features they really do want. You won't be able to implement everything for 1.0. In fact, if your competitor is already at version 8, it probably isn't worth building the product in the first place. Sometimes, though, you may have other reasons for building it. You may wish to use your monopoly in one field to stamp out competitors, or you may want to promote a related product. In those cases, features may not matter that much.
Let the programmers and designers build the application the way they see fit. That way, they can focus on making those features work.
Generally, users don't like to use faulty products. That's probably obvious though. Unfortunately, your users won't know how difficult it was for you to ensure that the product works properly. Users will tend to make assumptions about a product's complexity based upon what the product does. "Looks simple from the outside, therefore it must be simple on the inside," they might say. Of course, they don't know that it took you four months just to create a diagram of it.
People like to complain about things. They will expect you to fix a problem right away. Unfortunately, a particular issue may be very difficult to fix for architectural reasons. You simply won't be able to fix some problems quickly.
Whatever your product is, there will always be a significant number of people for whom it fails. Due to the nature of computers, each person has a slightly different configuration. In most circumstances, you won't be able to test your product on more than a handful of your users' configurations. Some people will have unusual hardware, some will have installed something they don't remember, and some users will be 500 miles away.
You won't be able to fix every problem, even if that means a few irate customers. This applies to all things, not just software; perfection just isn't possible. Sometimes the pasta will be a little cold. That's not unacceptable. Cold pasta is just a bug; it can be fixed. Software differs in that fixing a problem for one user may be too costly. The trick is determining whether that one user really is only one, or is representative of a larger group of users.
Deciding whether to focus on reliability depends on how critical it is to the use of your product. There is almost always a level of acceptability when it comes to products, and finding that level is important. A car that reaches 20 000 kilometers and suddenly swerves off a cliff isn't acceptable. (Car ads have taught us that most people drive new cars along mountain roads.) I recently discovered that if my microwave oven is cooking while its time-of-day clock rolls over from 23:59 to the next day, it crashes and powers off. That isn't good, but it isn't past the unacceptable level.
Many people like to laugh about how unreliable Windows is. Is it less reliable than other systems? Probably, but that's because it's such a large and complex system that reliability is difficult to maintain while still maintaining all of the other eleven categories listed above. Fixing it would probably mean rearchitecting the whole system, which would be too costly.
Some applications are busier than others, meaning they tend to do more within the same timeframe.
For example, a typical hour long session using Word goes like this: Open a document created by Word, enter a few paragraphs, change a few font sizes, add some bold, correct some spelling mistakes, then print.
A typical hour-long session using a web browser goes like this: Load a file over a network, created by one of a thousand other products, none of which were the browser; many were generated dynamically one second ago by a mishmash of different tools. Wait while the browser displays parts of the content, often redrawing earlier parts as new data arrives. Load associated stylesheets, scripts, images and plugins, all created by various products, containing data defined by 10 to 20 different specifications. Ninety percent of the loaded files contain errors, either in the data itself or in the network, and the browser is expected to handle them anyway. Repeat this whole process 50 times.
Which do you think is more likely to crash?
When building an application, the ability to focus on reliability is very dependent on the complexity of the application. Some kinds of applications are more prone to problems than others. If you have complete control over the kind of input received, and/or the environment in which the application is deployed, it is a lot easier to make the application reliable. Internet applications such as browsers and mail clients are especially tough, since they have to deal with information they've never seen before.
Programmers, like all people, are fairly lazy. For any sufficiently complex problem, a programmer will implement the basic part well enough but frequently leave a segment for implementing later. If you look through the code of an application, you will no doubt find comments marked with initials, a few asterisks, or an X or three. These are the sections the programmers decided to leave for later, possibly hoping that when later came, someone else would be working on them.
Various bug tracking tools exist and are a better way to track issues than comments in the code. That way, you know what isn't implemented or doesn't work, even if you have no plans to fix it. Expect that many bugs will never get fixed.
Deciding which bugs are most important depends on how critical each one is, whether someone could be killed if it isn't fixed (or not killed, if you're building a missile), and its impact on users. There will always be complaints from users about problems; the developer's goal is to find the right level of acceptability.
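One way to make that triage decision less ad hoc, a hypothetical scheme rather than anything prescribed above, is to score each bug by severity times the fraction of users it affects and sort by the product:

```python
# Hypothetical triage: score = severity (1-5) * fraction of users affected.
bugs = [
    ("crash on startup",        5, 0.30),
    ("typo in About dialog",    1, 1.00),
    ("data loss on odd config", 5, 0.01),
]

ranked = sorted(bugs, key=lambda b: b[1] * b[2], reverse=True)
for name, severity, affected in ranked:
    print(f"{severity * affected:5.2f}  {name}")
```

Note how the scheme surfaces the trick mentioned above: the severe data-loss bug ranks last because, as far as anyone knows, almost nobody hits it.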
Of course, many complaints about a product arise because users don't know what they are doing.
There is lots of information available about creating usable applications. I'm not going to go into details about specific improvements, because there are so many of them. Instead, I'll focus on some general ideas.
As always, some groups of users can cope with less usable applications better than others. Applications designed for a single, simple purpose should make that purpose as efficient as possible. For example, an interface for recording product numbers as items get shipped should make entering product numbers as fast as possible; it might also provide a list so a user can choose among the most commonly sold products. Stores have made this even more efficient by adding barcodes to products so the clerk doesn't have to enter anything.
There are two kinds of usability benefits: those that are user-noticeable and those that are user-invisible. Users will notice the former, or will notice when they are missing. The latter are benefits that users will never realize are there unless they know to look for them, and the only people who look for them are usability specialists, or people with an interest in usability.
Let's say you've discovered that for some operation, using the mouse is faster than using the keyboard. Using the mouse in this case is a user-invisible usability benefit: it may actually be faster, but all the people who use the keyboard won't notice. You could force users to use the mouse by removing the ability to use the keyboard for that task. However, that's a bad idea, because now you've removed a user-noticeable usability benefit. The keyboard user will get frustrated and may stop using your application, which is worse than what may only be a marginal improvement in productivity from using the mouse.
Applications should add user-invisible enhancements, but only when they don't sacrifice a user-noticeable enhancement in the process. If the user gets frustrated using an interface, then it isn't very usable.
A lot has been said about using well-known conventions for keyboard shortcuts, for displaying interface elements, and so on. The debate about using native platform widgets versus themes rages on. The advantage of the former is consistency with other applications: if the user knows that pressing Control and C copies text to the clipboard in one application, they will expect it to do so in others as well.
One curious thing is that usability people have been telling us that native platform interfaces are better, yet millions of people still manage to figure out the multitude of different buttons and menus found on web sites. This is because, conceptually, a button is just a button and a menu is just a menu. The people designing web sites usually think only in these terms. After all, if it looks just like a menu, and opens and closes like one, it must be a menu, right?
Well, partially. People understand the concept of a menu. But usability is about ensuring that a menu isn't an absolutely positioned block of div tags with mouse events attached, disguised as a menu, and that it actually is a menu. That way users won't get frustrated when it doesn't work quite right, which, of course, is a user-noticeable usability problem.
It wasn't long ago that people spent time arguing about the benefits of a GUI versus a command line environment. Is a GUI better? It could be, but it depends on the purpose and design of the interface. A well-designed command line interface is better than a poorly designed graphical interface.
Consider this experiment. Build a time machine, go back in time four hundred years and bring back two people. Sit one in front of a command line interface with only a keyboard and screen. Sit the other in front of a graphical interface with a keyboard, mouse and a screen. Tell them both that they can use this device in front of them to turn the light on in the room. Indicate that the first to do so will receive a sack of gold and the other will receive nothing.
With the command line system, the user must type 'lights' followed by Enter. With the graphical system, the user must double-click an icon labeled 'Lights'. I'm not sure who would win, since both systems are difficult to use for someone who has never seen them before. They will probably spend time touching the screen, or hitting and moving things about. The keyboard may give a visual clue to both users, since they would likely recognize the letters on the keys (assuming they use the same alphabet). The mouse would be more difficult: first one has to determine how to move it, then associate it with the on-screen arrow, and then determine not only to click the icon, but to double-click it.
The command-line user could be helped with a message: 'Type lights and then press the button labeled Enter to turn on the lights.' Unfortunately, many interfaces don't provide any clue about how to begin; neither the typical command-line nor the typical graphical interface provides this kind of getting-started information. This is why Windows has a button labeled 'Start', and why browsers have home pages. These give users clues about what to do first.
When building an application, make sure the user knows how to begin. As with our two time travelers, it helps if users can associate what they see with things they already understand. Don't introduce too many new things without ensuring that the user understands them. If you say "I'm going to comment out this code" to a programmer, they will understand. Say it to a barber, however, and they won't know what you're referring to, since 'comment out' and perhaps even 'code' may be meaningless to them. Ensure that the user can associate the application with something they are familiar with. Most people start learning how to drive a car as babies; they just don't realize it. They watch their parents drive for years, so once they reach the age where they are allowed to drive, they already know what to do. Don't go too far, however: users will get confused by a car interface in an application.
A usable application is not necessarily a simpler one. Removing all of the extra menus and toolbars doesn't make your application more usable; it may happen to, but that's just a side effect. An interface that lets users focus on what they are doing, however, is better. Avoid adding elements to the interface that would distract the user.
The key is to ensure that users understand where to begin, what to do next, and how to do it. In addition, make sure that the user isn't distracted and can focus on the task at hand. That way, a user would realize that it's a lot simpler just to turn on the light using the switch on the wall.