Monday, October 12, 2009

Future of Operating Systems

I've been watching with interest how the operating system wars are playing out. The traditional war has been dominated by Microsoft, with players like the Mac and Linux filling significant but minor roles.

All of that is changing now. Today's smart phones are more capable computing devices than the hardware these operating systems were originally developed on, which raises the question: why not use one of these operating systems on them?

In fact they do. Google, Nokia and probably others are basing their operating systems on Linux, and of course the iPhone from Apple is based on Mac OS X, which is based on Unix.

They try to hide this as much as possible, but it gives us a peek at the future. The biggest reason that Linux and Unix work so well here is that these operating systems were well designed from the start and run on very lightweight hardware.

Of course Microsoft has their CE operating system and will continue to push that as well.

One thing that has puzzled me for a while is why each vendor seems to want to create a closed system. Software developers who develop for one smart phone will no doubt want to develop for all smart phones. The typical way this is done is by supporting applications that run within the web browser. This is a great strategy, but I think it is a bit shortsighted.

In talking to people and using online applications, such as the editor for this blog, I have found that while browser-based applications seem to work, they have issues. The most notable is frequent updates that break things. This is fine for social networking or casual blogging, but it isn't fine for a mission-critical application that you use to make money.

For example, if you are a reporter that has a daily blog, you want the software that you use for blogging to work the same today as it did yesterday. If one day you start using it and it's broken because a bug was introduced, you want the option to fall back to the version you used yesterday.

Some sites do a good job of this, but many do not.

Also, while many applications do require internet access, some do not. If an application doesn't require internet access or it only requires occasional access then it seems logical that it could be independent of the browser.

Most applications written up until a few years ago didn't have any internet connectivity.

What we have is a very large set of applications that could potentially be ported to a smart phone and be very useful there. These applications are often Windows-only, Linux-only, or Mac-only; some have been ported to all the major platforms.

One of the portable application frameworks is Qt. Qt's maker, Trolltech, was recently acquired by Nokia, and now Nokia is creating a smart phone operating system that supports Qt. I believe this is a very astute move by Nokia. It is the kind of move that I would hope other vendors recognize, stepping up to support Qt on their smart phones as well.

While every vendor would love to have their smart phone be the phone that everyone uses, the fact remains that the more applications you can offer for your phone, the better it will sell.

We saw Apple initially close the door on apps saying that they should be developed to run in Safari. They quickly changed that and we saw a huge number of apps written in a short period of time.

Now we are seeing all the smart phone vendors putting together their answer to the iPhone. If they want to beat Apple then they need to adopt a standard API so that software developers can write applications that can run on any phone. Open systems will ultimately win.

We have seen Linux start to take over the OS market, and ultimately I believe all operating systems will be based on Linux. There is really no reason that even Microsoft couldn't move to Linux, and in fact there have been rumors that they are ultimately going that way.

Creating standard APIs that are supported on all devices is reasonable, and I believe necessary, in order for hardware vendors to stay in business. Think of the operating system as if it were the road and the application as if it were the car. Every car can drive on every reasonably smooth highway. Certainly there are special cases, but in general if you were to build a car that could only be driven on half the highways, it wouldn't sell.

I will watch with interest the unfolding of the smart phone operating system wars. It is my belief that the winners will be those that embrace open standards, like Linux and Qt. The losers will be those that try to create a closed standard.

While Apple hit a home run with the iPhone, if they want to remain a viable competitor they need to embrace an open standard so that developers can develop applications for all smart phones. While they clearly have an advantage today, if the combined market of other smart phone makers were to adopt open standards, Apple could be a minor player a few years from now.

Wednesday, September 30, 2009

Ooh! Shiny Thing

Over the years I have seen a lot of new technology come and go. Some of it sticks; some of it was just a bad idea. The problem is that we as software engineers often tend to get excited about some new bit of technology, a design pattern, etc. This is the "Ooh! Shiny Thing" syndrome that many of us share.

While I'm a fan of new technology, it is a double-edged sword. What tends to happen is that some new technology is used in one place in the code, but other places that were using an older technology don't get updated. Now there are two ways of doing the same thing. Then someone else comes along and uses a third technology for the same task.

Maintaining a code base that has several ways of doing the exact same thing means that you need to learn each method. While learning them is not a bad thing, it takes time, and time devoted to maintenance can be costly. Given that pretty much any technology will have bugs, you are taking on each technology's bugs as well.

Over the years I've found that the shininess tends to wear off of new technology pretty fast.

Use new technology, but be careful when doing so. In an ideal world the new technology should replace the existing technology that does the same thing. But only after it has been proven to be reliable and viable for the future.

One of your best gauges is the old guy. Sure, he may be old and set in his ways, but there are good reasons for this. It isn't that he can't learn new things, though certainly as we age it is more difficult to learn new stuff. Of course there are some who are unwilling to learn anything new. You should ignore them.

The ones you want to use as a gauge are those who are continually learning new stuff. Look at what they are learning. Let them know what you are interested in. They may have some insight that says it's good or bad. Take that input and use it to make a more informed decision.

Recently I've invested quite a bit of time learning parts of the Boost library. Some of it is very good. For example, shared_ptr is so essential that it has been pulled into the next C++ standard.

Other things are less compelling. Some of them, while nice in theory, will put a world of hurt on your compiler. For example, boost::format, while a really nice way of doing type-safe formatting in a printf sort of way, will produce a very large amount of template code. In many cases the much more lightweight stringstream is just as readable.
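To make the comparison concrete, here is a minimal sketch of the stringstream approach; the function and its output format are made up for illustration. Type safety comes for free from operator<<, with none of boost::format's template expansion.

```cpp
#include <sstream>
#include <string>

// Type-safe formatting with the standard ostringstream. Each << is
// checked at compile time, and no heavyweight template code is generated.
std::string describe(const std::string& name, int count, double price)
{
    std::ostringstream out;
    out << name << ": " << count << " units at $" << price;
    return out.str();
}
```

The boost::format equivalent would read something like format("%1%: %2% units at $%3%") % name % count % price; for a one-liner like this, the stream version is arguably just as clear.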

This isn't to say that you shouldn't use it, but what appears nice on the surface can have some very serious side-effects that are not at all apparent.

Another easy-to-abuse Boost facility is bind. In some cases it can turn C++ into a write-only language. Once you get a bit beyond the simple syntax of binding a function into a std::for_each, the code can become almost unreadable, and it is also susceptible to errors such as passing by value instead of by reference.
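Here is a sketch of that pass-by-value trap, written with std::bind, the standardized descendant of boost::bind; the function names are hypothetical. Forgetting the ref() wrapper means the bound functor accumulates into its own private copy of the total, and the caller's variable never changes.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

void addTo(int& total, int value) { total += value; }

int sumByReference(const std::vector<int>& v)
{
    int total = 0;
    // std::ref binds total by reference, so addTo updates our local.
    std::for_each(v.begin(), v.end(),
                  std::bind(addTo, std::ref(total), std::placeholders::_1));
    return total;
}

int sumByValue(const std::vector<int>& v)
{
    int total = 0;
    // Binding total directly copies it into the functor; every call
    // adds to that hidden copy, and this function returns 0.
    std::for_each(v.begin(), v.end(),
                  std::bind(addTo, total, std::placeholders::_1));
    return total;
}
```

Both versions compile cleanly, which is exactly what makes this bug easy to miss in a code review.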

I'm sure that you can look at your own code base and find similar cases where someone was overly clever.

Just because it can be done doesn't mean it should.

Be careful with your new technology. Before you use new technology make sure the rest of your team is OK with it. Getting their buy-in is important and the act of doing so may change your mind about the viability of the technology.

Saturday, September 19, 2009

Do What You Enjoy

While at some point in time we all need to get to the bottom line of making enough money to live and perhaps a bit more to do some of the things we enjoy, doing a job just because it makes you money is ultimately unsatisfying, unless making money is what excites you.

I happen to enjoy writing software. I also enjoy other things that are creative, such as the occasional remodeling job, blogging, cycling etc. The software thing is what currently pays the bills and as it turns out is enjoyable to me most of the time as well.

Every once in a while you will get excited about something. Perhaps it's a new technology. When that excitement peaks you should follow it. Over the years I've seen many great ideas thrown around that eventually turned out to be really big. Very often those ideas were implemented by others who also had the idea but, instead of pushing it into the background, pursued it.

The reasons for this are varied. Often the idea seems impossible to implement. I'm reminded of an idea that some of us tossed around when I was in college studying mechanical engineering. We thought it would be cool to have a pen that would be able to write by itself. It would have tiny gyroscopes in it to control its movement. We also thought that we could have the pen record what you were writing using accelerometers.

Interestingly the second part of that idea is now a reality in the "Pulse Smartpen". This pen uses a camera and tiny dots on the paper to record what you write. In fact it goes even further in that it knows where on the paper you are and so can play back stuff.

Obviously back in 1980 when we had this idea there were massive impediments to doing what we talked about and it was really only an idle conversation that none of us were even slightly motivated to try.

The thing is that there were many more practical ideas that could have been implemented given the technology of the day that I and others were sometimes excited about. But we often let the practicalities of life get in the way of pursuing them.

Or worse we knew they couldn't be done.

Many of the greatest inventions were done by those that didn't know it couldn't be done. Don't let someone else talk you out of doing something you are excited about. Push forward, even if it takes years of working nights and weekends to get the job done. While you are doing that you will learn something. And maybe that will lead to a breakthrough. When it's no longer fun put it away and do something else.

In any case always have something that you are pursuing that excites you.

America's Got Talent?

I recently watched the "America's Got Talent" show. Interestingly, the winner was not the most talented person. Two singers got the top two spots, one country and western, the other opera. The opera singer was, in my opinion and that of others who know music, a significantly better singer.

This got me to thinking about what makes software popular. In my experience, the software that is best written, from the standpoints of user interface design and code quality, is not necessarily the most popular. The program that I work on now, "Chief Architect", is much more popular than the program I worked on before, "HiQ".

The quality of the code I'm working on now started out significantly worse than what I had thought was average quality code in the industry today. Even now I don't believe the code quality is significantly better than average even with all the work our team has done to improve it. It has improved dramatically over the years as has been attested to by our customers and our software metrics.

The interesting thing is that this poor quality code produced a consumer product that has several times been the #1 product in its category. So clearly code quality isn't what makes a program popular.

The user interface design in this example is also interesting in that it had and still has numerous cases where the design is inconsistent, cumbersome and visually unappealing.

What is it that makes one product more appealing than the other? In the case of "America's Got Talent" there are a lot more people who like country and western music than those that like opera. The same idea applies to "Chief Architect" and "HiQ". "Chief Architect" appeals to anyone who wants to design a house while "HiQ" appeals mainly to engineers who are trying to solve some hairy computational problem.

This ties in with my previous blog post "Know Your Market". If your goal is to make the most profit then the idea of working on things that are really useful should be extended to include the idea of working on things that appeal to a wide audience.

Monday, August 31, 2009

Know Your Market

A recent article by Tom Demarco "Software Engineering: An Idea Whose Time Has Come and Gone?" is a must read. It makes the "obvious" point that we should be working on projects that are really useful and not working on projects that are relatively useless.

Knowing the bottom line is implicit in the article, but often ignored by some really smart people. Most of the projects I've worked on over the years, even very successful ones, have rarely done the legwork to estimate how much money a project could potentially make, and more rarely still have looked at what the project could realistically make.

Instead the project is often done because the person funding it thinks it's a good idea or the person selling the idea to them is really good at selling the idea.

I like giving engineers the freedom to create and work on things that they are passionate about. This is generally a good idea because they work on things they like and generally if one person likes it many will. However, sometimes this isn't true.

So before you start your next project spend some time figuring out the bottom line. If it's marginal go on to another idea.

Saturday, August 29, 2009

Converting a Monolithic Application to DLLs

Over the last several months I have embarked on the rather difficult task of converting a monolithic application into several DLLs. This application has been in need of this for about 10 years now, but we kept putting it off because it was hard, and adding new features seemed to be the thing to do.

Plan from the beginning to use DLLs. It is easy to do, leads to organizational improvements, and will pay off many times over in time savings.

The hard part in all of this is determining what to move first. This is where being familiar with your code base can pay off. The first thing we did was to start moving some of the core items. As it turned out, I/O code and error handling came first for us. Then we were able to hit some of the core classes.

When working on this project I kept thinking that I would hit a point where things would be easier to move. However, it seems that there is a never ending supply of couplings caused by poor choices about where code was placed.

There are several design choices that make moving code to a DLL hard. One is when base classes rely on derived classes. Intuitively this seems like a generally bad idea, yet it is amazing how easily it can occur. Often the reasons for doing it don't seem horrible at the time, and there often aren't any short-term repercussions that make the problem obvious.

About all you can do to prevent this sort of thing is peer code review and developing better coding habits.

Another problem is collections of classes that are all stored in a single file. Adding a new file when you create a new class isn't difficult, but it does take a couple of minutes to do and it is awfully tempting to save those couple of minutes for that little class you are creating.

Yet another problem is methods that don't really belong in a class. It is a bit of a trap to think that all methods must belong to a class. There are cases where you have what I would call a bridging routine that deals with two or more classes but doesn't clearly belong in any of them. Often what I see is the method put into one of the classes, taking the other class or classes as arguments.
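One alternative is to make the bridging routine a free function that lives with neither class. A toy sketch, with classes invented purely for illustration:

```cpp
// Two classes that know nothing about each other.
struct Wall  { double area; };
struct Paint { double coveragePerCan; };

// The bridging routine as a free function in a utility header.
// Neither Wall's header nor Paint's header includes the other, so
// neither class drags the other along when it moves into a DLL.
double cansNeeded(const Wall& wall, const Paint& paint)
{
    return wall.area / paint.coveragePerCan;
}
```

Had cansNeeded been a member of Wall taking a Paint argument, Wall's header would forever depend on Paint's; as a free function, the coupling lives in one small utility file instead.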

This creates a coupling between the classes. It is very easy to end up with couplings like this that end up linking loosely related classes. In a social network it doesn't take very many hops until you find a group of people that you don't know. It is the same with classes. These loose couplings create a network of interdependency. Social networks also often lead back to you. The same can happen with classes.

When trying to move a set of classes that are coupled like this, it is very easy for a few loose couplings to spread out and pull in very large chunks of your application. A few cuts here and there is often all that is required to break a single class out; the more couplings there are, the harder the break-out becomes.

Obviously there need to be couplings between classes, otherwise you can't have an application. In observing these couplings it occurred to me that, just as in other networks, nodes naturally form among classes. Some classes have very few links and can be thought of as leaves: they are referred to by other classes but don't refer to any others.

I've observed two categories of node classes: dispatchers, which have methods that call into many other classes, and base classes, which have many other classes derived from them.

When moving to a DLL the base class nodes need to move early. The dispatch nodes end up needing to move late.

I've been thinking about this observation of node classes and have come to the conclusion that we should design node classes to do little more than be dispatchers to other classes. They shouldn't do any work beyond what is required to do the dispatching.

These node classes will form naturally as your classes form and start interacting. If you don't recognize the formation of one of these classes you can easily find that a node class has turned into a monolithic class that does too much. Since all roads tend to lead to these classes it is a natural reaction to put more and more stuff into them.

In object-oriented design there are guidelines, like the one that says a base class should be pure virtual. This idea is supported by the observation of node classes. Make your base classes as simple as possible, possibly without data or any methods. When you add methods, they should exist only to support the required communication between classes. If you need more functionality in a derived class, then create a new class that provides that functionality.

The second form of node class is the dispatcher. It is easily identified because its implementation will include many header files. These tend to be more problematic to move into a DLL, as they have many couplings. They are also problematic for other reasons.

The problem that I've observed is that these classes are also often classes that were designed to perform a specific task and they grew into dispatchers that pulled in a lot of other classes because of poor design choices.

One trick for dealing with these classes when you port is to create a pure virtual mix-in class that declares, as pure virtuals, the methods you need to access from the DLL. In the end, the set of methods that accumulates there can indicate a set of functions that should be pulled into a mix-in class defining the communication interface.
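A sketch of the mix-in trick, with invented names: code inside the DLL talks to the dispatcher only through a pure virtual interface, so the dispatcher's large header never crosses the DLL boundary.

```cpp
#include <cstdlib>
#include <string>

// Declared in the DLL's public headers: a pure virtual mix-in exposing
// only the methods the DLL needs from the application's dispatcher.
class IReporter
{
public:
    virtual ~IReporter() {}
    virtual void reportError(const std::string& message) = 0;
};

// Code inside the DLL depends only on the interface above, not on
// the dispatcher's own header.
int parseQuantity(const std::string& text, IReporter& reporter)
{
    if (text.empty()) {
        reporter.reportError("empty quantity");
        return 0;
    }
    return std::atoi(text.c_str());
}

// The monolithic dispatcher, outside the DLL, mixes the interface in.
class Application : public IReporter
{
public:
    Application() : errorCount(0) {}
    int errorCount;
    virtual void reportError(const std::string&) { ++errorCount; }
};
```

If several such interfaces accumulate the same cluster of methods, that cluster is a good candidate for a single, permanent communication interface.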

After working on this project for a while I'm wholly convinced that all applications should be built as libraries from the very start, with a minimal main executable to kick things off. Besides enforcing better modularity in the design, this also helps cut down build and link times, which can easily get out of hand.

We use a build tool called IncrediBuild to speed up the compile process. Without it we would be waiting hours to build the entire application. With it the build can complete in minutes. With good modularization you should be able to treat your DLLs like third party products that only get updated infrequently. The more of these you have the less you have to recompile when making changes.

Hopefully you find this information useful. Even with my many years of experience I was unprepared for the cost of moving things to a DLL. Hopefully you can prevent your projects from getting into this state, or can justify starting this work sooner. It will cost more and more the longer you put it off, so get started on it early.

Thursday, August 27, 2009

Bug Debt

I work on a product that has a backlog of logged bugs that is larger than the team can fix within a single release cycle. It got this way because of a number of factors. Regardless of the reasons we have what I call a bug debt.

Just like real debt, bug debt has an interest penalty. There is the cost of entering the bug, the cost of prioritizing the bug, and so on. Often there is also an associated support cost from customers who run across the bug. This can also result in lost income due to returns or failures to buy. In addition there can be the penalty of working around the bug in new code.

The point of this is bug debt is real. Given enough time you can create enough bug debt that you don't have time to work on new features. The interest penalty can take enough time out of your day that eventually you could find yourself working 100% of your time but still have a growing number of bugs.

To calculate your bug debt look at the number of bugs you have assigned to you. Then look at the number of bugs you fix on average for a typical period of time when you are writing new code. Use those numbers to calculate how many weeks it will take you to fix those bugs assuming no new bugs are reported. That is your debt.
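The calculation above is simple enough to write down. A sketch, with the rounding rule and the helper name being my own choices:

```cpp
// Weeks needed to clear the backlog, assuming no new bugs arrive:
// open bugs divided by the average fix rate, rounded up.
int bugDebtWeeks(int openBugs, int fixesPerWeek)
{
    if (fixesPerWeek <= 0)
        return -1;  // the debt is not being paid down at all
    return (openBugs + fixesPerWeek - 1) / fixesPerWeek;
}
```

For example, 35 open bugs at an average of 8 fixes per week is a debt of 5 weeks, already past a one-month ceiling.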

I like to keep my debt under 1 month. It is generally impossible to keep it at zero as bugs tend to come in at a fairly constant rate.

Once you have your debt calculated you can also look at the average number of bugs reported and figure out how long it will take to work off your debt.

In the long run keeping your bug debt low will allow you to be more productive and spend more time working on new projects.

Of course if you don't create the bug in the first place that is even better. But that is a subject for another time.

Bad code is an infection that spreads

I've worked on my current project for many years now. When I first started I was appalled at the quality of the code. But I jumped in thinking that I could clean things up and make it better.

While I have made it better over the years, I am disappointed that the code is still nowhere near the quality level that I would like.

My question is why, after nearly 10 years of working on this code base, I can't make more progress in cleaning it up. I know my team is capable of writing excellent code, and when I look at the new code they write it's a pleasure to work with.

However, when I look at code they and I write that interacts with bad code, I find that the bad code doesn't get better. It seems to grow and infect the new code around it. It's a lot like a fungus, like athlete's foot: if you don't entirely eliminate the bad code, it will come back.

We are often faced with the tradeoff of delivering a new product vs. cleaning up a bad design. If necessary, make the tradeoff of buttoning things up and delivering the product. But when you have to go back into the bad code to fix bugs or to extend functionality, clean it up first. Then fix your bug.

My nephew taught me that when building a house spend a lot of time getting the foundation spot on. If the foundation is off level by a half inch then you have to adapt the framing to correct for that. This domino effect runs all the way up the house.

So clean up that bad code before you extend the functionality or it will infect your new code.

Sunday, June 14, 2009

Vista Experience--Moving on to Windows 7

Since my last post about Vista I have come to realize that it actually is a pretty decent operating system. While I do have to say that MS screwed up big time in several key areas, for the most part they are moving in a pretty good direction.

What MS screwed up

Mainly, the biggest problem is that MS insisted on keeping old APIs around rather than deprecating them and removing them after a while. In doing this they have allowed applications to continue using outdated modes of operation that went away several OS versions ago. While deprecation would lead to more complaints in the short term, in the long run it allows the OS to shed complexity, and we all know that complexity is the biggest issue in software development.

The second thing that MS screwed up was not recognizing that their longstanding effort to keep old APIs around had led to a lot of software that wasn't ready for Vista. Indeed most of it wasn't ready for NT. In particular, much software didn't support limited access accounts. This caused one of Vista's biggest features, limited access, to be turned off on a regular basis. I would wager that at least 25% of all Vista boxes out there have it turned off so that one piece of software or another can run.

OpenGL support was another area that they messed up, but it isn't just Vista that is a problem. We have found that transitioning from XP to Vista is a trade-off in bugs: XP had certain bugs that Vista fixed, and Vista brought new ones to the table. Supporting both systems has led to certain compromises.

Finally, they screwed up because they didn't deal with the press coverage that popped up with faulty claims. For example, I've heard that Vista is a resource hog and is slow. The problem is that I've found it to be significantly faster in many cases, mostly because, with desktop composition turned on, it doesn't need to redraw windows repeatedly.

There are cases where it does seem to bog down from time to time, but XP did the same thing.

Vista is Good, Windows 7 is Better

I've more recently switched to using the Windows 7 beta and subsequently the RC. It is stable and has some nice additions to the interface. It is, in fact, becoming more Mac-like.

However, if you are waiting for Windows 7 to cure all the Vista problems, think again. While MS has addressed some of the compatibility issues with old software, there are still problems with UAC. I'm still having to run DevStudio with UAC turned off in order to use IncrediBuild, and there are certain debugging tasks that can't be done without turning it off system-wide.

If you can't make your development tools play well with your OS then what kind of a message are you sending developers?

Windows 7 looks like it will be well received and should help MS turn the tide on losing ground to the Mac. Although I hope not too much; I think that we need more parity in OS market share in order to improve the overall competition.

Here's hoping that Apple releases a nice response to Windows 7 that pushes things to the next level.