Wednesday, October 10, 2007

Why Developers Need Great Equipment


I wrote this as a justification to the managers at my company for why developers should have better computers. After a while, I figured it would make a decent blog entry. I have edited it only a bit, mostly because I was lazy. I left out the list of workstations, specifications and price guides that I submitted; go to Dell or HP and do your own research. I also left out my list of references. If you read my blog, you already know the authors I read, and I probably would not be able to attribute any of these ideas to the correct individuals even if I tried. None of these arguments are new; they have all been laid out before by authors with far superior linguistic skills to mine. Here it is anyway.

Cost of Development

The cost of development is both directly and indirectly affected by the quality of equipment that developers have, and both effects operate through developer productivity. Typically, the direct influences are easily measurable while the indirect influences are much less so. That said, it is probably the indirect influences that have the greater overall effect on the quality of work your developers produce.

The average developer’s workday consists of a change-compile-test cycle that repeats many times a day. Every fraction of a second shaved off any point in that cycle where the developer is waiting on the computer is a measurable gain. In most cases at the company I work for, we’re talking about very noticeable lengths of time. I sometimes give up waiting when switching from one document tab to another within the IDE because it just takes so long. This time is simply lost. It costs money and the opportunity to work on more features. It is very easy to measure just how much time is wasted if you want to get out a stopwatch and keep track.
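To make that stopwatch argument concrete, here is a back-of-the-envelope sketch. Every number in it is an illustrative assumption, not a measurement from my company:

```shell
# All figures below are made-up assumptions for illustration.
SECS_SAVED_PER_CYCLE=15   # seconds shaved off each change-compile-test cycle
CYCLES_PER_DAY=40         # cycles one developer runs per day
DEVELOPERS=10             # team size
WORK_DAYS=230             # working days per year

total_secs=$(( SECS_SAVED_PER_CYCLE * CYCLES_PER_DAY * DEVELOPERS * WORK_DAYS ))
echo "Seconds saved per year: $total_secs"
echo "Hours saved per year:   $(( total_secs / 3600 ))"
```

With those assumed numbers, fifteen seconds per cycle buys back 383 hours a year across a ten-person team, which is roughly ten working weeks: the kind of figure a $3000 workstation proposal can lean on.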

A less measurable influence is that when people are forced to wait for the computer to finish longer running tasks, they tend to start doing something else: reading a book, surfing the web, answering email. Those activities do not stop immediately when the computer is done compiling or testing. They stop when it makes sense to stop: at the end of a paragraph or chapter, at the end of the web page, when the email is complete. This is extra time not spent on the task of developing your product. Even if the secondary task is work related, the interruption is a break in flow which can take on average fifteen minutes to achieve. A developer not in flow is both less productive and less happy.

Cost of Developers

A less than happy developer is more expensive. Again, the effects are both measurable and not so measurable. Granted, better equipment is not the only influence on a developer’s happiness, but it is one of the cheaper and more powerful influences a company has control over.

On the measurable side, a developer who starts to be less happy may ask for more money, if you’re lucky, or leave your company outright. Either way, the cost is much higher than buying good equipment for each developer. A nice $3000 workstation with the cost spread over three years is less expensive than giving that developer even a modest $2000 bump to keep them happy. That $2000 increase amounts to $6000 over those same three years, plus a percentage raise each year, plus any extra increase when the developer figures out that the money really did not make them that much happier and they ask for more or quit anyway. And we all know that the cost of losing a developer you don't want to lose, and of replacing them, is much more than $3000.

One not so measurable effect is that a less than happy developer’s work, through no fault of their own, is just not as good. This can affect not only their code, but, directly and indirectly, the morale of the entire team.

Why should developers’ computers be so much more powerful than other machines?

A developer has many more applications running and documents open than workers in almost any other discipline. At my current place of employment a developer must have open, at bare minimum: one IDE instance (code editor, debugger and more), source control, a unit test utility, a database, a web server, and a web browser, preferably with its own debugger. On top of that, each developer will probably also have a few of each of the following running: local copies of the document processing engine, indexer, scheduler, database utilities, code compliance software, bug tracking software, requirement and design documents and document management software, multiple browsers to test JavaScript and CSS, and additional instances of the IDE when necessary. Add the day-to-day utilities of email, IM, Word, Excel, and the other Windows utilities that all employees need, plus other tools that I forgot, and you get a long list of programs that more often than not require significant processing resources to keep running smoothly.

The resources needed to make all of these programs run fast include multiple fast processors, to speed up the simultaneously running, computation-intensive tasks: compiling, file indexing and generation, and code testing. Memory is needed so that the computer does not have to move running software from fast RAM to noticeably slower virtual memory. Fast, and preferably multiple, hard drives are needed for any kind of data access: file generation, compilation, source control check-ins/outs, etc.

Monday, October 8, 2007

Experience and Abstraction

I had one of those light-bulb-over-the-head moments today. I was waiting for my computer to compile the changes I had made to our project so I could test them, so I picked up the book I am currently reading. Yes, it takes that long to compile... on the computer they gave me at work, at least. I've opted for a more technical manuscript this time, so the book is Pro WCF, and the topic I was reading covered the three approaches to setting up and using the new Windows Communication Foundation: attributes, configuration, and programmatic code. At one point, the authors were compelled to point out that the order of that list is also the order of precedence. Attributes are included in the code and are applied first. Any configuration defined in the configuration files is applied next. And any changes to the setup made in code are applied last. To which I replied, "Duh." And then the light bulb went on.
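To picture the layering the authors describe, here is a sketch; every name in it is invented for illustration, not from any real project. Suppose a hypothetical Demo.OrderService whose contract carries attribute-declared defaults. A configuration fragment like the one below layers on top of those defaults, and anything the hosting code sets on the ServiceHost before it is opened layers on top of both:

```xml
<!-- Hypothetical app.config fragment. Settings declared here override
     the defaults that attributes baked into the code, and are themselves
     overridden by anything the hosting code changes programmatically
     before the ServiceHost is opened. -->
<system.serviceModel>
  <services>
    <service name="Demo.OrderService">
      <endpoint address="http://localhost:8000/orders"
                binding="basicHttpBinding"
                contract="Demo.IOrderService" />
    </service>
  </services>
</system.serviceModel>
```

The practical upshot of the precedence order: an administrator can retarget the endpoint without a recompile, and the host application still gets the final word in code.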

Before I describe the futuristic fuel that powers said light bulb, there's some more background you need. I recently read an article by Joel Spolsky. The article talks about the different abstraction layers of software these days. When one of those bottom layers blows up, developers are forced to code around it. It helps a lot if the developer doing the work spent a few years working with that lower layer before it was conveniently paved over. That way, they don't need to go looking for the cause, they just know what the problem is and how to fix it.

I always understood this on an intellectual level, but I never really thought I had that much depth in my coding experience. In truth, I probably still don't. I've only been doing this programming thing for coming up on seven years now. But I got a taste of what it means to really have some experience. Oh, let's go ahead and put a capital E on that: Experience. I realized that not everyone working with .NET really understands what attributes are. Even those that do probably don't understand how they really work. And if they don't know how attributes work, they probably won't know how they relate to configuration values or to changes made programmatically in code. The authors of the book went out of their way to explain this because they realized not everyone has enough experience to know it.

So what fuels that light bulb? A couple of things, really. First, I learned that I have a bit more software development Experience than the average Joe out there. That's nice and comforting, even if it is a bit conceited. And it does help to get the job done faster once in a while. Second, I learned another reason why I need to keep my audience in mind all the time when writing. Fine, all you regular bloggers and technical writers who already knew that can rub it in my face... Go ahead... Are you done yet? That's too bad. I'm moving on. Maybe I already knew both of those things on some level or another, but just as knowing how the basics of .NET fit together makes coding WCF easier, I think I'll be able to communicate a bit more effectively after living through this little revelation of mine.

Tuesday, October 2, 2007

Best Practices: Does anyone here want to learn anymore?

I have been having a running argument with a couple of the other developers where I work. They believe that the best way to use source control is to keep the working code in the trunk. That's not the part we argue about. Then, the first time the code is pushed to a production environment, the code is branched. So far so good... well... sort of. The problem is that there is not enough isolation around the branch. It's OK to have only one production branch, but it should be isolated from the development branch by another level; some call this forward integration. Alternatively, the branch can exist on its own, with bug fixes made on the branch and merged back to the trunk; when a new version is ready, a new branch is created and pushed. Either of these approaches would rid us of the following trials and tribulations.
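For illustration, here is the fix-on-the-branch, merge-back-to-trunk model sketched with git commands; any system with cheap branching works the same way, and the repository and file names are made up:

```shell
set -e
repo=$(mktemp -d)                       # throwaway repository for the demo
cd "$repo"
git init -q
git config user.email dev@example.com   # identity needed for commits
git config user.name Dev
git checkout -qb trunk                  # name the mainline "trunk"

echo "v1 feature" > app.txt             # normal trunk development
git add app.txt
git commit -qm "Trunk development"

git branch release-1.0                  # first production push: cut a branch

echo "v2 feature" > feature2.txt        # trunk keeps moving independently
git add feature2.txt
git commit -qm "More trunk development"

git checkout -q release-1.0             # bug found in production:
echo "hotfix" >> app.txt                # fix it on the release branch...
git commit -qam "Fix production bug"

git checkout -q trunk                   # ...and merge the small, fresh fix
git merge -q -m "Merge fix from release-1.0" release-1.0
```

Each fix gets merged by the person who made it, while the change is fresh in their mind, instead of one person untangling six months of conflicts at deployment time.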

What actually happens is that deployments go something like this. One person resolves all of the conflicts from the past four to six months in order to merge the trunk directly into the production branch. It takes this person somewhere between two and four hours, when we're lucky, to hopefully resolve all those version conflicts accurately. I certainly hope he remembers all the changes, and their far-reaching effects, that I made six months ago, because I don't. On top of that, no one can check in any changes while this is going on. In my years of developing I have used five different source control systems and two of the four branching techniques that I am aware of. Not a single one of the utility manuals or forums ever suggested merging the trunk directly into the production branch as standard operating procedure, let alone as a best practice.

I try to explain all the subtle reasons why one of the best practices will work better. Why it is better to merge bugs as soon as the fixes are made (smaller pieces of work, merged by the person that made the change while the code is fresh in their mind). How this allows for faster, less error prone code pushes. The reasons why isolating the branches makes sure that people are not getting in each other's way. There are many more reasons why one of the documented techniques will work better in the long run, and all of those reasons are in the source control user manuals. "Don't listen to me talk about source control if you don't trust me. Just read the books and you'll find for yourselves all the good reasons to follow the best practices," says I.

It just seems like no one wants to understand, even on those rare occasions when a book is cracked or the web consulted. It seems like people just look at the examples without reading about why things work the way they do. Says one of these developers, "I saw this great piece of code on the web today that dynamically hunts through all the configuration files in a directory looking for a particular section. So I implemented it." Sigh... "Great," says I. "What you failed to read is that it probes through many directories looking for one particular file by name, not one section in all of the files. Now we cannot have multiple configurations in the same directory because only the configuration section from the first file will ever get read."

Maybe the problem is not that people don't want to understand. Maybe they have actually lost the ability to reason things out:
"Here's all these great reasons, let's change how we do stuff."
"Show me how that works."
"That will take two weeks. Let's just talk about it so I can convince you."
"If you can show me a better way, I'll consider it."
"OK I'll get started on that tomorrow."
"Two weeks? Are you crazy? We don't have time to do that! We don't even know if it will work!"
"Then let's sit down and I'll show you how it would work. It's not that hard to understand. Really."
"You better show me how that works first."

I kid you not. Why are people just not willing to sit down, talk, and truly grok a concept? I don't think anyone has lost that much brain power. It cannot be that reading is too difficult. Maybe they just got ground down. I know justifications with as much common sense as the preceding conversation come down from on high all the time. And to be fair, it does seem to be the more senior members of the team, the ones that have to interact with the powers that be most often, that are afflicted by this apathy. I bet they are being trained not to care because they are in a bad environment. Then again, it could be that age is setting in. The younger guys on the team really seem to be hungry for information; they go out of their way to know all those new concepts. Biblically, almost. If I wasn't such an optimist I'd think I was being deliberately stonewalled. Wait a moment!?!?

I love learning about technology, both through conversation and reading. Speaking has the advantage that, if the other person's message isn't clear the first time through, they can rephrase their point. Books just aren't good about rearranging the characters on the page when a new approach is needed. But I love learning new things from all sources. It's especially gratifying to have that "Aha!" moment when true understanding slides into place and I see just how much more productive my team and I could be. Unfortunately, of late that feeling turns rapidly to frustration when those I need to convince won't listen to reason or read the books.

I suppose there's always the chance that we just don't have time to make the changes I'd like to make right now. But if that's the case, just say so. I can learn that much easily. I realize that this is a cry for help, maybe in more ways than one. But I really want to understand what I need to do to engage those brains once again. Help me learn how to help them learn.

Monday, October 1, 2007

Hello October, Will You Design My App?

In a painfully ironic... ironically painful? Let's just go with "in one of those twists of fate called life that is both ironic and painful," I find myself coding a feature in our application for the second time. The reason is that the user interface does not match the new user interface, which is itself shortly going to be replaced by an even newer user interface. That's not the painful part yet, nor is it the ironic part. The painful part is that while I am working on moving all sorts of HTML around to support the new CSS organization (which actually is an improvement), one of the designers lets slip that the entire feature is going to be redesigned yet again after the new version, which is not even being worked on yet, is completed. Ah well, I can only tell them to do at least some design up front so many times. The ironic part about all of this is that I just started reading User Interface Design for Programmers.

The reason I bother, I suppose, is that someday I imagine myself working at a company where we gather requirements. Where management is smart enough to understand that prototyping an application in HTML or Illustrator, or even just on paper, lets you do usability testing a whole lot cheaper than writing all the application code first and putting the product in front of people. No one ever gets the UI right the first time. It just does not happen. Do you really want to pay an expensive developer to write two sets of code, or would it be better to pay an expensive designer to come up with a few refinements to some drawings?

And I've just been talking about usability problems so far. I won't even get into all the logical issues that we've run into due to designers not thinking about what happens when each and every button on the UI is clicked. Sure, there will always be holes in the spec. But, I prefer a spec with a few small holes to a bunch of vapor with just a few main ideas holding it all together.

Will this new project work in the end? Absolutely. We've got a bunch of great developers working on it and a bunch of really smart people supporting it. Will it work as well as it could? Definitely not. That might be a bit harsh; nothing is perfect, after all, and almost nothing is ever as good as it could be. But I look at all the wasted time. The time wasted rewriting the application again and again because no one did any usability testing with cheap prototypes. All the time wasted fixing logical errors and wrong decisions made because no one thought about how the permissions model should behave in a business environment where community features such as tagging and commenting would be so prevalent. My favorite excuse was "not being able to see that far out." All that time could have been spent completing more, and more usable, features that would have blown the users away. And I get frustrated that those of us who believe we as a company can do better don't seem to be able to push through new ideas, and it's not for lack of trying.

Another lesson that I recently learned, however, is that there is always a way to accomplish your goals. Don't let my little rant get you down too much. If at first you don't fall off your horse, learn how to pick up that bridge. Or something.