Monday, December 8, 2008

IT as a Profession

Should IT be considered a profession? I think most of us would like to say, "yes, it should." A lot of training and experience is required to design, create and run large systems efficiently with high levels of service. People cannot just walk in off the street and accomplish this. Even though vendors are making certain tasks easier to do, all that means is that workers are expected to manage complexity at a higher level. As an example, we don't *necessarily* need to know how to build a computer, but we do need to know how to spin up virtual machines on the fly and balance the load correctly, quickly and securely.

Should everyone that works in IT be considered a professional? Probably not, but where does one draw the line exactly? I can think of many ways to try and measure professionalism: customer satisfaction, specification fulfillment, information security, conduct, etc. In the end it probably needs to be a balance of all of these. Customer satisfaction on its own is not good enough because stakeholders may not know that you are not supposed to store credit card CVV/CVC numbers, but I would expect a professional to know that.

I think the reason we don't have such a definition at this point is that the IT industry changes so fast. In contrast, the fundamental knowledge required to successfully build a physical structure such as a bridge or a building, while by no means trivial, hasn't changed in a long time. Those rules also tend to have fewer layers of abstraction between them and the finished product. New building materials may change the way in which you meet the parameters, but a building must remain standing, and you can use physics to determine whether it will. But because new types of hardware and software are constantly being developed, even the basic requirement that an information system remain secure forces us as professionals to investigate new ways of implementing security rather frequently.

At times I am tempted to think that maybe the computer sciences just haven't been around as long as other professions. Maybe once we have enough systems existing out in the world, we will identify common requirements and basic rules that all IT professionals will need to follow. And then I remember that the only reason to write new software is to solve new problems. The new software is usually written on new platforms that were created for new hardware. It's hard to imagine finding hard physics-like rules that will continue to be valid for so many shifting purposes and layers.

Thursday, December 4, 2008

Sell Solutions To Your Own Problems

"Create tools to solve your own problems because other people are probably looking for a solution as well." Better words of advice about identifying new products have rarely been spoken. As is the theme of this site, I am stealing other people's wisdom and... well... merely passing it along.

I used to think that I never had any problems that needed solving. Professionally, either I thought my problems were so trivial that I just wrote the code, or I was solving a domain problem for my current employer, so I just wrote the code. There are so many tools out there for developers that there always seems to be a solution for any chore I come up against. In the not-so-professional parts of my life, problems just didn't seem big enough that anyone would need a better tool. I felt that I must be the only person who couldn't find a better way, or one would already exist. I'd either tough it out or go read some web comics.

The funny thing is, I kept running into an issue when trying to find new web comics to read. The problem was that for various reasons, I couldn't get Google to find new strips. The results I got were sites that have all sorts of cluttered lists that are hard to sort through and browse. And who knows if you have the same tastes as the people that created the lists.

Enter Is It Funny Today. While at first it looked like another one of those lists that has always disappointed me, the site is different. It is easy to read and allows users to vote for and comment on comics. It also has an excellent browsing feature that can show random comics which is the perfect way to sample what the internet has to offer in the way of humor. This problem may not be as important as the recent need to monitor the housing and banking industries, but maybe if those bankers and general contractors had been able to find just one more funny comic to read, they wouldn't have grown into such greedy people and screwed up our economy.

The point is that the Is It Funny Today guys said the exact same thing I did. "Finding webcomics is just so hard." But they had the presence of mind to do something about it. I feel like I got beat to the punch in a way. But that is not giving them enough credit because they were also the ones paying attention to their own problems in the first place. Maybe they didn't find a way to implement world peace, but I thought it was a darn good solution to an everyday problem us web comic fans have.

I guess the lesson I have learned here is that I have to think more critically when a task seems too difficult. Any time I say something along the lines of "I hate [some task]," or "this job is too [hard, tedious, etc.]," I need to try looking for a way to make that chore easier. And I need to remember that I am not so uniquely special OR especially unique that someone else isn't having the same issue.

Tuesday, December 2, 2008

Why Developers Should Write

Every developer who takes to blogging eventually has something to say on this topic. Come to think of it, the ground we tend to cover isn't that original, even if some of the insights and details are from time to time. Regardless, I figured it was my turn to offer what I hope will be some nugget of inspiration to someone. Chances are it will turn out to just be an anecdote about myself, allowing others to see some of my insecurities, but here it goes anyway.

The reason you always hear as to why developers should start writing is that it will improve communication skills. I always knew this was true, but I didn't understand how it happened. After writing this blog on and off for a year, I finally figured out what is probably the biggest problem in the way I communicate. And that is the most important step in improving just about any skill: identify an aspect small enough and well defined enough that you can do something about it. If you can't plan out a path to get better, your goal probably isn't well enough defined. Keep refining until you can come up with a solution.

And now it's anecdote time. I was always upset that I couldn't convince others of the merits of my designs as often as I would have liked. Sure, my concepts are not always the best, and even when they are, they are still not chosen from time to time. But I still felt that I could do better. That is the main reason why I started writing this blog. I just wanted to practice being convincing.

That goal was too big. I just didn't realize it. I kept writing, trying to determine if I was making any headway. I didn't feel that I was, but you can be the judge. There are just too many ways to be convincing. Sincerity, passion and reasoning can all affect how convincing you are. Being convincing is just too broad a concept to tackle.

I kept writing. It took me a year, but I just noticed a couple of things. First, I'm slow at self-improvement. That's a topic for another post. More germane to this piece is that I realized that when I am writing or speaking about topics I am excited or passionate about, I tend to jump around and try to cover twenty points all at the same time. Yes, they are all important ideas that have bearing on the discussion. But unless my audience was sitting over my shoulder for weeks on end sharing all of my experiences, they were likely getting left behind or just outright confused.

Now that I have a well defined deficiency I can do something about it. (I have many deficiencies I'm sure, I just haven't tried to define many of them.) Armed with this little piece of knowledge I have thought of several ways to improve my writing. I will try to focus more on the point that I am making. I can bite off smaller topics to talk about. I don't need to cram all twenty ideas into a single post or conversation. If I absolutely need to write that much, I need to be more aware of the organization of my thoughts. These are just some of the several guidelines for writing you learn throughout school. My problem was that without a significant amount of my own writing to study, I didn't know which out of all those best practices I was ignoring the most.

So the story is just the long way of illustrating my initial advice. If you are trying to improve your communication skills, you need to identify an issue and come up with a solution. Coming up with solutions to well defined problems is the easy part. If you are having a hard time with the solution, you probably haven't defined the problem well enough. It turns out this is the hard part. You just need to keep writing and narrow down your scope until you can easily think of solutions.

Getting Developers Interested in Security

I find it amazing in this day and age that there are still so many common software security issues released into the wild. I'm not even talking about flaws in OS code or database or other server platforms. I'm only focusing on the common, run-of-the-mill issues in the software written day in and day out.

There is little excuse for SQL injection to work anymore. Parameterize your queries, people. It only takes a couple of seconds longer if you have any competency typing.
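As a minimal sketch of what parameterizing means, here is the pattern in Python with the standard sqlite3 module; the table and column names are invented for illustration, but the "?" placeholder idea carries over to any database API with parameter binding.

```python
# Illustrative only: a made-up "users" table demonstrating why bound
# parameters defeat SQL injection where string concatenation does not.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "s3cret"))

# Attacker-controlled input that would break a concatenated query.
user_input = "alice' OR '1'='1"

# Vulnerable version (don't do this): the input becomes part of the SQL.
#   query = "SELECT * FROM users WHERE name = '" + user_input + "'"

# Safe version: the driver binds the value separately from the SQL text,
# so the quote characters are treated as data, not as SQL syntax.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- no user is literally named "alice' OR '1'='1"
```

The concatenated version would have matched every row; the bound version matches none, because the whole malicious string is compared as a single name value.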

Cross-site scripting attacks should be a thing of the past. At the very least, HTML-encode all the content that you receive from users before showing it on a page. It's just an extra function call here and there.
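To show how little that extra function call costs, here is the encoding step in Python using the standard library's html.escape; real web frameworks usually do this through their templating engines, but the principle is identical.

```python
# Escape user-supplied content before rendering it into an HTML page so
# that markup characters are displayed as text instead of executed.
import html

user_comment = '<script>alert("xss")</script>'
safe = html.escape(user_comment)
print(safe)
# &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

The browser renders the escaped string as the literal text the user typed; it never sees a live script tag.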

Even cross-site request forgery, though you don't hear much about it, is very dangerous. Yet it has a simple solution: double-submit a unique value in a cookie and a form field with every post you make.
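A minimal sketch of that double-submit pattern, in Python for illustration: the server issues a random token both as a cookie and as a hidden form field, and rejects any POST where the two don't match. The function and parameter names here are invented, not from any particular framework.

```python
# Double-submit CSRF check: a forged cross-site request can make the
# browser send the cookie, but it cannot read the cookie to copy the
# token into the form field, so the two values won't match.
import hmac
import secrets

def issue_token() -> str:
    """Generate the random token to set as a cookie AND a hidden field."""
    return secrets.token_urlsafe(32)

def is_valid_post(cookie_token: str, form_token: str) -> bool:
    """Accept the POST only if the cookie and form field tokens match."""
    # compare_digest avoids leaking information through timing differences.
    return bool(cookie_token) and hmac.compare_digest(cookie_token, form_token)

token = issue_token()
assert is_valid_post(token, token)       # legitimate submission from our page
assert not is_valid_post(token, "")      # forged request has no form token
```

Because the token is random per session (or per request), an attacker's page has no way to fill in the matching form value.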

The list goes on, but many developers don't take the time to make these small habitual changes in the way they code. And it's these small changes that would prevent most of the security issues in today's internet applications. Why do these vulnerabilities keep making it into production software?

One reason is that security is not usually seen as a glamorous part of the application. Yes, it's vitally important. But few others in most companies pay it much attention. They expect it, but they don't come back with stories of customers raving about security.

Another reason is that at companies writing products, most of the security work goes in (or should go in) up front. Once it is done, there is not much else to do when compared with adding new features. I'm not saying the work can stop. Good security is an ongoing task. But like any interest, if time is only sporadically allocated, becoming an expert is difficult and the interest will fade.

Furthermore, security is rarely of constant interest to the managers and directors of companies. Again, they expect it, but they can forget that it takes time to secure software: time to learn and time to act on the information. If they will not make the time in their never-ending road maps and milestones of new features, the developers will follow their lead and treat it as only an afterthought.

The task of getting other developers to take enough of an interest to change their habits can be challenging though.

The first step that I see is to get the directors and managers interested. Without their support and attention, security is just another nice-to-have. Take the time to have your software audited by a third party and spend the resources fixing the issues. Create a position on your team that has the authority and resources to address these issues. Educate entire teams on vulnerabilities so that designers design correctly, QA tests for known problems and developers write code to specification in order to pass the tests.

That's fine within an individual company. The next step is to get developers at large to take an interest. I don't even know how to go about that. Everyone would need to help. Software security might be a good required course in college. Bloggers need to keep on blogging to raise awareness. Make security a selling point on web sites and sales catalogs. When hiring new employees, insist that they know about basic issues and the appropriate solutions. Provide material when new employees enter the company that helps indoctrinate them into a culture that takes security seriously.

In the end, getting developers to write more secure code is not just a job requirement for programmers; it's a wider change in mindset that needs to happen. It's not only the coders' responsibility to see that applications are written securely. It's the right of clients to demand better software. It's the duty of directors and managers to allocate appropriate resources. And effort must be made by all employees to understand the issues and help make software more secure in whatever way they can: plan, design, write, test and buy with security in mind.

Monday, December 1, 2008

Predictions of Microsoft's Demise

I'm tired of reading blog posts by people who don't do their research. It seems like anyone who talks about Microsoft's downfall, or the coming obsolescence of a piece of Microsoft software due to competition, falls into one or both of the following patterns.

First, they compare the up-and-coming software of their favorite Microsoft competitor to the last piece of Microsoft software that they are familiar with. Most often this is a past release. Of course an old piece of software won't stack up against something new and shiny. Try picking on the current crop.

Second, even when they do pick the comparable piece of Microsoft software to talk about, they forget that Microsoft writes much of their software to work for both home users and enterprises. There are features and integration points that the average user just doesn't see.

If you're going to do product comparisons, please do a fair amount of research on all of the products you are talking about. Don't skimp or provide misleading information on Microsoft just because it's not your choice. Unless you are a shill in which case it's OK because that's your job.

Now, I will be the first person to admit that there are plenty of other options besides Microsoft software out there. And in many situations, other solutions will even provide a better value.

For instance, the movement of home, school and even small business users towards Open Office makes a lot of sense under the right circumstances. Despite that, Open Office has a ways to go before its feature set makes it a viable replacement for enterprises using the more advanced capabilities of Microsoft Office. Open Office is not going to kill Microsoft Office any time soon.

People have been predicting the demise of Microsoft or its products at the hands of competitors for a while now. At this point it all sounds like Nostradamus and his prognostications. Sure, people kind of sort of get close to guessing correctly once in a while. They should, if they make countless vague predictions on different topics nonstop.

But even if the pundits get one right now and then, Microsoft can afford the occasional mistake. That's not a luxury many companies have. Microsoft has proven that it can recover from large blunders even in its core market. Windows ME was considered quite the unsuccessful stab at an operating system, but it was followed up by XP, which is generally thought of as a decent platform.

Wednesday, November 26, 2008


All I really have to say about merchandising in general is that I don't know anything.  What I do know is that Milyli Inc now has a product line consisting of our logo on great t-shirts.  

Are you tired of guessing what other people want?

Not exactly sure what Jesus would do?

Tuesday, November 18, 2008

Freeware During a Recession

Fine, we may not actually be in a recession yet, but I don't think anyone will begrudge me for saying that the economy isn't as strong as it used to be. Because of that, many companies may start taking a harder look at using freeware to solve their business problems. I've already gone on and on about how using freeware doesn't necessarily save a company money in the long run, so I'll try not to revisit that topic... much. But how is the economy going to affect the open source movement, and what will the ongoing fallout be?

First off, let me say that personally, I don't think the recession should play into the decision of using free software. You may get the code for free, but you still need people to set it up, integrate it, support it and train your employees to use it. When all is said and done, I have yet to see any conclusive study that shows using free software saves a company money in the long run just because it is free. (Though I would love to see such a study should one exist.) That should be all for the rehashing. 

With the economy being what it is, though, maybe the accounts payable column of the budget will be all that matters, and the cost of the software will be what swings the decisions. I for one would hate to run or work for a company where the right tools for the job "cost too much," causing employees aggravation and lower productivity. However, some productivity gain is better than none, and if the money really isn't there, maybe freeware will end up being the path companies take.

Let's take a look at the development side of open source. Many of the best open source initiatives are sponsored by large companies. Will those companies be able to continue paying developers to work full or even part time on software that brings in no direct revenue? If developers get laid off, will they be able to support themselves on jobs that pay them less money, if they can find them? Will they need to work longer hours making money instead of spending time on freeware initiatives?

The truth is that the open source movement was started during times of abundance. I wish I were one of the lucky software programmers who, thanks to stock options lining their savings and retirement accounts, could take a couple of years off with little to no pay to work for the benefit of people who want free software. Yes, I am jealous and maybe a little bitter. But jealousy doesn't pay the bills, and neither do positive intentions. They also don't affect my ability to write good code. I just need to get paid for it.

If there are more developers out there in my situation than there are those that are financially independent and can code for only the warm, fuzzy feeling of living up to a higher ideal, the open source movement will stall. Even in the worst-case scenario, I'm not saying that it will die out. But with less time devoted to them, those projects will not be able to maintain all the ground that they have gained in terms of quality and features.

This brings us back to the companies that bought into free software. 'Bought free software?' Moving on... The software that they got for free will not come in as cheap if it takes longer to get bug fixes or those fixes need to be made by new in-house developers. If the feature set of that software falls behind the software that competitors are using, lacking the advancements that the paid software later incorporates will hurt their competitiveness. Recovering from such situations will be even more costly because, once again, companies will need to install and learn new tools if they need to switch back.

To me, it looks like the downturn in the economy is not going to help the open source movement in the long run. In general, if companies start using free software solely in response to a recession in order to cut costs, I think they will only be postponing larger costs until later. Maybe that's what it will take to survive for now, but it will probably end up costing more.

Thursday, November 13, 2008

Content Control vs Content Presenter

I ran into a problem with how some of my controls were displayed in Silverlight recently. I had created my own extender panel back in Silverlight 2 Beta 2. It turns out it was much the same as the one that has now been released in the Silverlight Control Toolkit. But when Silverlight 2 was finally released recently, my control went from looking like this...

... to looking like this ...

Basically, my content area shrank to nothing. Yes, something changed between the beta and the release, but complaining about it wasn't going to solve my problem so I started trying to figure out how to bring my code back in line.

Fortunately, the control toolkit was also released at the same time. I took a look at the Expander control and realized that I wasn't displaying my content the best way. There seem to be two ways to show content in a content control: use another content control or use a content presenter. The latter is a much better way and the changes are minimal.

To fix the problem I changed my XAML in generic.xaml from this...

<ContentControl Content="{TemplateBinding Content}"
                VerticalAlignment="Stretch" />

... to this...

<ContentPresenter Content="{TemplateBinding Content}" />

And all was once again well. There are many other values I could bind to here, including the alignment values, but that was not the focus of this post so I simplified it. The point is that when creating your own content control, you want to use a ContentPresenter to display the content. Chances are very good that the ContentPresenter is the recommended approach. But that doesn't stop many people from giving examples using a nested ContentControl, and I just happened to find one of those examples first.
This is becoming a popular post, so I decided to revisit it. Memory told me that it wasn't well written. After a re-read, however, I realized the only thing I can really do is explain the issue a different way.
When creating a custom control whose contents you want other developers to be able to determine, it is easiest to extend ContentControl. ContentPresenter is what is used to display the Content DependencyProperty within the content control's template. It is possible to achieve the same effect without extending ContentControl, but then you need to explicitly define your own Content DependencyProperty.
Also, if you have multiple content areas within your control (e.g., a title area and data area), you can either use a ContentControl and define the second DependencyProperty or you can extend Control and define both Content properties yourself (e.g., TitleContent and DataContent).
I hope this edit makes things more clear.

Friday, November 7, 2008

Keeping the Company Busy

A new experience that I have had while building this little company now known as Milyli is that there is always something to be done. In the five months or so since my partners and I started this venture, there has never been a time where I did not have a list of things to get done. Dealing with a long list of tasks is not hard. All you need to do is break tasks down, prioritize them and start knocking them down.

If a task seems like it is too large to prioritize because it will take too long, you should probably break it down first. These tasks are the types of ongoing chores that never seem to end: design and write the code for the summer release, be a part of your user community to build customer awareness and credibility. If these tasks have deadlines far out in the future, they can be broken down into more discrete units and prioritized with everything else. Design the whizbang feature on Friday, code the bell feature by Tuesday, write that new blog post between 9 and 10 tomorrow. If you are unable to prioritize large tasks, they probably just haven't been broken down and planned out enough.

Prioritizing is usually fairly easy. There are the things that need to be done at a certain time: go to the bank, go to the client meeting, deliver and deploy the project. These are items you put on your calendar, set a reminder and go do them at the correct time. Sure there will be times when you need to be two places at once, but that's what great partners and employees are for. If you don't trust them to do their jobs, you probably shouldn't have gone into business with them or hired them.

You also have the sort of tasks you need to do in order to successfully complete those time-sensitive tasks: get your business documentation together for your banker, put together an agenda for your client meeting and answer any outstanding questions from the last meeting, package up all the components you need to deploy and make sure your scripts are in working order. It's important to browse your calendar a few weeks out to make sure you allocate time to get these things done.

You don't need to tell me that this is an oversimplification. There are all sorts of conflicts and hard decisions that need to be made. But I find the trick has been to get all the information you can or need and then make a decision. After that, spend the time getting the work done instead of prolonging the deliberations. Get informed, make a decision, and act. Hm, maybe it really is simple.

One of the nice things about keeping busy is that there is a real sense of accomplishment. Some tasks get marked as done in a physical system and some only get scratched off a mental list, but all of them are valuable. Another benefit is that, because work is getting done, less of it piles up. Sure there is always something that needs to get done; your to-do list will never be empty. But if you manage your time well, you will finish everything by the time you need to and also have time for all the other things you like to do besides work.

And that is where the real change for me is. I have both a greater sense of accomplishment and a better work-life balance than I did at my last job. I think the problem there was that the developers were either not kept busy or overloaded. We could have been implementing more features, or we could have been tackling better, more intricate features, and we would have been kept busy. Those peaks and troughs, whether a lack in the quality or quantity of designs or outright overloading, could have been mostly evened out by making sure all the work that came before development was correctly prioritized and appropriate deadlines were enforced.

I guess that's a task I will need to tackle. Figure out how to keep employees at my company busy without overloading them. I don't exactly know how to do that just yet. I figure that when it becomes an issue and when others are entrusted to do the work that I am doing now, I will make the time to break that task down, set up some defined goals and keep myself busy making sure others are as fulfilled at the company as I am.

Thursday, November 6, 2008

Silverlight 2 on Vista Problem

I'm not writing this blog post to recreate the solution to the problem I had.  If that's what you're looking for, here's the link.  I just wrote this to document another situation where this problem has arisen and help promote the solution to make it easier to find.

I ran into this snag when updating my Vista box to the latest version of the Silverlight tools.  The Visual Studio and Silverlight updaters worked fine for the most part.  There were no problems running those tools or opening Visual Studio after running them.

What I did run into was a problem that a few developers have seen in other circumstances.  When trying to open a WCF Services project that was part of my solution, Visual Studio showed me the error:
Unable to read the project file "".  The Web Application Project xxx is configured to use IIS.  To access local IIS Web sites, you must install the following Windows components:
Internet Information Services
IIS 6 Metabase and IIS 6 Configuration Compatibility
In addition, you must run Visual Studio in the context of an administrator account.
The difference between my experience and others' was that I already had these features installed and I was running in administrator mode. When I tried uninstalling or adding features, I always received an error from the Windows feature installer.
An error has occurred.  Not all of the features were successfully changed.  
After a good deal of searching I found a solution that fixed everything for me.  It was a bit difficult to find because most of the answers only dealt with the first error.  Rupak Ganguly, thanks for the full scoop.

Tuesday, October 28, 2008

Does Open Source Save You Money?

I ran across an interesting question a while back on LinkedIn. The author addresses the age-old... well... at least ten-year-old question: does using open source software save you money? Granted, the question is not phrased as such. "Are enough IT departments exploring open source as a potential way to reduce costs," is the way he put it. Reading further into the question, I find it interesting that the person treats it as a foregone conclusion that using open source software saves a company money. It is also interesting to note the tactic of reminding everyone that times are tough.

First off, let me just say that there are definitely times to use freeware. From what I have seen, the best places to use freeware are where the fewest people need to change how they do their jobs. You probably can save money on your IT budget by going with a mail service provider based on freeware instead of Exchange. As long as it all ties into your other infrastructure, who cares? You're buying the service in this case, not the software. As long as those services solve your problems to the same extent, go with the cheaper one. The same goes if you are introducing a new system of a type few of your employees have used before. Again, if the features solve your business problems better than other solutions, the learning curve is really not an issue.

However, there are many myths about open source software. It is more than arguable that using open source software does not necessarily save you money in the long run. For that matter, open source software does not necessarily save you money on licensing costs. Fine, I'm picking nits on this one, but open source does not mean the software costs no money to obtain. Open source means that the source code is readable and distributed with the software. You may do whatever you want with the source code within the limits of the license under which you obtained it. Open source products can cost money.

Fine, that's not what many people mean when they say open source. I understand that a good number of open source applications can be obtained free of charge under licenses such as the GNU GPL and the GNU LGPL, to name a couple. In these cases, the software really does cost less to obtain. Today, at face value, off the shelf, without any bulk discounts, the most expensive version of Microsoft Office costs $680 per license. Ouch. That alone would scare many people into feeling like they should start looking for a zero-dollar solution.

So they download some freeware, spend some time installing it and turn it loose on their employees.  Depending on the size of your company, creating new images and installing the software on existing computers will start costing you the money you would have spent on licenses.  How many hours will it take for the workers to get up to speed?  How much do those employees make per hour?  Even after they learn the new software over a few days (at best), people will still lose time every day for quite a while until they grok the ins and outs of the new software the same as the old.  Not only that, but the long-term, small frustrations will add up even if only subconsciously and can affect overall productivity. 

What do you do when software breaks? If your paid license includes some level of support, and most do, you can just call the experts at the company that wrote your software. If not, well... Sure, there are people who support open source solutions. Those services are not free. Hard to believe, I know. Such services really do exist, and some are really good at what they do; they are just not as abundant, and it can be hard to separate the wheat from the chaff. This means you spend more time finding and evaluating them. If you are afraid of losing them because they were so hard to find, you may also end up paying them more. And that cost is usually an ongoing fee or salary instead of the one-time cost of a license.

Why aren't there that many people out there to support your open source package?  Part of it is just inertia.  Not as many people know it, so not as many people use it, so not as many people learn it, etc., etc.  Maybe the world will get past this someday, but it is powerful.  Another reason though is that open source projects are notoriously prone to forking.  People can only specialize in so many pieces of software.

Let's take a look at that forking issue from another angle.  How sure are you that the next versions of all of your open source applications, the ones that finally have those much needed features, are going to work with the next version of your operating system when there are so many application-OS permutations that need to be tested?  I still don't entirely trust that any of my software will work on any given future version of the Apple operating system and only one company was ever working on each of those.  I'll stick with the relatively few companies whose software will most likely work on the next operating system they put out until they let me down, thanks.

When it comes down to it, open source products really only have one guaranteed strategic business advantage over proprietary software.  If they don't work for you, you can make them work.  You have all of the code and you can make them do whatever you want so they will play nice with all of your other software.  If your business needs this sort of flexibility and has the resources (notice the concept of total cost rearing its head again) open source is the way to go.

In general, I think it is important to be honest about all of the costs and pain points you are addressing when considering a software package.  If open source really addresses the most issues, by all means, use it and be happy with it.  But if you are just trying to get some freeware to save money up front, remember that all those other little pains that you leave unsolved for your company and your individual employees add up a lot faster than the price of software.

Built On More Technology

I have seen comments by other developers claiming ASP.NET is not a good technology because, "It abstracts reality away from you and produces troves of developers who don't understand the basics of a simple form post."  It cannot be denied that ASP.NET abstracts away some aspects of the technologies it is built on.  It is also true that many developers who begin their careers by learning ASP.NET might never fully understand the technologies that it is built on top of.  But I would not say that the ASP.NET technology "sucks" because of this.

Building easier to use abstractions on top of older systems is more or less how computers and software have improved in their short history so far.  Third generation languages are built on top of assembly languages, and those are built on top of machine code.  The machine code runs on computers made of so many frequently changing components that many developers don't know how to put a computer together.  Even people that build their own computers would be hard pressed to fully understand the inner workings of each of those components, let alone be able to manufacture or even just modify one of them.  Or take a look at the web and how popular the JavaScript libraries are that abstract away the differences between the various web browsers.

Hiding the lower levels behind abstractions makes writing software accessible to more people.  When more people are able to use technologies, they are able to solve larger problems for less.  Isn't that the purpose of technology?  Isn't this how progress is made?  I'm sorry that the old skills are not in as much demand, but that is the nature of computers.  I was told many times in school that as a developer, it would be necessary to constantly be learning in order to keep up with the systems I would be building software on.  You either need to take that to heart or relegate yourself to being a master of older and less used technologies as time goes on.

Even though developing is easier for more people, knowledge of the lower level systems is by no means useless.  No abstraction is perfect, and when one breaks, some of the gory details of the lower levels are exposed for all to see.  Having knowledge of those lower levels allows a developer to fix those problems or work around them when they occur.  People that haven't worked with the lower level technologies will need to scour the internet for the information needed to understand what is happening and then develop a solution.  In the end that is simply the difference between an experienced developer and a junior developer.  That is the reason why experienced developers are paid more money.

To the experienced developers that say that this situation "sucks" I say that it is part of your job to enlighten the less knowledgeable.  To those that don't  see the need to learn new technologies I would say that you don't need to as long as you don't mind being pigeonholed into technologies that will be used less and less as time goes on.  Even though abstractions can leak for a while, the holes tend to get filled eventually and you will probably need to learn something new.  

Earlier I stated that for the most part, computers have largely improved by building new technologies on top of old.  When progress is instead made by recreating those systems from the ground up, developers will have even more skills that they will need to relearn.  To me these complaints seem to come from people who prefer the status quo for fear of having to learn new tools as technology changes.

Friday, October 24, 2008

Cloud Security

The biggest legitimate concerns I can think of for using applications hosted outside of the corporate infrastructure are integration, privacy and security.  Integration points might not be there yet, but they probably will be.  Getting locked in to any one service is not a great selling point.  As far as I can tell, privacy is mostly an issue that will take time for our legal systems to catch up with, if they ever do.  But the one that I just had an interesting thought about is security.

In general, a hosted service should probably be able to handle security better than an individual company.  They hold the data for all of their customers, and their data centers will end up being huge, storage-wise anyway.  They need to understand security and spend the resources on it to make sure it is up to the task.  But even large companies these days that know they need to have secure systems fail at this from time to time.  I tend to just start forgetting about the last 130,000 social security numbers that were leaked when news breaks that a shipment of untold numbers of credit card records has disappeared.

I was reflecting on how one reason there are so many known security vulnerabilities in Microsoft products is that that is where people look for them; there are simply more computers on which to take advantage of those flaws.  If more people used Macs, the world would be trying to break into OS X.  And don't kid yourself, viruses do exist for the Mac.  The Mac may really be more secure, making it harder to find the flaws.  But as more people start using them, the viruses and other attacks will follow.  It reminds me of the Willie Sutton (mis)quote, "...because that's where the money is."

I also had been reading a bit about cloud computing and I thought to myself, "Boy, won't all that centralized data be a tempting target."  Don't get me wrong, I know that such services will have far fewer security vulnerabilities than the average business network.  But it only takes one flaw in your system, found by one person of the many that will likely be heavily scrutinizing your network, to bring it down.

But I'm not saying that I feel this is a significant enough concern right now to keep me from using such services.  The flip side of that logic is that small networks won't get hacked as often simply because they will not be the focus of much attention.  To my nose, that just reeks of the security through obscurity principle.  It only takes one flaw out of the many your system might have to be found by the single person that happens to take a passing interest in your network to bring it down.

Most modern security practices make it unlikely that the whole system of a company so heavily invested in data and security would be disrupted, damaged or compromised all at once.  But every now and then a SQL Slammer is created that can affect computers across a system even as large as the internet.  Are the ingredients for such a fiasco likely to be present at the same time?  Not at all.  But let's just say that, if those circumstances should arise, the fallout of the first data service to get royally hacked could be spectacular. 

It was just a thought I had.

Thursday, October 23, 2008

Genuine Advantage

If many hundreds of thousands of other people had not already done so, I would be more than willing to be the first person to admit that the advantage in Microsoft's Genuine Advantage program is almost entirely Microsoft's.  Oh, Microsoft can try to spin the moniker in such a way that there is a huge advantage to each and every user knowing that their own copy of Windows is really, really licensed by Microsoft.  But even I know that the real advantage of the program is that Microsoft gets to try and collect more of the license fees that they are owed.

I will also admit that the best approach to take may not be to temporarily disable a user's computer as a result of a first time failure when the genuine advantage tool is run on a computer.  Effectively shutting down the OS in such cases does not build the best relationships with customers.  However, it is a right that they have.  If you use Windows on your computer, you have an agreement with Microsoft that you will abide by the licensing terms that come with their software.  If they feel the best results in preventing pirated copies of Windows are gained by blanking the screen of computers running illegal copies, it is their prerogative.  I can't say I necessarily agree with the business logic, but I would guess they have spent more time thinking about it than I have.

Recently, strong emotions are rising again in response to the latest changes in how the Genuine Advantage software enforces licensing.  I just don't happen to agree with most of the people voicing their opinions.  "Why is Microsoft automatically connected to my computer?  The computer is mine!"  When it comes down to it, Microsoft is not automatically connected to your computer.  You made the decision to buy a computer with Microsoft software on it.  Microsoft would not be connected at all if you bought an Apple or installed any one of the flavors of 'nix.  When you use Microsoft software, you are subject to the terms of their licenses, and one of the terms of using their software updater is that you must have a legitimate copy of Windows and run Genuine Advantage to verify it.

"Microsoft has no right to control my hardware without my agreement."  Um, now that you mention it, that is the exact purpose of an operating system and software, to make the hardware do useful things.  You agreed to let the software control the hardware when you bought the computer or installed the operating system.  You don't seem to mind that Microsoft is controlling your hardware when your business tools, internet applications and games are all working.  That may be a bit flip, but if you don't take the time to verify you are running a legitimate copy of Windows, why should that software perform its job for you?  Countless pieces of software in the world shut themselves down if they are not bought in a certain amount of time.  What gives users the right to expect anything different from Windows?

"If the price of genuine software was lower than the fake one, who would buy the fake one?"  The total cost of copying an operating system, even one that needs to be modified in order to work without a license, is such a small fraction of the cost of creating the operating system in the first place that, to match the monetary price of a fake, the OS would need to be given away for free.  Some organizations do just that.  If you want a free operating system, feel free to download any one of the many that are available.  Unfortunately there are many reasons why Microsoft cannot give away their operating system at the moment.

"If, when I am programming, the computer screen goes black, that will probably cause some important information to be lost.  Who will pay me for my loss then?"  I had a few different responses to this one.  As a professional developer writing my own software, I feel this person can't be a very good programmer if he loses a lot of work when his computer stops.  He should save early and often, use source control, make backups, have a disaster recovery plan, etc., etc.  Microsoft should be the least of your worries when it comes to losing work.  On a less critical note, the speaker may be expecting the software to issue a warning, give the user a chance to save and then slowly start reducing functionality.  This goes back to the argument that blanking the screen might not be the best approach for dealing with users that came by their unlicensed version of Windows unwittingly.  Taken yet another way, the statement could possibly be the most hypocritical I have seen on this topic: a programmer who expects to be paid for his time and work complaining that another software company cannot take measures to ensure that it receives what is due.

There are even some lawyers that want to get something out of this.  A surprise, I know.  I also know that not all lawyers are greedy and evil, but hey, it's the stereotype.  "[Microsoft is the] biggest hacker in China with its intrusion into users' computer systems without their agreement or any judicial authority...  Microsoft's measure will cause serious functional damage to users' computers and, according to China's criminal law, the company can stand accused of breaching and hacking into computer systems."  Microsoft is not hacking into your computer.  You installed the software or bought the computer of your own free will.  The software was already there, it did not need to be hacked.  Furthermore, Microsoft is not damaging the computer in any way whatsoever, they are just making Windows show a black screen.  You can buy a legitimate copy of Windows and install it or you can uninstall Windows and install another operating system and your computer will continue to work fine.

Many better informed individuals do understand that Microsoft has the right to protect its intellectual property, but they feel that such tactics can harm users who turn out to be the victims of less scrupulous resellers offering fakes disguised as originals.  To this they say that Microsoft should be going after the distributors.  And while I agree, it seems naive to expect Microsoft to be able to do this without the end users' help.  And evidently users are not helping, or else the problem would not be so rampant that Microsoft feels the need to resort to such tactics in parts of the world.

On a day when my belief in the general good nature of people is at a low, I would say that all of these arguments are flimsy justifications from people that deep down are just trying to get something for nothing.  Like it or not, Microsoft is in the operating system business and so they charge money for their product.  There are many discussions that can be had about whether or not paying for operating systems at all is a concept past its time.  But for now, if you want to gain the benefits of using the operating system that has the largest ecosystem of supported hardware, software publishers and users; legally obtaining the licenses to do so requires that you pay for them.

On a good day, I see these reactions coming from people that are ignorant of how computers work, ignorant of where their software came from, or ignorant of how to fix the problem.  But even on a good day, I sense these people don't seem to care to become knowledgeable.  Microsoft does try to inform people in less invasive ways that give you a chance to fix the problem before the operating system is rendered useless.  When an activation key is mistyped, Windows shows links to information.  A simple Google search quickly lands you on the Genuine Advantage website.  The Genuine Advantage tool gives more information before it even runs.  All of these places contain information with straightforward instructions on how to test your OS and what to do if that test fails.  Despite all of this information, people would rather blame Microsoft than take responsibility for their lack of information, their choice in software or their choice in computer vendors.

Thanks to Reuters for the original story.

Saturday, October 18, 2008

Wisdom of the Crowds in the Enterprise

People have been talking a lot lately about Enterprise 2.0. Heck, people have been talking about Enterprise 3.0 and we haven't even taken advantage of 2.0 yet. And I don't know if we ever fully will. When people talk about Web 2.0 features, I see them fall into two broad categories: technology and social design.

On the technology side of Web 2.0 we have AJAX and RIA frameworks. These new tools have allowed designers and developers to create much more inviting, intuitive and responsive web applications. While they may have been around in some form or another before Web 2.0 they first started catching on at the same time as the social features of Web 2.0. That's mostly just a timing issue. The new technologies would have caught on eventually, but the idea of social networking just happened to come about at the same time. The technologies are spreading through enterprise applications now and have been for a few years, but what about the social aspects?

When I first start thinking about the social aspects of public web sites, I think about the specific features. There is tagging to organize data. There are the communication mediums of blogs and comments on everyone and everything. You can rate the content to let others know if you like it. Most important is that by using those features together, the group benefits as a whole by identifying the best content available and making it better. Tagging allows other people to find content faster. Comments, blogs and other communication convey new ideas to others on how to improve existing and new content and allow anyone in the community to create content. Ratings tell other people which content is worthwhile and which can be ignored. We use the wisdom of other people to create better content for our communities, which attracts more people to those communities. Wash, rinse, repeat.

Most of us knew that already and, yes, some of the ideas of social networking have made it into enterprise applications. Tagging is a better way of organizing information. Blogging has helped many companies become more transparent. Even ratings have been introduced to try to help organize business information. And while the systems have been implemented, I have found more often than not, they are nowhere near as effective as the same features in public facing web sites. The truth is that those systems in the enterprise are not benefiting from the wisdom of the crowds anywhere near as much as the same features in the wild. What I have seen is that policies or habits of the work place prevent workers from taking advantage of 2.0 features.

The reasons are many. One is that only authorized people should be allowed to make changes because that is their job. Another is that employees don't feel that their work should be rated and visible to all. Some companies have policies geared to keep clients ignorant of the deals they each get so as to maximize sales. They go on and on but the commonality is that barriers are built up that keep the crowds from convening.

Maybe the problem is that businesses just aren't ready to be transparent. Companies try to maximize their sales by not publishing information so they can segment their clients. But the crowds will only gather when there is enough information available for clients to talk about. I find it naive to think that a company can keep people that ignorant in this day and age. The worst case will be that the crowd forms anyway, outside of the watchful eye of the company that is the focus of the group.

Individuals and environments inside the companies also need to change. When employees are more worried about how their poor performance will be judged instead of jumping at the chance to receive feedback, improve their work, and then shine; I can only think that either people are lazy or that the corporate atmosphere is just all wrong. Maybe they are one and the same. If you are only hiring lazy people, it may be time to raise the bar just a little. Lower performing people are not necessarily bad to have, people that do not improve are.

And that is the real problem. Companies need to change to take advantage of these technologies. Where the bottom line is concerned, I don't know that a change will necessarily be better than the old ways of running companies. But I suspect it will be; look at where the old way has gotten us. I also have a strong suspicion that a half and half approach will yield the worst results. Community tools without communities seem to have the same problem as healthy cupcakes to me.

Friday, October 3, 2008

Reading ClientConfig values in Silverlight

This turns out to not be too difficult.  It's just a bit nonobvious since there are no configuration reading classes included in the Silverlight runtime as there are in the full CLR.  Since I have seen the question posted from time to time, I figured I would explain why you might need to do this and how it is done.

While writing our new Silverlight application, we ran into a problem with authentication.  I have a whole other blog post about that, but one of the issues that needed to be resolved was how to read the WCF service addresses from the client configuration file.  The solution to our authentication problem required that we instantiate our service clients with the constructor overload that accepts a ChannelBinding and an EndpointAddress.  Since we needed to provide the address as a string to the EndpointAddress constructor, we needed a way to read the values from the configuration file.

The approach requires just a few short lines of code.

// Requires using directives for System, System.Linq, System.ServiceModel,
// System.Windows and System.Xml.Linq.
private static EndpointAddress ReadAddress(string contract)
{
    // Load the ServiceReferences.ClientConfig file packaged with the application.
    var streamInfo = Application.GetResourceStream(new Uri("ServiceReferences.ClientConfig", UriKind.Relative));
    var config = XDocument.Load(streamInfo.Stream);

    // Find the endpoint whose contract attribute matches the one requested.
    var endpoints = from endpoint in config.Descendants("endpoint")
                    where endpoint.Attribute("contract").Value == contract
                    select new { Address = endpoint.Attribute("address").Value };

    EndpointAddress result = null;
    foreach (var endpoint in endpoints)
        result = new EndpointAddress(endpoint.Address);

    if (null == result)
        throw new InvalidOperationException(string.Format("Cannot create endpoint for contract {0}.", contract));

    return result;
}

Obviously, you can change the file name and create your own settings files, and the LINQ query can be tailored to any XML schema you come up with.  This piece of code specifically extracts an endpoint address from the ServiceReferences.ClientConfig file.  We wrapped this method in a factory that is used to create all of our message level authenticated service clients.  We also cache the values so that we only read them from the file the first time.
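For illustration, such a caching factory might look something like this minimal sketch. The class and member names here are hypothetical, not from our actual code, and it assumes a ReadAddress method like the one shown above:

```csharp
using System;
using System.Collections.Generic;
using System.ServiceModel;

// Hypothetical sketch of a caching factory; all names are illustrative.
public static class ServiceAddressFactory
{
    private static readonly Dictionary<string, EndpointAddress> cache =
        new Dictionary<string, EndpointAddress>();
    private static readonly object sync = new object();

    public static EndpointAddress GetAddress(string contract)
    {
        lock (sync) // guard the cache against concurrent callers
        {
            EndpointAddress address;
            if (!cache.TryGetValue(contract, out address))
            {
                // First request for this contract: parse the config file once.
                address = ReadAddress(contract);
                cache[contract] = address;
            }
            return address;
        }
    }

    private static EndpointAddress ReadAddress(string contract)
    {
        // Parses ServiceReferences.ClientConfig as shown earlier in this post.
        throw new NotImplementedException();
    }
}
```

After the first call for a given contract, every subsequent request is served straight from the dictionary instead of re-reading and re-parsing the XML.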

Wednesday, September 17, 2008

Hiring a Developer for Non-Developers

OK, this won't be a definitive guide on the subject for several reasons. First and foremost, it's just going to be a relatively short blog post. Second, I have no experience hiring developers as a non-developer because I am a developer and I was not hiring developers when I wasn't one. Um... yeah, that sentence works. Third, I am not now nor have I ever been a recruiter or HR professional, and so I don't know all the other myriad tips and tricks that could be gained having spent a career as one. What I do have is experience as a developer working with recruiters, and I might be able to give a bit of insight from the development team side of the relationship.

When hiring for a development team, while it is important to know what technologies and methodologies a candidate is familiar with, there is only a little more that can be done in that regard than looking at the alphabet soup on their resume. Yes, you must do what you can to make sure the candidate isn't lying. Any help you can give on determining whether or not a candidate really worked with and knows the MAYDEOP framework will help eliminate the individual lying about it before they or their resume gets to the development team. And it will be appreciated. Well, maybe not, but at least the developers won't get frustrated when a dud does get through. Regardless, the development team will still need to drill candidates on all the details that only years of a successful programming career can really teach.

What is important to remember is that hiring a developer is all about reducing risk. To that end, what I have found more helpful when working with HR or recruiters is to try and explain the indicators that make a good developer, the indicators that signify increased or reduced risk. Do they come from a good school? Do they take an interest in technology outside of work? Do they have interests other than technology (are they well rounded)? What technology publications do they read? What social / technology groups do they participate in? Do they take an active interest in advancing their career? The list can go on for a while and varies from department to department. The idea is that each question has answers that indicate the probability of whether or not a recruit will be more or less successful in a position at a given company.

It is also important to remember that no one answer is necessarily an automatic ding. They may only have gone through "Bob's How to Talk to Computers" training course as far as formal education goes. But if they have five years of experience, have a successful record of completing projects and have every technical certification in the book, it's probably an individual the team should speak to. I'm not saying that an individual that has singlehandedly caused five lawsuits to be leveled against previous employers should be considered regardless of any positive indicators. What I am saying is that in most cases it is the combination of all the answers, the overall package if you will, that needs to be evaluated.

This definitely is not easy to do. It would be nice to think that you could simply scan a resume and match the acronyms up against the job description. Depending on the position though, this is not even a good first step. Some of the best junior developers I have ever been involved with hiring had no experience in the primary technologies used by the teams I was on. Filtering out candidates based on the acronyms in their experience would have eliminated these individuals before the hiring developers ever saw their resumes. True, we wouldn't know we had missed them and thus probably wouldn't be angry about it. But without those successful hires, it would be even more difficult not to get frustrated interviewing dud after dud just because they have "client side programming" listed on their resume.

Monday, September 8, 2008

IT Consultants, Pro Bono?

I found another interesting question on the internet today, "Isn't it about time that IT consultants start doing pro bono work along the lines of lawyers?" I am inferring from the question that the asker doesn't believe that IT professionals perform that much pro bono work, which I think is a big misconception that should be fixed. But there are reasons why that misconception exists, and I figure I'll take a stab at explaining that as well.

The definition I found for pro bono is this:
done or donated without charge
By that definition I know of several ways in which many developers contribute work pro bono. Open source is a great example. All types of software from operating systems, to office productivity to educational games as well as countless other types and many versions of each have been created by the software writing public with no expectation of payment. Another excellent example is after school programs where professionals donate their time to educate children in the use of computers and the basics of programming. I myself spend time watching various forums for people that need help and answer questions when I can. Yes, that's where I find these questions. And let's not forget about the friends and family program that every IT professional I know donates a hefty amount of time to every year cleaning viruses and setting up printers and software. Yes, it is a huge misconception to say that IT professionals don't do pro bono work.

Besides, not everyone feels the need to give back to others in the same way. The best software developers I know are intelligent people with diverse interests. The ones that I would accept work free of charge from simply don't want to spend their entire lives, day-in day-out coding. The best developers I know are musicians, writers and athletes in their spare time. They give back to the community in song, fund raising races, informational blogs, writing stories and sometimes just helping out at the local shelter. They may not be contributing by working in their daily trade, but I would never dare say that they lack the drive to help others.

But this misconception must be coming from somewhere. I see this question in various guises from time to time and I think I know what part of the problem is. When I run across this sentiment it is usually coming from the vicinity of an individual that cannot find an IT professional to set up an office network or write a web site or other application for free even if it would be contributing to a worthy cause. The simple reason for this is lack of time. You may be able to find a lawyer to give legal advice and even argue a case or two pro bono. But in general, I know of no profession where a worker can dedicate themselves without pay to a single project, case or individual for the amount of continuous time usually required by such large IT tasks.

Let's take a look at some guidelines. The Washington State Bar Association recommends that lawyers spend 30 hours per year doing pro bono work. Let's just call that a week. There is no software of significant use that can be written in one man week. Software projects and IT tasks of any consequence just take longer to accomplish. Even a simple website takes more time if you want it to be effective. I have seen small, five-page web sites created in one week, and the results are always less than spectacular. Most results are less than mildly pleasing, for that matter. In order to build a web site, the message has to be created and different ways of breaking down and conveying that message have to be decided on, a user experience that facilitates the message must be generated, colors need to be chosen, images created, content written, domain names registered, and hosting environments found or built and set up, just to name some of the tasks. If this type of job is done in a single week, at best you have a web presence. What you never get is an attractive and effective web site.

Even if you can find an individual that wants to spend 160 hours to write your custom web site or rewire your office network, they probably can not take that time off from their day jobs and dedicate themselves to that project. If your company or organization can afford to be without their web site for six months or their office network for one month while an IT professional does the job in their spare time, you might be able to find someone. But most organizations I know would lose more money without those resources in place than the cost of paying someone to get the job done. One of the reasons why large projects can work as open source is that there are very few if any time constraints.

Beyond the sheer length of a significant IT project, IT professionals tend to have less time to work on charitable causes because they usually make less money than lawyers. According to some very rough numbers, attorneys make in the range of 30% to 50% more than programmers. In the real world the difference can be even bigger, and it probably isn't any smaller. A simple fact is that people who make more money can afford to spend more time pursuing interests other than making money. Whether or not they do is a question for someone else to answer on some other day.

All of these factors line up in such a way as to create very few possibilities for a real world IT professional to spend time on large projects free of charge. I would hazard a guess that organizations and web sites geared towards matching available professionals with projects in need of free services don't exist because the match success rate would not be that high. And that just makes it even harder to get people and projects together. By contrast, it is much easier for lawyers to find opportunities to do pro bono work as the framework is in place. At the least, most state bar associations have a group or committee to help with just that task.

There are probably many more facets to the issue that I am missing, but the short of it is that IT professionals do give back. Most just can't do it in large enough lump sums to accomplish the same tasks they generally get paid for. Time constraints, money constraints, life and sanity just prevent IT workers from being able to give large projects away for free on someone else's schedule.

Sunday, September 7, 2008

Google Growing Pains

I ran into something I got a bit of a chuckle out of this morning.  Not a malicious chuckle mind you.  Just an "oops, that's kind of ironic", lighten-up-the-day, smile-to-my-lips sort of thing.
I have an odd sense of humor at the best of times, so just in case you don't get it here is why I found this mildly amusing.  This is a screen shot of Google's virtual reality program directing users to browsers other than Google's own browser, Chrome.

I realize there's a whole bunch of reasons for this that I could only wish were the case for my little company. Google has many successful as well as yet-to-be-proven products. Many people in many countries use those products, driving Google's remarkable profits. Despite those facts, this is a sign that there are some far-flung departments at Google that just aren't keeping in touch. This isn't a problem unique to Google by any means. It's just a reminder that any time a company gets to be a certain size, roll-out timings, third-party acquisitions, communication difficulties and any of a plethora of other complications can cause chinks to show in what seems to be an otherwise unified corporate strategy.

On an almost completely random topic, some movie quotes (from Three Amigos!):
El Guapo: Jefe, what is a plethora?
Jefe: Why, El Guapo?
El Guapo: Well, you told me I have a plethora. And I just would like to know if you know what a plethora is. I would not like to think that a person would tell someone he has a plethora, and then find out that that person has *no idea* what it means to have a plethora. 

Tuesday, September 2, 2008

Silverlight, WCF and Authentication

Here is what I wanted to do. I wanted my WCF services to be completely disconnected from my Silverlight web application. I wanted third parties to be able to generate proxies without authenticating and use the web services without authenticating against a web site first. I didn't want to rely on impersonation because the user accounts will be created from within the services and new accounts would then need to be registered with SQL Server security. Ick. I just wanted basic WCF services with message level authentication and a web site hosting a Silverlight application built on top of the services. Should be easy, correct?

Well, as anyone who has tried knows, easy isn't the case. However, I was able to get all this working with the help of quite a few articles. And you can bet accomplishing my goal required hacks, custom code and overcoming several mind bending what-the-f***s along the way. Here I document the journey at a high level with links to details if you're interested. Forgive me if the steps aren't in quite the right order or some details are missing; it's taken me a few weeks to tack together all the bits and pieces.

Membership Authentication
I'm not talking about hiding your services behind a web site and piggy-backing authentication on top of the WCF - ASP.NET compatibility features. I'm talking about true, per-operation message level authentication using a membership provider. This is particularly useful because we are able to use the same provider for the services and web site and yet keep them completely separate. Here's the why-hows. What this ends up forcing you to do is set up a BasicHttpBinding with TransportWithMessageCredential security. That will require you to run all of your services over HTTPS. I use IIS 7 which makes it very easy to create your own SSL certificates for development. There are problems though. Problem One - even though we are using TransportWithMessageCredential security on a basicHttpBinding, Silverlight does not yet support it. Problem Two - cross domain policy files do not yet support the HTTPS protocol. Problem Three - because the services are not relying on impersonation, it is a bit difficult to get the identity of the authenticated user. Problem two is easy to solve so I am going to talk about it first.
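For reference, the service-side setup described above ends up looking something like this in web.config (a sketch, not my actual file; the binding, behavior and provider names are placeholders):

```xml
<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <!-- TransportWithMessageCredential: SSL on the wire,
           username/password credentials in the message. -->
      <binding name="SecureBinding">
        <security mode="TransportWithMessageCredential">
          <message clientCredentialType="UserName" />
        </security>
      </binding>
    </basicHttpBinding>
  </bindings>
  <behaviors>
    <serviceBehaviors>
      <behavior name="MembershipAuthBehavior">
        <serviceCredentials>
          <!-- Validate message credentials against the same
               membership provider the web site uses. -->
          <userNameAuthentication
              userNamePasswordValidationMode="MembershipProvider"
              membershipProviderName="MyMembershipProvider" />
        </serviceCredentials>
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```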

HTTPS and Cross Domain Policies.
[EDIT: This is no longer an issue in the release of Silverlight 2. Supposedly, clientaccesspolicy.xml now allows cross domain calls between HTTP and HTTPS. Regardless, I still set up my development environment this way for the small debugging advantage.]
Cross domain policy files are what let you call WCF services from Silverlight applications that are downloaded from different domains. The short version of how to deal with this is that you don't: cross domain policy files simply don't support HTTPS right now. That's OK though. What I have done is to create a virtual directory in IIS, under the web site that hosts my Silverlight application, that maps to my services. I also made sure that the pages hosting my Silverlight application are protected by SSL. That way, my Silverlight application is downloaded from the same HTTPS domain that serves my services, which satisfies the same domain requirement.

At first, this might seem like a performance hit, but since the web pages hosting my Silverlight application are infrequently used (once per login in our case), it's not really that bad at all. While this essentially means that the web site and services do need to exist together for now, once support for HTTPS is introduced in the cross domain policy files, all you need to do is create the file, drop it into the services directory, and move the virtual directory wherever you want.

There are a few other details, like making sure you allow anonymous access to the services directory. Authentication will be taken care of by the membership provider when the services are run, but you want people to be able to hit your MEX and WSDL files. You also need to turn off forms authentication for the services directory, or your requests will be run through the ASP.NET authentication process. Since no cookies are sent with your service calls, requests will fail if you don't disable forms authentication for your services.

One upshot of hosting your services and web application in IIS is that you can associate both with the same application pool. If you do so, they will use the same worker process. Since the debugger automatically attaches to the web site worker process, you can debug your services without attaching to the services host by hand.
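The services directory's own web.config then just needs to opt out of forms authentication and allow anonymous access (a sketch, assuming the virtual directory is configured as its own IIS application):

```xml
<configuration>
  <system.web>
    <!-- No forms authentication here; the membership provider
         authenticates the message credentials when services run. -->
    <authentication mode="None" />
    <authorization>
      <!-- Anonymous access so clients can fetch MEX and WSDL. -->
      <allow users="?" />
    </authorization>
  </system.web>
</configuration>
```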
Now on to the harder problem, Problem One.

TransportWithMessageCredential Silverlight Compatibility
It doesn't exist in Silverlight 2 Beta 2, but you can make your own. This post describes how to add your own headers to a WCF call in a Silverlight application. There are a few changes that I needed to make. Most code that uses HTTP needs to be changed to HTTPS. For instance, when adding binding elements to the custom binding, you need to change the HttpTransportBindingElement to an HttpsTransportBindingElement. You also need to make sure that you set the security mode for the binding to Transport. The proxy generator will dutifully create your client configuration with TransportWithMessageCredential. It doesn't hurt to leave that in the configuration; just make sure the default configuration is never used, because the runtime will throw an error when it tries to parse the security mode (TransportWithMessageCredential does not exist in the security enumeration in the Silverlight runtime). You get around this by providing your own binding and endpoint address, as per the example in the referenced post. The hardest part was writing the security header so that it was compatible with the WCF security headers. To figure this out, I created a small Windows application, hooked it up to the services and inspected the messages server side. I'll paste my security header class below, since that code is not included in any of the posts I saw. The frustrating thing about all of this work is that it will all be thrown out once security credentials and TransportWithMessageCredential security are supported in Silverlight WCF proxies. Ah well.
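Putting the pieces above together, the client-side binding setup looks roughly like this (a sketch rather than code from the referenced post; the service URL and `MyServiceClient` proxy name are placeholders):

```csharp
// Build the binding in code, since the Silverlight runtime can't parse the
// generated configuration's TransportWithMessageCredential security mode.
var binding = new CustomBinding(
    new TextMessageEncodingBindingElement { MessageVersion = MessageVersion.Soap11 },
    new HttpsTransportBindingElement()); // HTTPS, not HttpTransportBindingElement

// Placeholder address; must be HTTPS and same-domain per the section above.
var address = new EndpointAddress("https://myserver/Services/MyService.svc");

// Hand the generated proxy our binding and address so its default
// configuration is never used.
var proxy = new MyServiceClient(binding, address);
```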

Who are you?
Problem number three is getting the identity of the user. I could not find the identity in any of the normal places: OperationContext, HttpContext, Thread.CurrentPrincipal. Nothing. My understanding is that this is because I am not using impersonation, nor do I want to, but if someone knows how to configure things a different way, I'm all ears. My solution was to add a message inspector. Implementing IDispatchMessageInspector allows you to inspect the contents of a message after the credentials in the message have been authenticated. With a little XPath, the authenticated username falls right into your hands, er, code... whatever. The point is that you now know the user's identity and can provide it to your routines for authorization purposes. One little gotcha about writing message inspectors is that a message can only be read or copied once. Here's an article that explains how to get around it and a conversation on why it works the way it does. Yes, it's an old 'Indigo' article, but it is still relevant.
[EDIT] So I was rereading this article recently because apparently a bunch of people have found it. One change that I made sometime after Silverlight 2 was released is that I found out where WCF stuffs identity information: OperationContext.Current.ServiceSecurityContext.PrimaryIdentity. That's it. The message inspectors are no longer necessary, if they ever were. I don't know if that's a Silverlight 2 thing or not, but you should all be using the final release now anyway.[/EDIT]
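In code, that looks something like this inside a service operation (a minimal sketch; `WhoAmI` is a made-up operation):

```csharp
public string WhoAmI()
{
    // The identity authenticated by the membership provider,
    // available without impersonation.
    var identity = OperationContext.Current.ServiceSecurityContext.PrimaryIdentity;
    return identity.Name;
}
```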

That's more or less it. Again, I'm sorry for the lack of detail, but others have already done most of the documenting; I just had to put it all together. If you have questions, you can email me. My contact information should be on the side of the blog there somewhere. Silverlight RTM or RTW or CTP or whatever comes next is due out fairly soon from my understanding. At that time, I don't think all the code required to solve problem one will be needed any more. But if you need to get this working now like we did, it is possible. And it's not that much work once you understand how all the pieces fit together.

The code I promised
Somewhere up above I said that I would provide a little bit of code to demonstrate how to write WCF compatible security headers from a class inheriting from MessageHeader. The credentials object is one of my own devising that keeps the password encrypted in memory, and I store it at the top level in my Silverlight Application object, App.xaml. But it should be obvious what information goes where. Oh, except for the username token. In other WCF applications, this is the string "uuid-" followed by a random Guid selected for that instance of the application, followed by "-1", e.g. "uuid-12345678-1234-1234-1234-123456789012-1". Here's the code:

public class SecurityHeader : MessageHeader
{
    protected override void OnWriteHeaderContents(System.Xml.XmlDictionaryWriter writer, MessageVersion messageVersion)
    {
        var timestamp = DateTime.Now;
        var application = Application.Current as App;
        var credentials = application.Credentials;

        // The standard OASIS WS-Security 1.0 namespaces WCF expects.
        var secUtilNamespace = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd";
        var secExtNamespace = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";

        // <u:Timestamp u:Id="_0"> with Created/Expires children.
        writer.WriteStartElement("u", "Timestamp", secUtilNamespace);
        writer.WriteAttributeString("u", "Id", secUtilNamespace, "_0");
        writer.WriteElementString("Created", secUtilNamespace, timestamp.ToString("O"));
        writer.WriteElementString("Expires", secUtilNamespace, timestamp.AddMinutes(5).ToString("O"));
        writer.WriteEndElement(); // Timestamp

        // <o:UsernameToken u:Id="uuid-...-1"> with Username and Password children.
        writer.WriteStartElement("o", "UsernameToken", secExtNamespace);
        writer.WriteAttributeString("u", "Id", secUtilNamespace, credentials.UsernameToken);
        writer.WriteElementString("Username", secExtNamespace, credentials.Username);
        writer.WriteStartElement("Password", secExtNamespace);
        writer.WriteAttributeString("Type", null,
            "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText");
        writer.WriteString(credentials.Password); // decrypted just long enough to write
        writer.WriteEndElement(); // Password
        writer.WriteEndElement(); // UsernameToken
    }

    public override string Name
    {
        get { return "Security"; }
    }

    public override string Namespace
    {
        // The WS-Security extension namespace, so WCF recognizes the header.
        get { return "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"; }
    }

    public override bool MustUnderstand
    {
        get { return true; }
    }
}
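For completeness, here's roughly how the header gets attached to each call (a sketch; `proxy` stands for whatever client you constructed against the custom binding, and `DoWorkAsync` is a made-up operation):

```csharp
// Scope the outgoing message headers to this proxy's channel,
// then add the WS-Security header before making the call.
using (new OperationContextScope(proxy.InnerChannel))
{
    OperationContext.Current.OutgoingMessageHeaders.Add(new SecurityHeader());
    proxy.DoWorkAsync(); // async, as all Silverlight service calls are
}
```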