Friday, November 6, 2009

ASP.NET PasswordRecovery Control and GMail

I recently needed to whip up a quick ASP.NET web site with authentication. Nothing fancy, so the default ASP.NET Membership provider tables and account controls worked fine. The one problem I ran into was that I needed to route the email that the MembershipProvider and PasswordRecovery control create through GMail. You see, GMail requires secure connections and the control doesn't seem to offer built-in support for them. But it isn't too hard to get it working.

Here's the approach I ended up with.

protected override void OnInit(EventArgs e)
{
    base.OnInit(e);
    PasswordRecoveryControl.SendingMail += PasswordRecoveryControl_SendingMail;
}

void PasswordRecoveryControl_SendingMail(object sender, MailMessageEventArgs e)
{
    // Create network credentials from the SMTP configuration settings.
    var config = WebConfigurationManager.OpenWebConfiguration(HttpContext.Current.Request.ApplicationPath);
    var settings = (MailSettingsSectionGroup)config.GetSectionGroup("system.net/mailSettings");
    var credentials = new NetworkCredential(settings.Smtp.Network.UserName, settings.Smtp.Network.Password);

    // Send the message over a secure connection.
    var mailClient = new SmtpClient(settings.Smtp.Network.Host, settings.Smtp.Network.Port);
    mailClient.EnableSsl = true;
    mailClient.UseDefaultCredentials = false;
    mailClient.Credentials = credentials;
    mailClient.Send(e.Message);

    // Prevent the control from sending the message via the default implementation.
    e.Cancel = true;
}

Canceling the event still counts as success as far as the control is concerned. This means the SuccessText and SuccessUrl properties on the control behave the way you want them to, if not quite the way you might expect them to.
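For reference, the handler above reads its values from the standard mailSettings section of web.config. A minimal sketch (the account values are placeholders you would replace with your own) looks like this:

```xml
<system.net>
  <mailSettings>
    <smtp from="you@gmail.com">
      <network host="smtp.gmail.com" port="587"
               userName="you@gmail.com" password="your-password" />
    </smtp>
  </mailSettings>
</system.net>
```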

Tuesday, September 22, 2009

Inversion of Control

I've been a fan of IoC frameworks for a little while now. In particular, I have been using StructureMap. I knew that these frameworks were useful, but I didn't realize what I was missing until recently. I would look at some of the proponents of the frameworks and their example code. That code would have formal interfaces defined for almost every object and interaction in the system. That always seemed like overkill to me. I would think, "You are only ever going to have one class that fulfills each of those sets of functionality. Why bother with all the extra coding for the interfaces?" It didn't make sense... until now.

I started working on a new pet project in my spare time. It's a program with a significant number of moving parts that all have different sets of functionality. Following the single responsibility principle, I've broken down each of those sets of functionality into objects that each have their own domain of control. Then I started writing unit tests. And that's when the realization hit me. There may only be one production implementation of most of the classes. But combine those interfaces that define concise functionality with a good mock library and writing unit tests with few lines of code becomes a piece of cake. I'm talking well tested code with tests that are around twenty lines of code each; sometimes more, but most often less.
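As an illustration of the combination (the interface, the OrderProcessor class and the Rhino Mocks-based test below are hypothetical stand-ins, not code from the project itself):

```csharp
public interface INotifier
{
    void Notify(string userName, string message);
}

public class OrderProcessor
{
    private readonly INotifier _notifier;

    public OrderProcessor(INotifier notifier)
    {
        _notifier = notifier;
    }

    public void Ship(string userName)
    {
        // ...shipping logic would live here...
        _notifier.Notify(userName, "Your order has shipped.");
    }
}

[Test]
public void Processor_notifies_user_when_order_ships()
{
    // Mock the collaborator instead of standing up a real implementation.
    var notifier = MockRepository.GenerateMock<INotifier>();
    var processor = new OrderProcessor(notifier);

    processor.Ship("jdoe");

    // The whole test stays a handful of lines.
    notifier.AssertWasCalled(n => n.Notify(
        Arg<string>.Is.Equal("jdoe"), Arg<string>.Is.Anything));
}
```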

"Yes, IoC frameworks are cool for plugins and they make unit testing easier," I would say to myself. But it turns out that I hadn't grasped the full extent and usefulness of this paradigm. That doesn't necessarily mean I'm all the way there yet either. But I wanted to write something about this because it really was an eye-opening experience; the difference between knowing and grokking. Also, I am not all that unique a person. And since I wasn't putting all this together from reading documentation and blogs, I felt that perhaps there are others out there that might benefit from a different point of view on the subject as well.

Thursday, August 27, 2009

ASP.NET MVC and ReverseDOS

I've blogged a bit about ReverseDOS before. It's a neat utility that allows you to lock up and/or deny HTTP requests from content spammers. You know, you have a web site users are allowed to create content on, comments in particular, and the Online Consortium of Gambling Websites has targeted your site as a nice place to do some free advertising. ReverseDOS very easily allows you to tie up their resources and ignore their offerings.

The point of this post is that there is a trick to get ReverseDOS to show custom errors when using ASP.NET MVC. I forget where the original information came from to get this done. It's already out on the web somewhere, I just don't remember where I pulled the pieces from. Just in case it isn't easy to find, I'll show you how I got it done.

First, configure ReverseDOS just like the instructions tell you to. The one ReverseDOS configuration setting that you need to verify is that endRequest = false.

Next, in your Global.asax file, you need to add a new event handler.

protected void Application_PreRequestHandlerExecute()
{
    var error = Context.Items["ReverseDOS_Exception"] as HttpException;
    if (null != error)
    {
        ServeError("/Error/AccessDenied", 403);
    }
}

Obviously, change the path to your own error page. The ServeError() method is an example of how to render a different ASP.NET MVC action to the response stream from outside of the MVC framework. This is basically the same as supplying a different view to render when returning from an action. The method looks something like this:

private void ServeError(string path, int statusCode)
{
    var url = new StringBuilder();
    url.Append(path);

    // Temporarily rewrite the request to the error action, let the
    // MVC handler render it, then restore the original path.
    string originalPath = Request.Path;
    Context.RewritePath(url.ToString(), false);
    IHttpHandler httpHandler = new MvcHttpHandler();
    httpHandler.ProcessRequest(Context);
    Context.RewritePath(originalPath, false);

    Response.StatusCode = statusCode;
}

Granted, there are probably other ways of handling this. In my admittedly limited experience however, this gives me the most control when setting up ReverseDOS alongside custom error handling in ASP.NET MVC. It allows me to serve my own error pages with proper HTTP response codes without redirects and without relying on IIS features.

Happy coding to you.

A Solution To My Repository Dilemma

I woke up this morning and realized I had a better solution to my repository problem. Create instances of your business logic that accept repository instances in their constructors. Pass the models through the business logic to the repositories for simple data operations. Perform business logic and relationship management on the models when needed before the data is stored.
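In code, the idea looks something like this (Order, LineItem and the repository interface are hypothetical stand-ins):

```csharp
public class OrderService
{
    private readonly IOrderRepository _repository;

    // The repository instance arrives through the constructor.
    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public void Save(Order order)
    {
        // Business logic and relationship management happen here...
        foreach (var item in order.LineItems)
            item.OrderId = order.Id;

        // ...then the model passes through to the repository
        // for the simple data operation.
        _repository.Save(order);
    }
}
```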

It's still not ideal though. First, there's the potential for a lot of pass-throughs. I can live with that I suppose. Second, you end up losing a lot in the way of encapsulation, in that the business methods that operate on the data are not contained in the same objects as the data they operate on. But it's not like there isn't precedent for effectively relegating your business models to what amount to DTOs or data contracts. Think web services. You're just not passing those contracts over the wire. Plus, mapping between models tailored to the application layer they are used in is becoming an accepted practice. At least it is in the ASP.NET MVC world, which is what prompted me to write about this anyway.

As I said, it may not be ideal, but I think I like it better than where I ended up in my last post. The code becomes more testable and you don't need to re-implement relationship management in every version of the repositories that you create.

Sunday, August 2, 2009

One Small Problem With Repositories

It used to be that I wrote applications with a hard separation between layers; presentation calls business, business calls data, and other systems were wrapped in abstraction as needed. This worked well enough and many large enterprise applications have been built this way.

One technique frequently implemented in this paradigm is that the entry points for the persistence routines of domain objects are encoded in the objects themselves. While not perfect, one great upshot of this approach was that there was only one place where the relationships between business objects were defined. E.g., if an Order is saved, the Order enforces that all of its LineItems are also saved. The code is written once and tested once. If new repositories are needed for a new database, write the code, swap them out, and as long as your business layer tests still pass, you know your relationship persistence is intact. The biggest problem with this paradigm, though, is that it makes tests difficult to write for oh so many different reasons.
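Sketched out (with hypothetical types), that old style puts the persistence entry point on the domain object itself:

```csharp
public class Order
{
    public int Id { get; set; }
    public List<LineItem> LineItems { get; private set; }

    // The relationship rule lives in exactly one place: if an Order
    // is saved, its LineItems are saved too. Written once, tested once.
    public void Save(IOrderRepository repository)
    {
        repository.Save(this);
        foreach (var item in LineItems)
            repository.Save(item);
    }
}
```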

As test driven development is the way to go, I haven't written applications like that for a couple of years. I have been sharpening my saw and forcing myself to build more testable code. In ASP.NET MVC, the prevailing winds seem to push in the direction that your controllers should be injected with repository instances and those instances should be used for data management. This works exceptionally well from the standpoint that I can pass repository mocks and stubs to my controller when testing actions. This is just a better way to write code in general.

But one part of all this newfangled code crafting still nags at me. If the repository contains the relationship logic, that logic and all of the tests that enforce it travel with that implementation. If I need to write a new data layer for a new database, I need to rebuild that logic and the tests for it and maintain multiple pieces of code that ultimately enforce the same rules. In almost all cases, this is not a real world problem. Most people never port their applications to multiple databases and so the interfaces are relegated to testing purposes only. Most of us only ever have one set of repository code to test. I'll live with it by ignoring it.

The perfectionist in me still wants to do something about it. I've thought about crazy schemes like dividing the repository functionality into multiple components, CUD/R if you will, with methods targeted to the domain model and UI respectively. I've thought about abstractions and composites and delegates, oh my! But in the end, every solution solves some problems and creates others. In the end, I suppose that knowing when good enough is good enough and being able to identify when it isn't is the way to handle this situation.

Tuesday, July 28, 2009

WCF and IIS 7 Error

I ran across this little nugget of frustration today when trying to hit my WCF services hosted by IIS 7 on a computer with a new build:

Error 403.3
The page you are requesting cannot be served because of the extension configuration...

Turns out that the .svc extension was never registered with IIS.

Simple problem, simple solution. With administrative privileges run:

C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\servicemodelreg -i

or for 64-bit OSes

C:\Windows\Microsoft.NET\Framework64\v3.0\Windows Communication Foundation\servicemodelreg -i

Just another post to bump the solution up Google a bit.

Saturday, July 25, 2009

Rhino Mocks Out Ref Parameters

I started playing with Rhino Mocks recently. Yes, I'm late to the game. And yes, that game is great. If you've made it to this post I'm obviously preaching to the choir, so let's get to it.

I ran across a need to mock up an out parameter today. For the context of this article that would be a ByRef parameter to all you VBers. What to do? Well, it turns out to be a very simple task.

I have several repository methods in a little project of mine that read paged data. I like to return the total number of records in the same query from the database and so I use an out parameter in my repository methods. The methods look something like this:

public IList<RecordModel> ReadPaged(int pageNumber, int pageSize, out int recordCount);

This project of mine is written for ASP.NET MVC which, like every other ASP.NET MVC project, contains controllers. My controllers make calls on the repositories and push the models returned into the ViewDataDictionary. The goal then is to create a stub for the IRecordRepository.ReadPaged() method that passes a record count, among other values, back to the controller so that the value may be pushed into the ViewDataDictionary and then validated in a unit test. That's a lot of talk for one small method call:

repository
    .Stub(m => m.ReadPaged(pageNumber, pageSize, out recordCount))
    .OutRef(250);

That call to OutRef(250) is what takes care of the out parameter. The method accepts a parameter array of objects. When the stub method is called by the controller, it populates its arguments into the out parameters in the order in which they are declared in the method signature.

Goal accomplished.

Monday, July 6, 2009

IoC and Structure Map

I started playing with StructureMap a couple of days ago. It is a pretty decent framework for IoC and dependency injection. It works well with ASP.NET MVC and makes it possible to write some good, testable code. That's not to say that there isn't anything better out there, just that this gets the job done well and so I didn't need to look for anything better. The features that I like most are the registry configuration system, and the dependency injection approach.

One way to configure StructureMap is to subclass the Registry class. Methods in the class allow you to create mappings between interfaces or abstract classes and the concrete types that will be used during run time. The simplest approach is to create one to one mappings for each interface / concrete type pair. Nothing you couldn't do with a simple text or configuration file of some sort. However, there are more powerful methods available that will automatically map all types against the interfaces they implement within a given assembly or namespace. You can even map multiple types against the same interface and use a key to get the correct implementation for a given situation at run time. Score one for the IoC framework.
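A sketch of such a registry (the type names are mine, and the exact method names vary between StructureMap versions):

```csharp
public class AppRegistry : Registry
{
    public AppRegistry()
    {
        // Simple one-to-one mapping between an interface
        // and the concrete type to use at run time.
        ForRequestedType<ISiteRepository>()
            .TheDefaultIsConcreteType<SqlSiteRepository>();

        // Or scan an assembly and automatically map types
        // against the interfaces they implement.
        Scan(x =>
        {
            x.TheCallingAssembly();
            x.WithDefaultConventions();
        });
    }
}
```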

As for creating the mapped instances, there are a couple different ways to get that done. The simpler approach is just a factory method that accepts the interface name and optionally a key and returns the configured implementation. Again, this is nothing that you can't do very quickly and easily with the Activator object in .NET. Once again, a more powerful technique exists and that is to request an instance of an object that accepts mapped types in its constructor. E.g. Suppose I have a class SiteController with a constructor defined as void SiteController(ISiteMapper mapper, ISiteRepository repository) and that both the interfaces have mapped concrete classes configured. I could request an instance of SiteController from the StructureMap ObjectFactory and it would create the SiteController container with the appropriate dependencies passed to the constructor. This is another nice convenience that good IoC frameworks provide you with.
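Resolving the SiteController from the example then comes down to a couple of calls (AppRegistry here stands in for whatever Registry subclass holds your mappings):

```csharp
// Tell StructureMap about the registry once, at application startup.
ObjectFactory.Initialize(x => x.AddRegistry(new AppRegistry()));

// StructureMap builds the mapped ISiteMapper and ISiteRepository
// implementations and passes them to SiteController's constructor.
var controller = ObjectFactory.GetInstance<SiteController>();
```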

Martin Fowler said something I like about IoC:
"When these containers talk about how they are so useful because they implement 'Inversion of Control' I end up very puzzled. Inversion of control is a common characteristic of frameworks, so saying that these lightweight containers are special because they use inversion of control is like saying my car is special because it has wheels."
My take on this quote is that IoC is nothing more than good object oriented design that takes advantage of polymorphism and factory patterns. It's a technique that has been around since the first OO languages and isn't some new advance in software design. That being said, StructureMap and other frameworks like it do offer some great utilities that simplify the task of writing the code.

Thursday, June 11, 2009

State of Milyli

It's been a little while since I wrote about our little company Milyli. Since that was one of the original purposes of starting this blog, I figured I'd give a little update. Things are going well, but it's a lot of hard work.

As for myself, being mostly in charge of product development can be a little frustrating working almost alone. It's not that I'm the only person working to make the company and product a success, not by a long shot. But I am effectively the only developer working on the code. Granted, I get to develop the application whichever way I want. But for a not small piece of software, that's not as ideal as it might sound. I really miss being able to bounce ideas off of other developers that have a good sense of how the application works. The small increments of work are also not as satisfying when you realize that no work has been done on the rest of the application.

Besides the application, I did a bunch of project work early on and it looks like I'm about to get another load. I enjoy working on the consulting side. The projects are different and there's always a chance to learn a new trick of some sort. But it is frustrating to see so much of our already limited time not being spent on our product.

As for my two partners, they've been doing the bulk of the bill paying for the past number of months. They're each feeling the pain and glory in their own ways.

One of them is working with technology older than even that used at our last place of employment. The old code is plenty crufty and has more than its share of best practice failings. That and interacting with and managing the client seems to be wearing him a bit thin of late.

My other partner spends his time schizophrenically jumping between business development, market analysis, office manager, project manager and consultant. Basically filling in wherever needs filling. I am not envious at all of that role.

All I can do is thank them and keep my head down and keep making software of all sorts work better. What's important to remember is that the Dip is worth getting through. I'm pretty sure we haven't seen the bottom yet and if by some miracle we have, that will just be a pleasant little surprise.

There have been a number of high points as well. We get to make the environment the way we like it including eating lunch together everyday that we are in the office. We have started to see some preliminary design ideas for the look of our application. They look great and I can't wait to start implementing. We have signed on some work that should pay our bills for a while. We have other new, exciting projects coming in that look like they will fit great with our skills and experience. And we keep meeting new people to talk about their problems and solutions even when we aren't necessarily the best folks to tackle the resulting projects.

And that's where it all stands right now; things are going well, but it's a lot of hard work.

Wednesday, June 10, 2009

Create SharePoint Job Timer Definitions

So I realized recently that one of my MOSS applications did not have all of the timer jobs defined for it that it should have. You know, you go to Central Admin -> Operations -> Timer Job Definitions and they just aren't all there. Which ones should be there by default? Here's a good list. Just how do you go about getting some of those jobs in there? There are a bunch of tricks and supported approaches.

The publishing tasks (Scheduled Approval, Scheduled Page Review, etc.) can be created in one shot by creating a new site collection using the publishing portal template under the existing application and then deleting it. The timer definitions stay, the services start running, and everyone is happy. Yaaaay!

Some of the definitions will be created if you set the appropriate properties using stsadm.
E.g., stsadm -o setproperty -pn job-immediate-alerts -pv "Every 5 minutes between 0 and 59" -url
Once again the timer job shows up and most of the time your alerts start flowing. Hooray for our side!

But there are a bunch of definitions that there don't appear to be tools to create on their own. Sure you can back up the database, blow away the application, create a new application with the correct template and same name, restore the database, and fix the issues (there's always some). I didn't want to go through all that, though I'm not sure that what I figured out was less work. What I ended up doing was to write some crafty SQL. Well, maybe it was just plain old mundane and obvious SQL, but that's what I did. All the answers lie in the Objects table in your config database.

First find the id of your web application. This is going to be the parent id of the timer record you will insert. The following SQL worked pretty well for me.

SELECT * FROM Objects WHERE Properties LIKE '%SPWebApplication%'

Next, query the table and look for existing definitions of the jobs you are missing. If you don't already have an application that contains them, you can create one. The Name column of the records contains the stsadm property for each timer job. Here's a list of a few. And here's some SQL to help you out.

SELECT * FROM Objects WHERE Name LIKE 'job-%'

Find a record for the job you are interested in. You will need two pieces of information: the ClassId and the Properties contents. The ClassId tells the system which timer job to create and the Properties contain the frequency of the timer job and a couple of other easy-to-change settings.

Last, run a simple insert statement like the following...
INSERT Objects(Id, ClassId, ParentId, Name, Status, Properties)
VALUES (NEWID(), '[JobClassId]', '[ParentApplicationId]', '[JobPropertyName]', 0,'[PropertyContents]')

A quick refresh of the Timer Job Definitions page will show the new job ready to run. You may need to restart the Windows SharePoint Services Timer service before the new jobs are picked up though. Also, the Timer Job Status list will not show the new jobs until after the first time they are run. Be patient for those weekly tasks.

Again with the disclaimer: I saw in some posts that Microsoft doesn't take too kindly to messing around with the SharePoint databases. If you want continued support, tweak at your own risk. This should really only be used as a last resort in any case, as there are usually other options. I just happened across someone asking this question and decided to figure out how to do it.

SharePoint Alerts Not Sending

There are all sorts of reasons why your alerts might not be working in SharePoint. There are a whole slew of causes as to why with the most common being a backup and restore to a new server, farm, environment, etc. Googling will provide you with a bunch of options for the common problems pretty quickly.

None of those really worked for me. I did eventually solve my problem though. One post pointed me at the EventBatches table in the content database. This table contains two pieces of information, the last event time and the last event id processed. It turned out that the id in the EventBatches table was FAR greater than the last id in the EventCache table. This was probably the result of some less than optimal backup and restore voodoo that had gone on previously. Regardless, I set the id in the EventBatches table equal to the last EventCache id and alerts started flowing freely from that point on.

I wasn't able to find anyone that just came flat out and said that, so there it is.

Disclaimer: I saw in some posts that Microsoft doesn't take too kindly to messing around with the SharePoint databases. If you want continued support, tweak at your own risk.

Saturday, May 30, 2009

Comments By Google

A couple of posts back I wrote about some techniques one can use to fight comment spam on their web site. Just recently, I ran into another technique, one that is all the rage in so many other business aspects: outsource it.

It turns out that Google has an interesting little service called Friend Connect. The features include comments, ratings, authentication and moderation tools to name a few. I haven't set it up yet nor read all there is to read, but I'd be willing to bet they also throw some of their great spam filtering technology built for GMail at the comments as well. An API exists so that developers can make use of the information tracked by Google to provide a more interactive experience. Think privileges determined by karma and such. And with a reasonably trustworthy third party writing a large chunk of your code, developers can focus on all the aspects of the site that deliver content instead.

The biggest downside to this approach, and it can be a biggie in some cases, is that all that social data is going to be housed in a remote database. That's probably OK for a great number of sites out there, but for some applications, that may just be asking too much.

Another downside is that the look and feel is going to be limited unless you code all of your own controls against their API. That probably still does save some time. More importantly it means that you don't need to worry about the more demanding aspects of authentication. But once again, it means that outsourcing may not yield all the time saving benefits so many people think it will.

Tuesday, May 26, 2009

Middle Mouse Button Broken

I had a frustrating problem with my mouse tonight. Basically, the middle mouse button stopped working on my Razer Lachesis. I tried searching for a resolution and all I found were suggestions on how to clean the mouse. I'm too lazy for that.

I found that a simple way to test the buttons is to remap the button(s) in question to a keystroke; mouse button 3 to the number 3 in this case. I opened up notepad, clicked, and lo and behold, a 3 every time I click. What gives?

I started to switch the mouse button function back to 'windows button 3' and... what gives again? It's gone. As I switched through the other mouse profiles though I noticed that button 3 was almost always set to 'universal scroll'. "Let's try that," I thought to myself. I applied the new setting. Don't forget to apply the new setting! And look, the mouse is as good as new.

As near as I can figure it, I actually set the mouse button to 'windows button 3' at some point in some previous Windows version or application. The interesting thing is that the profiles are stored in the mouse, not the driver software. That's my guess at least as I removed the driver software and restarted the computer at one point and my middle mouse button was still pumping out 3's. Combine that with the fact that the signal is evidently a bit different between Windows 7 and whatever program I set it on, and a little bit of profile changing butter fingers on my part and voila - apparently broken button.

So, it turns out that my only real complaint about my nice, year old Lachesis is that it is too customizable for my clumsy self. Aside from that, this is one heck of a mouse.

Figured I'd share my experience in case some one else runs into this problem.

MOSS Search Access Denied

So I have been running into an issue on Office SharePoint Server where the search service ends with an access denied error when it runs.

"Access is denied. Check that the Default Content Access Account has access to this content, or add a crawl rule to crawl this content."

The fix for this in our situation turned out to come from Microsoft KB 896861.  A lot of solutions focus on making sure account permissions are set up correctly.  However, this lesser known issue is caused by a security feature in IIS that prevents reflection attacks.  The feature gets in the way of the shared services provider when it tries to crawl the site in a single server environment.  The recommended solution is to map a specific hostname to the loopback address.  Check out the KB for the details.

I originally found this article on SharePoint Blogs. Hopefully, promoting this solution will help someone else out there.

Tuesday, May 19, 2009

Techniques to Fight Comment Spam

The following post is a list of techniques that I have run across that attempt to deal with the problem of comment spam on sites.  This is a followup to my last post titled, "Preventing Comment Spam."

Requiring authentication is generally seen as a fairly effective approach to preventing comment spam.  However, the disadvantages are frequently enough to dissuade implementation on many sites.  One problem is that authentication is a feature requiring resources and expertise that not all development shops have enough of.  Another issue is that authentication results in a barrier to leaving comments on a site that casual visitors will probably not bother overcoming.  In addition, the challenge is not that great for technically adept spammers if the payoff is access to a large user base.

Building upon authentication is the idea of karma.  Forcing users to build karma based on quality of participation before they can take certain actions is usually a hurdle that is too high for most spammers to deal with.  Unfortunately, depending on your user base, it can be an equally high hurdle to legitimate participation.

Moderation is another technique that comes up often.  It is generally regarded as the only foolproof approach.  Simply put, every post made to the site is screened by a human being.  The downside of course is that if your site is heavily trafficked by spammers, weeding quickly becomes a task that takes up all of your time.

Filtering is another approach that can best be described as automated moderation.  As an example, I found ReverseDOS.  This is an easy to setup ASP.NET  HttpModule that reads all of the content of a request and determines whether or not the request is a spam attempt based on rules that you define.  The rules can include checking all or only a portion of the request against a set of regular expressions and can be turned on or off for each directory within a site.

Another suggestion along these lines was to create a central repository for tracking spam.  Sites could query the repository which would try to determine if the submitted content was spam based on past submissions, user feedback and a bit of good natured artificial intelligence.  Regardless of the technique, the idea of filtering is to cut down the number of spam comments to an amount manageable by other means.

Reverse Turing tests like CAPTCHA can sometimes be used to increase the difficulty of posting spam.  The problem is that the effectiveness of the most common implementation, retyping words presented as an image, wanes as image recognition tools get better and better.  The images must get more warped in order to prevent automated scanning, but that makes it more difficult for legitimate users as well.  E.g., Google's CAPTCHA for new email accounts is so difficult to read at times that I only get one out of four correct.

Throttling can be used in order to prevent any user from posting too many times.  Limits can be set on the number of items that can be created over a span of time or making sure that no user posts multiple comments back to back in a single thread.  The challenge here lies in identifying users.  If no authentication is used, relying on IP address is inconsistent at best and runs the risk of blocking legitimate users.

In the end, no single approach is probably good enough to stop spam. The pet project I am currently working on has been built with a mix of most of the techniques above.  I combined a bunch of existing frameworks with a little bit of custom code so it wasn't too much work.  At times I worry that I may have spent too much time on this aspect of the site.  Then again, the whole site was started as a learning endeavor.  If nothing else, I gained some knowledge and will have the tools in place to respond quickly if spammers begin to target the site.

Preventing Comment Spam

I have taken up the coding challenge of dealing with comment spam.  As with most topics that I write about, I am by no means an expert.  But I have done a lot of reading recently, and here are some of my observations.

There is no majority consensus on the single best approach to preventing comment spam.  Everyone agrees something must be done, but few people agree on which one method is the most effective.

The one point that most people do agree on is that multiple techniques are necessary in order to achieve the desired levels of spam reduction, ease of maintenance and usability for visitors.  The business of spam is based on the idea that by getting a lot of content in front of a lot of users it is likely that enough people will respond to make a profit.  Countering spam is a process of making it difficult enough for spammers to post to your site that their time is better spent elsewhere.  The challenge lies in creating a system that is easy enough for your users to participate in that is at the same time complex or smart enough to discourage spammers.

The most effective combination of tools varies depending on the site being targeted.  Your breadth of content and comment topics, user quantity and quality, and a host of other variables will determine which tools achieve the best results fighting spam.  The larger and wider ranging each of those dimensions is, the smarter your techniques will need to become.  At some point, the easiest approach to implement may be to screen submissions by hand.

There seems to be a general feeling that if enough sites take steps to reduce spam, the web can be made a better place for everyone.  Spam will probably never go away.  If it does, it will likely be because the infrastructure of the web was changed for the worse for everyone in some way.  But the idea is to make it difficult enough that spammers would make more money performing constructive services instead of annoying ones.

For a bit more on techniques used to prevent comment spam, check out this followup post.

Friday, May 15, 2009

Example Code, Patterns and OOD

So, I've titled this post a couple of times now and each time I reverse the order of the concepts. I apologize if it turns out to be backwards in the final draft and it throws the more inflexible of you for a loop.  Moving on...

I've noticed a few things about developers that I've worked with during my career. When creating solutions to problems, there are three common plans of attack they follow: they grab a piece of someone else's code from somewhere and shoehorn it in, they find a design pattern that more or less fits the problem at hand, or they think about the problem from an OOD perspective and plan out the code to come.

Example code sometimes gets a job done.  But it was originally written for someone else's job.  If this is always the course of action taken, there is a greater likelihood that the code will not mesh with the architecture or surrounding code that is already in place.

Design patterns are better.  They force you to think about the problem abstractly.  Then you can write code around them that both solves the problem and fits the architecture of your application.  However, at the core, they are still based on a design that is meant to be implemented in a particular way.  Sure there are enough design patterns out there to satisfy any need, but do you really want to memorize them all?

Knowing your object oriented design concepts is definitely the way to go: encapsulation, inheritance and polymorphism.  There is no design pattern I have seen that cannot be boiled down to a combination of different amounts of these concepts.  If you know how to use each of them, there are no problems you can't solve, and your code will fit into any architecture and follow whatever conventions you need it to.
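To make that claim a bit more concrete, here is the classic Strategy pattern boiled down to nothing but polymorphism.  This is my own toy sketch (the shipping classes are invented for illustration), shown in JavaScript to keep it short:

```javascript
// Two interchangeable "strategies": same method name, different behavior.
function FlatShipping() {}
FlatShipping.prototype.cost = function (order) { return 5; };

function WeightShipping() {}
WeightShipping.prototype.cost = function (order) { return order.weight * 2; };

// The "pattern" is nothing more than calling the same method on
// interchangeable objects: polymorphism, with the pricing details
// encapsulated behind each class.
function checkoutTotal(order, shipping) {
  return order.subtotal + shipping.cost(order);
}
```

Swap in a different shipping object and checkoutTotal never changes; that is the entire pattern.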

Admittedly, sometimes you just need to know how to add a particular CSS class to the fifth paragraph tag on a web page using JavaScript.  For that, a code example will definitely get you going in the right direction fastest.  And design patterns are excellent for instruction and communication.  How better to learn when to use the different design principles than by example?  And naming a properly chosen design pattern can save a lot of time when conveying the solution to a complex problem.
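As it happens, that fifth-paragraph task makes a nice tiny demonstration.  With jQuery it is roughly a one-liner, $("p:eq(4)").addClass("highlight"), while the plain-DOM sketch below (the helper name is my own invention) shows the mechanics underneath:

```javascript
// Hypothetical helper: add a CSS class to the nth (1-based) element of a list.
// Works on real DOM nodes or any object with a className string property.
// Note: Array.prototype.indexOf needs a shim in very old browsers (e.g. IE8).
function addClassToNth(elements, n, cls) {
  var el = elements[n - 1];
  if (!el) return false;  // no such element on the page
  var classes = el.className ? el.className.split(/\s+/) : [];
  if (classes.indexOf(cls) === -1) {  // avoid adding the class twice
    classes.push(cls);
    el.className = classes.join(" ");
  }
  return true;
}

// e.g. addClassToNth(document.getElementsByTagName("p"), 5, "highlight");
```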

But I feel that by considering a problem from an object oriented perspective first you will end up with the best solution most often.  Sometimes the best solution will turn out to be based on one of the other two tactics.  But this way, you know you arrived at the right one.

Fake Communities

I don't like fake community sites.  Let me qualify that a bit.  I've been finding a bunch of sites recently that create their content by screen scraping other community sites that I legitimately belong to.  I find them slimy for a couple of different reasons.  Yes, 'slimy' is the technical term.  Honest.  

First, they are trying to pass off someone else's work as their own.  They didn't go to the trouble of promoting themselves.  The site probably doesn't have any fresh ideas; definitely not in content and if they copied the content, they probably copied the features as well.  Why would I want to go to the site at all?

Second, they pollute search results.  When I go looking for information, I want the definitive source, not a copy, nothing inaccurate.  There are tons of sites like this that aren't community based.  I don't like those either.  But there's another reason that makes these community sites worse.

When I first started finding these sites, I happened upon them because I was trying to keep track of information about myself and my company.  I was trying to pay attention to what, if anything, the public might be saying about us so that we could respond and be good members of the community.  Anyhow, I found a site that had some information about me on it.  It was mostly outdated and some was wildly inaccurate.

I thought to myself, "I should probably fix that so there won't be any misunderstandings."  And then I realized that I'd been suckered.  Well almost suckered as I didn't actually take any action, but the point is...

There is a subtle tactic these sites use to make people join.  Once people see their information there, a strong sense of personal identity urges them to take charge of that data to make sure they will not be misrepresented.  Maybe I'm just paranoid.  But I can't believe that they accidentally got the wrong information when it's all publicly available from LinkedIn.

I felt that if I logged into that site, it validated all the questionable tactics that they used to bring me there.  The regurgitating of information from other sites.  Preying on people's sense of identity to create an account and fix the content. And the site is apparently trying to enter into competition with the sites that they steal the information from in the first place.  If I created an account, I felt I would be just another number that they could hold up to investors to 'prove' how much traffic their site was getting.

Maybe I've been reading too much Seth Godin and his honesty and up front marketing tactics are rubbing off on me.  But it doesn't change the fact that these fake community sites are more or less stealing other organizations' work and data and holding it up as their own in order to try to trick the public into using their sites.  And that just feels slimy to me.

Sunday, May 10, 2009

Another ELMAH Convert

I just tried out ELMAH and I am yet another convert.

This project is getting some attention all of a sudden, and it deserves it.  The project is a great library for an easy to use .NET logging utility.  I spent the last two or three hours incorporating it into a pet project of mine, and it is exactly what I was looking for.  I was able to add database logging to my web site and send emails through GMail.  From there I pull them into a FogBugz account, but that's another story.

The basic ELMAH setup is simple.  That article may look long, but that's all there is to it.  For the most part.  There are a few other coding gems out there that I also took advantage of.

This is a wiki page I came across about how to secure ELMAH for remote use.

Here is a great article on how to make ELMAH play nicely with SMTP servers that require SSL.  Namely GMail, but quite possibly Yahoo and others as well.  [Edit: Turns out this was a known issue and there is a fix in the current trunk of the project. Here's some info.]

And last but not least, this wonderful web page explains how to create your own ASP.NET MVC error handler attribute that makes ELMAH appear as if the two frameworks were designed for each other.

There's plenty more to learn about ELMAH such as signalling and the great features of the elmah.axd report tool, but the above resources will get you going pretty darn quick.

Great job on the project Atif.

Thursday, April 23, 2009

Dragging In Silverlight

I started writing this blog to help improve my communication skills. I thought it might be interesting to document a startup from the beginning. Much to my chagrin, though totally understandably, this is not why most people end up at my blog. According to Google Analytics, most people only want me for my code. Well, in the words of Seth Godin or the Kinks or Red Skelton, "Give them what they want."

The simplest piece of code I could dig out on short notice was my drag provider. Sure, this one has been done to death, but with the refactoring into a provider and another little convenience or two I thought it might still be nice to share. Before I get to the code I'm going to describe those useful bits as well as some assumptions the code makes. Though you can always scroll down and get what you really came here for.

First, the provider assumes that the drag element is a child of a canvas. This might be obvious since that is the easiest way, but I thought I'd lay it out there. The top most root layout of our application is a 1x1 Grid with child Grids and DockPanels for the real layout. A simple rule that we follow is that the RootLayout of any of our draggable controls must be a canvas. This works out well for us for too many reasons to list here, but in retrospect, you could probably dynamically add the canvas when creating the control. Either way, the end result is a 1x1 Grid that takes up the entire browser with its top most visible layer being the canvas that in turn contains the draggable element.

Second, the PositionDragElement routine in the sample goes to a bit of extra trouble to make sure that the element being dragged around the screen always stays in full view within the Silverlight plugin. If the mouse leaves the plugin, the drag element will follow the mouse around the edges of the plugin until the mouse returns or the button is released. This may not be desired in some cases, but for us, it prevents users from dragging dialogs out of the Silverlight application and not being able to close them.

Third, also about the PositionDragElement routine: it was pulled out of a common class used both in our draggable provider and our drag and drop framework.  If something looks out of place or doesn't work quite right, forgive me.  I did some on-the-fly munging to simplify this example a bit.  Also, you obviously don't need to use this provider as is.  It makes a decent example of how to perform dragging in many situations.

And so, here's the code.

public class DraggableProvider
{
    #region Constructors

    public DraggableProvider(FrameworkElement dragElement)
    {
        if (null == dragElement)
            throw new ArgumentNullException("dragElement");

        _dragElement = dragElement;

        _dragElement.MouseLeftButtonDown += new MouseButtonEventHandler(DragElement_MouseLeftButtonDown);
        _dragElement.MouseLeftButtonUp += new MouseButtonEventHandler(DragElement_MouseLeftButtonUp);
        _dragElement.MouseMove += new MouseEventHandler(DragElement_MouseMove);
    }

    #endregion

    #region Event Handlers

    void DragElement_MouseMove(object sender, MouseEventArgs e)
    {
        if (_isMouseDown)
        {
            var currentMousePosition = e.GetPosition(null);
            PositionDragElement(currentMousePosition);
            _lastMousePosition = currentMousePosition;
        }
    }

    void DragElement_MouseLeftButtonUp(object sender, MouseButtonEventArgs e)
    {
        _isMouseDown = false;
    }

    void DragElement_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
    {
        _isMouseDown = true;
        _lastMousePosition = e.GetPosition(null);
    }

    #endregion

    #region Private Fields

    private FrameworkElement _dragElement;
    private bool _isMouseDown;
    private Point _lastMousePosition;

    #endregion

    #region Private Methods

    private void PositionDragElement(Point currentMousePosition)
    {
        var xPosition = (double)_dragElement.GetValue(Canvas.LeftProperty);
        var yPosition = (double)_dragElement.GetValue(Canvas.TopProperty);
        var elementOriginPosition = new Point(xPosition, yPosition);

        var xDelta = currentMousePosition.X - _lastMousePosition.X;
        var yDelta = currentMousePosition.Y - _lastMousePosition.Y;

        var dragElementWidth = _dragElement.ActualWidth;
        var dragElementHeight = _dragElement.ActualHeight;

        // Verify that the drag element contains the mouse.
        // This is important when first picking up the element.
        var newX = elementOriginPosition.X + xDelta;
        if (currentMousePosition.X < newX)
            newX = currentMousePosition.X - .05 * dragElementWidth;
        else if (currentMousePosition.X > newX + dragElementWidth)
            newX = currentMousePosition.X - .95 * dragElementWidth;

        var newY = elementOriginPosition.Y + yDelta;
        if (currentMousePosition.Y < newY)
            newY = currentMousePosition.Y - .05 * dragElementHeight;
        else if (currentMousePosition.Y > newY + dragElementHeight)
            newY = currentMousePosition.Y - .95 * dragElementHeight;

        // Validate that the draggable item is still within the browser.
        // This takes precedence over the mouse staying inside the element.
        var rootPanel = Application.Current.RootVisual as Panel;
        newX = Math.Min(newX, rootPanel.ActualWidth - dragElementWidth);
        newX = Math.Max(newX, 0);
        newY = Math.Min(newY, rootPanel.ActualHeight - dragElementHeight);
        newY = Math.Max(newY, 0);

        _dragElement.SetValue(Canvas.LeftProperty, newX);
        _dragElement.SetValue(Canvas.TopProperty, newY);
    }

    #endregion
}
And now, this is how we make our dialogs draggable. We also add a background that stretches vertically and horizontally to make the dialog modal. Have fun.
var rootPanel = (Panel)Application.Current.RootVisual;
rootPanel.Children.Add(Background);
rootPanel.Children.Add(Dialog);
_draggableProvider = new DraggableProvider(Dialog);

Tuesday, April 7, 2009

New Partner at Milyli

OK, this is not really news in that it is not new. Not by six months or so anyway. But we gained a third partner back in October-ish and I never said anything about it.

I guess I'll also take this time to say that I'm not going to write much about my partners in general. They can expose their own lives and/or views if they so desire. I only mention it for a little bit of context to those two regular readers out there so they are not confused.

But I am retroactively, and even still, excited about it because we gained a person who complements our existing skill set wonderfully.

Roles Over Process

So I started this little rant a while back about processes. I realized that it wasn't process itself that was the culprit, but managers that micromanage by creating a process to define how their employees do their jobs. And now, after some illness and long-running issues at work, I can try to tackle the topic of a better way, in my humble opinion, to manage people.

The short version: create high level roles for each employee. Is this more difficult? Maybe, but I think the benefits are substantial.

One responsibility of a good manager is to define roles that describe what each position is accountable for. Do not think about how people will get their work done. At least, not any more than is needed to define the roles. How people work is process, and in the end, it is best for each employee to figure out the best process to accomplish their goals. They will end up knowing their jobs better than their manager ever could. Designers need to design an application that helps emu farmers keep track of their flocks. Developers need to deliver that application. Those tasks are what the respective roles are responsible for. If you can't figure out what everyone should be doing, they certainly won't be able to and no one will know how to work together.

That's another of the manager's responsibilities: figure out how to get the groups to work together. Help work through communication issues. That might be facilitating a meeting that determines a specification format (that is, what the specification contains, not how it gets written). Or it may require making a command decision on who ultimately has jurisdiction over apparently overlapping responsibilities. Do the designers need to convince the developers a feature should be implemented to spec, or are the final designs the last word? I've seen both approaches work.

Hold employees responsible for their work. If results are not up to standards, find out why and how you can help. Get people the tools and training they need. Create a good environment. Adjust roles and responsibilities as needed. Offer advice from your own experience while being careful not to lay down any laws. And be ready to make the tough decision to let someone go if they just don't fit in at your organization for some reason.

After having said all that, I realize these responsibilities are definitely not easy. That is probably the reason many people end up managing by process. But the benefits are profound. The best part is that skilled, creative workers will be happier when they are allowed to get a job done the way they want. This builds trust and a sense of ownership: intrinsic motivations to do a job well. The other side of that coin is that the manager does not need to keep tabs on the details of every iota of work. Instead, the manager is doing what they should be doing: taking care of employees, communicating, removing roadblocks where needed and holding people accountable.

I think the hardest part about this approach for most people is when it means giving up something that they enjoy doing. As a programmer, I am not whole-heartedly looking forward to the day when I turn development over to other people. That means I won't have the final say on how things get coded anymore. But how the code is written won't be my responsibility at that point either. I just need to make sure that I trust the people put in place to accomplish their goals. I hope that I will do this as well as the managers I have worked with that I admire. I hope that people will enjoy working for me just as much.

Saturday, February 28, 2009

When Is Process Bad?

I've been trying to come up with some rules of thumb as to when putting processes in place probably isn't the correct way to go. I started out trying to pick the tasks that people do day in and day out, but that didn't quite work. For instance, you cannot simply say that a developer writing code should have no processes. That's just foolish. There are some processes that almost all developers should always follow, e.g. use source control, commit early and often, etc.

In that light, my last post was definitely reactionary. Most processes are probably good especially if they're invisible. And if they're invisible, I probably do not even think of them as processes in the first place. At a high level, writing unit tests is part of the process of writing good code. While not invisible, it's definitely a process I don't mind having.

I think processes tend to get under my skin the most when they are used as a management technique for controlling the results of generally creative, non-repetitive tasks. Writing unit tests is a way of managing change. We always want to manage change the same way so that we know when something breaks before the customer does. But counting lines of code to determine how well the solution to a new problem was written doesn't work. It's hard to put a process in place for non-repetitive tasks because you don't necessarily know what the outcome should be.

The same goes for design. Yes, there are certain steps that can be identified as generally good, that you always want to do. You want to make sure that your design solves the users' problems. A minimal process for how you go about determining what the users' problems are is probably a good idea. Always observe the user in their natural environment, determine why they are using the software, etc. On larger teams, communicating the final design to many other people probably requires a specific document format so that information is easy to find for all the different teams that need it: QA, development, documentation, sales, etc.

But you cannot really put a process around the actual creative design of a feature. Trying to do so usually results in some sort of sign-off or management check-in that doesn't accomplish what you think it will. It doesn't mean that the best design gets implemented. The manager can't know what is best simply because they don't have the time to do the research and work of their multiple subordinates. Sure, they can make observations and recommendations, but the final decision should rest with the person doing the work. If anything, the only real result is that your subordinates start to feel untrusted.

To boil this all down, I believe that if a manager is trying to control the creative aspect of their subordinates work through processes, they are only getting in the way and sending a negative message to their workers.

Maybe all I've said here is that I dislike micromanagement; that's probably a fair paraphrasing. Regardless, what is the alternative? I said in my last post that there does need to be some sort of accountability and control, so where does that come from? If there's one thing I've learned about writing, it's to stick to a single topic per post, so you'll just have to wait for my next entry for ramblings on that subject.

Tuesday, February 17, 2009

The Process Isn't That Important

I'm not all that keen on process lately. Sure, it takes some process to get software written. Process is how we get jobs done. But in the end, within certain limits and circumstances allowing of course, how goals are achieved is nowhere near as important as achieving the goal.

I have seen teams brought to a standstill because the process indicated that the entire ten member brainstorming group had to come to a consensus on a design before the feature could be started. I have seen stakeholders start witch hunts because an otherwise smooth implementation was not following their imagined process of how software gets written. I've seen teams of otherwise friendly and rational groups of people torn apart because the processes in place did not give individuals the responsibility over how to do their work and put them at odds with each other.

Process should be minimized to the greatest possible extent. Use just enough to keep yourself organized while slowing yourself down as little as possible. There is a slew of policies out there that people believe make their organizations run more effectively but that are really only getting in the way. And once again I will admit that some policies can be advantageous and necessary. But even the necessary ones are probably getting somewhat in the way of the real goal of getting the next great iPhone application to market.

Why do people overuse processes? Part of the reason is that workers need to be held accountable for their work. If a manager doesn't know who is responsible for a piece of work, how will they know who needs help when the operation isn't running smoothly? How do you decide if new hires are needed when bottlenecks occur? How do you divide all the work that needs to be done in a way that scales? I can't really argue with any of those motivations.

The part that I do take issue with though is that processes are overused when they are not the best tool for the job. This is probably because small processes are just so easy to create. "Don't forget to CC HR when requesting time off." Simple, easy, not very time consuming considering you were probably already writing the email anyway. But it's a slippery slope and many people don't notice when they start to slide or might not be aware of other options. Also, once a mountain of process is in place, it can be difficult to roll it all back. It's easier just to append another small rule to the process.

I can hear the wags saying, "Chuck, none of that information is really new."

To which I respond, "See the blog subtitle."

"But this post is not particularly useful either. Of more benefit would be recognizing when a process isn't necessarily the right policy and what to put in place instead."

You know what? The wags probably got that one right. And so it will be a good topic for my next post.

Sunday, February 15, 2009

Focus First Form Field with jQuery

It took me a while to find this little gem and polish it up a bit, so I figured I would pass it on. The title of the post should be enough of a description of what I was trying to do, so here is the code.
function FocusFirstField() {
    var topIndex = null;

    var fields = $("input:visible:enabled:first, select:visible:enabled:first, textarea:visible:enabled:first");

    if (fields.size() > 0)
        topIndex = 0;

    for (var i = 1; i < fields.size(); i++)
        if (fields[i].offsetTop < fields[topIndex].offsetTop)
            topIndex = i;

    if (topIndex != null)
        fields[topIndex].focus();
}

The first line just defines a variable to hold the index of the control we finally identify as the topmost field in the form.

The second line uses jQuery to select the first visible, enabled element of each of the input, select and textarea HTML element types.

The third section primes the following loop. It checks that at least one element has been returned and sets the topIndex value to 0.

The fourth section of code loops through any other elements that have been returned and compares the offsetTop value of each to determine the topmost element.

The last line of the function checks whether any element of the selector types has been found and sets the focus to the one found to be the topmost.

The function is best called from the window load event, as the jQuery document ready event fires too early on some browsers:

$(window).load(FocusFirstField);

And the selector can obviously be tailored to fit your needs.

-- EDIT:

One of the things I just noticed about this is that I tried to make the code smart enough to take just about any CSS into consideration, hence the loop checking top offsets.  The ':first' selectors might work against this at times.  I included those selectors to limit the number of DOM elements returned, but they may cause things to not work all the time.

I think in most situations the code should be fine, but if your CSS really moves elements around à la CSS Zen Garden, you might need to remove the :first portion of each phrase in the selector.