Windows (Home) Server 2008

Last weekend I went down to the local (geek) hardware store and bought myself a 640GB drive and 2GB of RAM for my home-made Windows Home Server box (for a total of AU$150). After a painless hardware upgrade I then downloaded and installed the latest Virtual Server 2005. Using the Windows Home Server Power Pack 1 Beta, I installed the 640GB drive as a “back-up drive” instead of adding it to the Windows Home Server “disk pool”. This gave me two benefits – first, it meant I could use the drive to automatically back up the server shares; secondly, it meant the drive would not be affected by the routine “disk balancing” that Windows Home Server performs on the storage pool.

Windows Home Server Power Pack 1

So with plenty of spare RAM and disk space I could then easily create a virtual machine instance running Windows Server 2008 with a copy of SQL Server 2008 RC0. This is the first real chance I’ve had to take a look at the new features in SQL Server 2008 (the main reason for the install). Initial thoughts… new DataTypes – nice! Built in Change Tracking in conjunction with ADO.NET Sync Services… funky!

Billy Hollis on dnrTV

Just finished watching Billy Hollis demo a WPF application on dnrTV that he and his business partners have been working on for one of their clients. I thought it was a great example of how an application can take advantage of WPF's strengths to add some new UI interaction around common requirements for line-of-business applications. These included things like navigation, dirty checking, modal forms etc. Well worth watching. Billy also commented that the application was put together without the aid of a UI designer – just three developers with a bent for good UI design.

Dynamic Data

Tonight on the commute home I listened to a recent DNR podcast (show 349) where Carl and Richard interview Scott Hunter, a program manager on the ASP.NET team working on Dynamic Data. I'd heard enough about the topic before (e.g. his interview with Scott Hanselman) to understand the basics. But what I really liked about the DNR interview was when Scott described their intent to have it work with multiple client technologies – web forms, ASP.NET MVC, Silverlight and, one would have to assume, WPF.

The Dynamic Data attributes have been specifically placed in a UI-agnostic namespace – System.ComponentModel.DataAnnotations. It makes me feel all warm and fuzzy when I see the MS teams communicating and thinking “big picture”. I'm still hopeful we might get a System.ComponentModel.Validation namespace.
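For what it's worth, here's a rough sketch of the sort of model annotation the DataAnnotations namespace enables. The Customer class and its properties are just my invented example – note that nothing in it references a particular UI stack:

```csharp
using System.ComponentModel.DataAnnotations;

// Hypothetical model class - the attributes live in a UI-agnostic
// namespace, so web forms, ASP.NET MVC, Silverlight (and hopefully
// WPF) could all consume the same metadata.
public class Customer
{
    [Required]
    [StringLength(50)]
    public string Name { get; set; }

    [Range(18, 120)]
    public int Age { get; set; }
}
```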

"Star"tlingly Bad Code

Scott Hanselman’s got my attention this morning with his recent endeavour to launch a community driven reference application for WPF. If you haven’t already read about BabySmash! go read about it here and check it out on codeplex here.

Scott mentions that the whole thing was put together whilst his wife was watching a rather unappealing movie. He then goes on to mention that the code is bad – really bad. What gets me though is that he has the audacity, nay the gall to suggest that just because he is capable of writing lame-ass code that by some bizarre form of association that his readers too sometimes write code like this. Now – I’ve downloaded the source code for BabySmash! – and let me tell you the code is pretty ugly. If Scott thinks that I’d ever let code like that be published under my name then… wait… ah… BUGGER! He’s used some of my code! What’s worse – he used probably the MOST lame-ass piece of code I’ve ever blogged (oh heck – yes OK I’m sure there are even worse examples of mine).

The code in question was a class I blogged about in November last year. I should point out that I made it very clear at the top of the post that this was more or less a rant – “dribble code” if you will. It's a very simple class that derives from Shape to draw a Star.

The method I used to draw the star was to take a triangle and simply “stamp” it out multiple times, rotating it around a central axis. The very clever WPF GetOutlinedPathGeometry is then used to clean up all the mess and consolidate into just the outline of the star.

Here’s the original code that appears now in Scott’s BabySmash:

public static Geometry CreateStarGeometry(int numberOfPoints)
{
    GeometryGroup group = new GeometryGroup();
    group.FillRule = FillRule.Nonzero;

    Geometry triangle = PathGeometry.Parse("M 0,-30 L 10,10 -10,10 0,-30");
    group.Children.Add(triangle);

    double deltaAngle = 360.0 / numberOfPoints;
    double currentAngle = 0;
    for (int index = 1; index < numberOfPoints; index++)
    {
        currentAngle += deltaAngle;
        triangle = triangle.CloneCurrentValue();
        triangle.Transform = new RotateTransform(currentAngle, 0, 0);
        group.Children.Add(triangle);
    }

    Geometry outlinePath = group.GetOutlinedPathGeometry();
    return outlinePath;
}

The code I replaced it with the first time I actually used the Star class in an application (for a rating indicator) is shown below. It takes what I think is a much neater approach and is a little easier to configure, with an inner and outer radius.

public static Geometry CreateStarGeometry2(
    int numberOfPoints,
    int outerRadius,
    int innerRadius,
    Point offset)
{
    List<PathSegment> segments = new List<PathSegment>();
    double angleOffset = Math.PI * 2 / numberOfPoints;
    for (double angle = 0; angle < Math.PI * 2; angle += angleOffset)
    {
        double innerAngle = angle + angleOffset / 2;
        Point outerPoint = new Point(
            Math.Sin(angle) * outerRadius + offset.X,
            Math.Cos(angle) * -outerRadius + offset.Y);
        Point innerPoint = new Point(
            Math.Sin(innerAngle) * innerRadius + offset.X,
            Math.Cos(innerAngle) * -innerRadius + offset.Y);
        segments.Add(new LineSegment(outerPoint, true));
        segments.Add(new LineSegment(innerPoint, true));
    }

    List<PathFigure> figures = new List<PathFigure>();
    figures.Add(new PathFigure(
        new Point(0 + offset.X, -outerRadius + offset.Y),
        segments, true));
    Geometry star = new PathGeometry(figures);
    return star;
}

As embarrassed as I am that my Star class has meandered its way to such a large audience – I must admit I kinda got a kick out of seeing this in Scott's code.

In terms of the BabySmash! application itself – I think this is a great idea. Firstly from the point of view that I also have two young kids who love spending time with me on the computers. The younger, a two-year-old, is at the stage where she recognizes most of the alphabet and loves typing random letters and seeing them appear on the screen. I wrote my WinForms animation sample primarily because my oldest (then 4) got a kick out of watching the bits dance around the screen (a later version included A–Z characters). [Alas he's now turned 5 and has since moved on to solving Bloxorz levels – scary!]

Secondly, the idea of a community-driven WPF application just sounds like a great one. I've been very slowly trying to build my own “hobby” application using WPF – stumbling and being sidetracked at every turn – all very educational for me, but certainly delaying any semblance of a deliverable. Maybe Scott's approach is the way to go – hack it together ASAP, then let the community work together to discuss, refactor and enhance?

Thoughts on Mesh Remote Desktop

One of the features of Microsoft’s Mesh is to provide a remote desktop connection to any of your devices via the Live Desktop. That amounts to being able to remote desktop from any of the supported browsers. Microsoft’s Windows Home Server already provides me with this experience and whilst I can’t say I’ve used it a lot – it has helped me out a few times in a big way.

I had assumed that the technology behind the Windows Home Server remote desktop via the browser was the same technology that would be powering Windows Live Mesh. Whether it is or not I can't tell – but the user experience is certainly different.

When you connect to Live Mesh it starts a remote desktop session in a window – complete with the standard “Mesh” sidebar window “clipped” onto the right-hand edge. What's odd though is that the remote desktop uses the resolution of the remote hardware and is scaled to fit in the window – and the scaling does not preserve the aspect ratio. So in my scenario – let's say I connect to my home desktop via my Fujitsu Lifebook. That means I'm viewing a 1920×1200 + 1280×1024 multi-monitor desktop on a laptop screen with a resolution of 1024×768. Umm… not very useful – showing a 3200×1200 very wide desktop squished onto a 1024×768 display. It has a mode that allows switching between actual and fixed size – but at those resolutions it's kinda akin to using remote desktop from a PDA into a PC – peering through a tiny window and endless scrolling.

Live Mesh Remote Desktop - Squished

Another point of difference is that, unlike a standard remote desktop connection, it appears you have to be signed in on each device for Live Mesh to recognize it. So no chance of using wake-on-LAN to get your machine up and running and ready to connect. You'd also have to have it auto-logon – hmm… suitable for my Media Center PC maybe – but not something I'd want on my other devices.

In fact the more I play with this, the more I begin to think it's not really remote desktop as we know it. For starters the remote machine stays active – i.e. mouse moves and everything you're doing in the remote session can be seen on the remote machine – unless you click the “Hide desktop on remote device” button.

Secondly, it's so much slower than a standard remote desktop. Even using it on a LAN it's painfully slow to render the desktop background – yep, that's right, it's rendering everything using bitmaps – and there don't seem to be any configuration options.

Thirdly, because it really just seems to be screen-scraping the remote device (as opposed to actually logging in and creating a new virtual session) you get all kinds of weird behaviours. For example, when I remote connect to the laptop the logon screen shows the fingerprint reader user interface. Umm… not really much point ’cause the hardware I’m connecting from doesn’t have a fingerprint reader.

Early days though – hopefully the feedback being logged up on Connect will help shape the product.

Live Mesh Connect

I’m still waiting on an invite to the developer Tech Preview to see what the Live Mesh APIs look like.

P.S. The folder sharing Live Mesh stuff seems to be working very nicely. I already prefer it to FolderShare.

Windows Live Mesh Invite

Woohoo – just got an invite to Windows Live Mesh – am installing on a few machines now.

As an aside: I've been an intermittent user of FolderShare since before it was acquired by Microsoft. It will be interesting to see how they plan to merge FolderShare, SkyDrive and Live Mesh going forwards.

BindingGroup in .NET 3.5 SP1?

Of all the new features trumpeted for .NET 3.5 SP1 recently, the one bullet point that really caught my attention was this little gem (from Brad Abrams' blog).

A new BindingGroup in System.Windows.Data (additional databinding support)

At this stage I haven't downloaded and installed the bits. Anyone know what this “BindingGroup” refers to? I haven't heard/seen it mentioned anywhere else.

[Update 28-May-2008: The WPF Performance blog provides some further hints on the purpose of BindingGroup]

Item-Level Validation – by using Binding Groups this applies validation rules to an entire bound item. For example, it can enable a validate-and-commit scenario for a form with a few bindable edit fields. (Available in final RTM bits only.)
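Reading between the lines, I'd guess usage ends up looking something like the sketch below – a group assigned at the container level so a single rule can validate the whole item, then an explicit commit from the Save handler. Pure speculation on my part until the RTM bits land (HolidayDateRangeRule and editPanel are invented names):

```csharp
// Speculative sketch of item-level validation with BindingGroup.
BindingGroup group = new BindingGroup();
group.ValidationRules.Add(new HolidayDateRangeRule()); // hypothetical rule that sees the entire bound item
editPanel.BindingGroup = group;

// In the Save/Submit handler: validate the whole item and push
// all pending edits back to the source in one hit.
if (editPanel.BindingGroup.CommitEdit())
{
    Save();
}
```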

Applying MetaData to WPF Bindings

This post describes how I'm applying MetaData defined in my model to WPF controls at runtime. The goal is to keep the XAML concise – avoiding clutter from property settings that obscure the form layout intent – and to ensure the UI stays in sync with the model across all forms in the application. Refer to Karl Shifflett's recent passionate post about why MetaData is important.

In WinForms I've traditionally used the data binding to identify how MetaData should map to controls. So the following describes my attempt at the same thing in WPF.

I'm using attributes to define MetaData directly against my model classes. I'd like it to be more open than that – for instance extracting the MetaData from a database, config file etc. – but attributes are a good place to start. Here's an example property from my Holiday model class.

[Mandatory]
[Annotation("Name", "Name of this holiday")]
[StringData(MaximumLength=50, Case=CharacterCasing.Upper)]
public string Name
{
    get { return _name; }
    set
    {
        if (!Object.Equals(_name, value))
            _name = value;
    }
}

So in this example I'm using three custom attributes. [Mandatory] is pretty self-explanatory. Whilst it's not used by the MetaData Applicator, it is used by the ValidationEngine to create a MandatoryRule.

The [Annotation] attribute derives from System.ComponentModel.DisplayNameAttribute and extends it by including extended descriptions for tooltips, possible help keys etc.
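I haven't shown the attribute itself, but based on that description it needn't be much more than the following sketch (the Description and HelpKey members are my choice of names for illustration):

```csharp
// Sketch only - extends DisplayNameAttribute with an extended
// description (for tooltips) and an optional help key.
public class AnnotationAttribute : System.ComponentModel.DisplayNameAttribute
{
    public AnnotationAttribute(string displayName, string description)
        : base(displayName)
    {
        Description = description;
    }

    public string Description { get; private set; }
    public string HelpKey { get; set; }
}
```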

The third [StringData] attribute is the one that defines the MetaData specific to string data types. The attributes themselves are defined in a neutral namespace and assembly (Spencen.MetaData) with no references to any presentation assemblies.

Taking a look at the StringDataAttribute class, it's just a bunch of properties with no inherent behaviour (like most attributes).

[MetaData]
public class StringDataAttribute : StringLengthAttribute
{
    public StringDataAttribute() : base()
    {
    }

    #region Public Properties
    public string Format { get; set; }
    public Type DataType { get; set; }
    public CharacterCasing Case { get; set; }
    public bool MultiLine { get; set; }
    public bool AllowTab { get; set; }
    public bool AllowNewline { get; set; }
    #endregion
}

The StringDataAttribute class is itself decorated with a [MetaData] attribute. This tells the MetaData Applicator that it needs to be considered when applying MetaData properties to the UI. [The StringLengthAttribute that this class derives from is actually another ValidationRule attribute that results in the creation of the StringLengthValidationRule.]

Now the tricky part in all of this was trying to “hook” into the WPF Binding pipeline. I tried all the likely approaches – inheriting from Binding, looking at BindingExpression and eventually creating a custom MarkupExtension that offloads most of the work to the standard Binding MarkupExtension.

Once I got that far I looked around and found that the MarkupExtension appears to be the current approach. Mine is currently a very simplified implementation – as I later found out there are much better examples around including this one or the one used by the Enterprise Library Contrib’s Standalone Validation Block.

Having a customised Binding gives several advantages, not the least of which is applying default values for the Binding itself. Ever got tired of adding ValidatesOnDataErrors=true, ValidatesOnExceptions=true, NotifyOnValidationError=true, UpdateSourceTrigger=UpdateSourceTrigger.PropertyChanged to every Binding!?
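For the defaults alone, even something as simple as the following Binding subclass (a stripped-down sketch, not my actual MarkupExtension) takes care of that boilerplate:

```csharp
// Sketch: a Binding subclass that bakes in the defaults listed above.
// Because Binding is itself a MarkupExtension, it can be used directly
// in XAML, e.g. Text="{local:DefaultedBinding Name}".
public class DefaultedBinding : Binding
{
    public DefaultedBinding()
    {
        ValidatesOnDataErrors = true;
        ValidatesOnExceptions = true;
        NotifyOnValidationError = true;
        UpdateSourceTrigger = UpdateSourceTrigger.PropertyChanged;
    }

    public DefaultedBinding(string path) : this()
    {
        Path = new PropertyPath(path);
    }
}
```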

Anyhow – more on that later – for now I use the following code within the ProvideValue() method of my custom MarkupExtension.

    IProvideValueTarget valueTarget = (IProvideValueTarget) serviceProvider.GetService(
        typeof(IProvideValueTarget));
    DependencyObject dependencyObject = valueTarget.TargetObject as DependencyObject;
    DependencyProperty dependencyProperty = valueTarget.TargetProperty as DependencyProperty;
    if (dependencyObject != null && dependencyProperty != null)
    {
        if (dependencyObject is FrameworkElement)
        {
            object dataContext = ((FrameworkElement) dependencyObject).GetValue(
                FrameworkElement.DataContextProperty);
            if (Applicator != null)
            {
                PropertyDescriptor property = TypeDescriptor.GetProperties(dataContext).Find(_path, false);
                if (property != null)
                {
                    foreach (Attribute propertyAttribute in property.Attributes)
                    {
                        // Check if the attribute class is itself decorated with a MetaData attribute.
                        if (TypeDescriptor.GetAttributes(propertyAttribute).Contains(new MetaDataAttribute()))
                        {
                            Applicator.ApplyTo(propertyAttribute, dependencyObject, dependencyProperty);
                        }
                    }
                }
            }
        }
    }

Essentially it's just checking each Binding and then looking for [MetaData]-decorated attributes on the data source. [Note that this code overly simplifies resolving the DataContext and Path. Remember the path is just that – not necessarily a simple property name; it can even contain things like indexers.] Once it's found an attribute it calls out to the statically assigned Applicator.

The Applicator simply consists of a registered list of types and IMetaDataApplicators. Prior to any custom data Binding, the application registers those [MetaData] attributes that it wishes to apply to the UI with the Applicator on the markup extension. Like so…

    MetaDataExtension.Applicator = new MetaDataApplicator();
    MetaDataExtension.Applicator.Register(    // registration call reconstructed
        typeof(StringDataAttribute), new StringDataApplicator());

The StringDataApplicator class then does whatever it desires with the StringDataAttribute, target object and property that it's been given. Here's a simple, partially complete example:

    public class StringDataApplicator : IMetaDataApplicator
    {
        public void ApplyTo(Attribute attribute, object targetObject, object targetProperty)
        {
            StringDataAttribute stringData = attribute as StringDataAttribute;
            if (stringData == null)
                throw new InvalidOperationException("StringDataApplicator only supports StringDataAttribute.");

            TextBoxBase textBoxBase = targetObject as TextBoxBase;
            if (textBoxBase != null)
            {
                textBoxBase.AcceptsTab = stringData.AllowTab;
                textBoxBase.AcceptsReturn = stringData.AllowNewline;
            }

            TextBox textBox = targetObject as TextBox;
            if (textBox != null)
            {
                switch (stringData.Case)
                {
                    case Spencen.MetaData.CharacterCasing.Lower:
                        textBox.CharacterCasing = System.Windows.Controls.CharacterCasing.Lower;
                        break;
                    case Spencen.MetaData.CharacterCasing.Upper:
                        textBox.CharacterCasing = System.Windows.Controls.CharacterCasing.Upper;
                        break;
                    case Spencen.MetaData.CharacterCasing.Camel:
                        // TODO: Put hooks in to do formatting.
                        break;
                    case Spencen.MetaData.CharacterCasing.Title:
                        // TODO: Put hooks in to do formatting.
                        break;
                }
                textBox.MaxLength = stringData.MaximumLength;
            }
        }
    }

The XAML to bind the Holiday class's Name property using the MetaData MarkupExtension would be:

    <TextBox Grid.Column="1" Grid.Row="0" Width="150" Text="{meta:MetaData Name}"/>

The rendered TextBox would use the Binding properties defined as the default on the MetaData MarkupExtension (as opposed to the standard Binding class defaults). It will have its MaxLength set to 50 and entry forced to uppercase characters. Of course there are plenty of other properties on the StringDataAttribute that could have been used. For example:

  • Defining a DataType that specifies a type that is used as an IValueConverter and/or formatter. This can be used to setup all the necessary plumbing for allowing data entry of specific string types – such as phone numbers, e-mail addresses etc.
  • Using MaximumLength to determine the optimum length for a TextBox. Again this helps with consistency – all 4 character code fields are automatically set to the standard width for a four character field.
  • Using the AnnotationAttribute to set the Tooltip.


Asynchronous Validation

Occasionally I've wanted to execute Validation Rules that take a significant time to execute (anything more than half a second, for example). Normally these involve some cross-tier communication, e.g. database access, a web service call etc. Examples of these types of validation include:

  1. Validating that a field is unique – for example when entering a new Inventory Item which requires a unique code. This could also require a unique combination of field values, for example an item name that must be unique during its effective lifetime specified by a from/to date.
  2. Validating stock levels for a selected product.

It could be argued that these types of validation are best performed by the business layer on the application server after the user has committed the transaction, i.e. pressed the Save/Submit button. Of course the rules must be validated at that point anyway – since the business layer on your application server should never trust any data being sent to it. But that doesn't stop us using the same rules to provide timely warnings to the user prior to them submitting a form full of data.

The following code shows a sample ValidationRule designed to execute asynchronously.

/// <summary>
/// Sample validation rule that executes asynchronously by default.
/// </summary>
public class SampleAsyncRule : ValidationRule
{
    private int _milliSecondsToDelay;

    public SampleAsyncRule(int milliSecondsToDelay)
    {
        _milliSecondsToDelay = milliSecondsToDelay;
        IsAsync = true; // Setting IsAsync to true ensures the Validate method is executed on a background thread.
    }

    public override ValidationResult Validate(object value, System.Globalization.CultureInfo cultureInfo)
    {
        // This call could be replaced with a call to the application service tier via a web service, remoting etc.
        Thread.Sleep(_milliSecondsToDelay);

        Random random = new Random();
        if (random.Next(100) < 50)
            return new ValidationResult(ContentSeverity.Error, "Async validation has determined an error.");

        return ValidationResult.ValidResult;
    }
}

When validating a registered rule the FormValidator checks the ValidationRule.IsAsync flag. If set to true it executes the Validate method on the rule using an asynchronous delegate call. The rule is interpreted as having returned a “pending” ValidationResult which will put the Validator in an indeterminate state (assuming it was previously in a Valid state). When the async delegate completes a callback method on the Validator is fired which extracts the real ValidationResult and removes the temporary “pending” result.

protected override void ValidateInternal(object validationSource)
{
    foreach (Spencen.Validation.Rules.ValidationRule rule in RegisteredRules[validationSource])
    {
        Spencen.Validation.ValidationResult result;
        if (rule.IsAsync)
        {
            AsyncValidateCaller caller = new AsyncValidateCaller(rule.BeginValidate);
            IAsyncResult asyncResult = caller.BeginInvoke(validationSource,
                ValidateCallback, caller);
            result = new AsyncValidationResult(asyncResult);
        }
        else
        {
            result = rule.Validate(validationSource, CultureInfo.CurrentCulture);
        }
        ExtractErrors(result);
    }
    OnValidated(EventArgs.Empty);
}