Thursday, September 24, 2009

Guid properties and XAML serialization in Silverlight

Today I found that the XAML parser in Silverlight can’t handle Guid properties; you need to supply a type converter for them. In WPF, Guid properties are handled automatically.

To get it working, add this converter to your project:

    public class GuidConverter : TypeConverter
    {
        public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
        {
            if (sourceType == typeof(string))
            {
                return true;
            }

            return base.CanConvertFrom(context, sourceType);
        }

        // The XAML parser reads the attribute value as a string, so the
        // destination type for serialization is string as well.
        public override bool CanConvertTo(ITypeDescriptorContext context, Type destinationType)
        {
            if (destinationType == typeof(string))
            {
                return true;
            }

            return base.CanConvertTo(context, destinationType);
        }

        public override object ConvertFrom(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value)
        {
            return new Guid((string)value);
        }

        public override object ConvertTo(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value, Type destinationType)
        {
            return ((Guid)value).ToString();
        }
    }

And then place this attribute on top of the Guid property:

    [TypeConverter(typeof(GuidConverter))]
    public new Guid SystemId
    {
        get;
        set;
    }
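For a quick sanity check outside of XAML, the converter can be exercised directly, the same way the parser would. This is just a sketch that assumes the GuidConverter class above is in scope:

```csharp
using System;
using System.Globalization;

// Sketch: driving GuidConverter by hand, mimicking what the XAML parser does.
class GuidConverterCheck
{
    static void Main()
    {
        var converter = new GuidConverter();
        const string text = "936DA01F-9ABD-4D9D-80C7-02AF85C822A8";

        // string -> Guid: what the parser does when reading the attribute value.
        var id = (Guid)converter.ConvertFrom(null, CultureInfo.InvariantCulture, text);

        // Guid -> string: the serialization direction.
        var roundTrip = (string)converter.ConvertTo(null, CultureInfo.InvariantCulture, id, typeof(string));

        Console.WriteLine(roundTrip); // the same Guid, in lowercase canonical form
    }
}
```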

Works on my machine!

Have fun,


Tuesday, September 15, 2009

Get started with .NET RIA Services

This week I finally got the time to do a deep dive into the world of Silverlight and RIA Services, and I’m impressed. This is a great framework that takes away a lot of the pain we had before in business application development. Most applications have a large data-driven portion, and this framework reduces the code you need to write for such features to almost nothing.

To get started download and install:

Visual Studio Tools for Silverlight 3

Expression Blend 3 (Not really required but very useful to design Silverlight XAML controls.)

Silverlight 3 Toolkit

.NET RIA Services (Don’t forget to download the overview PDF because it contains documentation and hands-on labs.)

SQL Server 2008 Express Edition with Advanced Services

SQL Server 2008 Sample Databases (For this one I chose to install only the files and then ran the scripts in Management Studio.)

The first project

Once everything is installed you can find the Silverlight Business Application template in the Silverlight area for new projects.

[Figure: creating the project]

This template will output two projects: one for the Silverlight client and one for the Web Application that contains the server side of the .NET RIA Services.

[Figure: project structure]

If you go to the properties page of the Silverlight client you will see a setting called .NET RIA Services link. This setting tells the RIA extensions installed in Visual Studio to project code from the given Web Site into this project. But what does projecting mean?

[Figure: project link]

If you enable the option to see all files in the client project you will see a folder called Generated_Code. It contains one file with a name similar to that of the web site, which suggests they are somehow related. And they are. Projecting means that parts of the code we write on the server get transformed into equivalent client-side code. The word transform is important here because in some cases it is not a simple copy.

[Figure: exploring code projection]
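To make the projection concrete: the server side of a Business Application typically exposes a domain service, and it is the domain service that RIA projects into the client. The sketch below uses hypothetical entity and context names and the base classes from the RIA Services CTP of the time (exact namespaces vary between CTP releases), so treat it as an outline rather than exact code:

```csharp
using System.Linq;
// EnableClientAccess and LinqToEntitiesDomainService come from the
// .NET RIA Services CTP assemblies; check the overview PDF for exact namespaces.

// Sketch: a hypothetical domain service over an imaginary CatalogEntities EF context.
// [EnableClientAccess] marks the class for projection into the Silverlight client.
[EnableClientAccess]
public class CatalogDomainService : LinqToEntitiesDomainService<CatalogEntities>
{
    // Query methods that return IQueryable become load operations on the
    // generated client-side DomainContext.
    public IQueryable<Product> GetProducts()
    {
        return this.Context.Products.OrderBy(p => p.Name);
    }
}
```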

In this article I am following most of the steps described here.

More advanced stuff – Entity Framework and POCOs

The first thing to get is this sample. If you are already using Visual Studio 2010 there is a better approach, because POCOs are already supported in a CTP here. What these downloads contain is the infrastructure to use simple C# objects as entities in Entity Framework.


Have Fun,

Deep dive into WCF Channels

The binding element reads configuration at both the client and the server side, and assembles the channel factory and the channel listener respectively. It must put the binding elements in the correct order. The recommended order is TransactionFlow, ReliableSession, Security, CompositeDuplex, OneWay, StreamSecurity, MessageEncoding and, last, Transport.

[Figure: binding element]
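As a sketch, the recommended order translates into code like this when assembling a CustomBinding by hand (only a subset of the elements is used here; the relative order is the point):

```csharp
using System.ServiceModel.Channels;

static class BindingOrderSketch
{
    // Sketch: assembling a CustomBinding in the recommended element order.
    // Elements that are not needed (security, composite duplex, one-way, ...)
    // are simply left out; the relative order of the rest is what matters.
    public static CustomBinding Create()
    {
        return new CustomBinding(
            new TransactionFlowBindingElement(),     // protocol elements first
            new ReliableSessionBindingElement(),
            new TextMessageEncodingBindingElement(), // message encoding next to last
            new HttpTransportBindingElement());      // the transport always goes last
    }
}
```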

The following picture illustrates the relationship between the ChannelListener and the ChannelFactory in WCF. The listener creates the channel on the server side and the factory on the client side. There are several interfaces for the kinds of channel that can be built; in WCF literature those are known as shapes.

[Figure: factory and listener architecture]

One of the possible shapes is the request-reply shape. I like to think of the channel shape as the way each party in the communication sees the channel. In this example the client wants to use the channel to send requests, so it is natural that it sees the channel as an IRequestChannel. On the other hand, the server gets the requests from the channel and wants to use it to send the reply back to the client, so it sees the channel as an IReplyChannel. By the way, if you are looking for the method to send the reply back, it is a member of the RequestContext class.

[Figure: request-reply channel shape]
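The two views of the channel can be sketched like this. The `factory` and `listener` parameters are assumptions here (they would come from BuildChannelFactory / BuildChannelListener on a configured binding), and in reality the two halves run in different processes:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

static class ShapeSketch
{
    // Sketch: how each party uses its shape once the channel stack is built.
    // Client and server halves would normally execute in separate processes.
    public static void RequestReply(IChannelFactory<IRequestChannel> factory,
                                    IChannelListener<IReplyChannel> listener)
    {
        // Client side: sees the channel as IRequestChannel and sends a request.
        IRequestChannel client = factory.CreateChannel(new EndpointAddress("http://localhost/sample"));
        client.Open();
        Message reply = client.Request(Message.CreateMessage(MessageVersion.Default, "urn:sample:echo"));

        // Server side: sees the channel as IReplyChannel; the reply goes out
        // through the RequestContext, not through the channel itself.
        IReplyChannel server = listener.AcceptChannel();
        RequestContext context = server.ReceiveRequest();
        context.Reply(Message.CreateMessage(MessageVersion.Default, "urn:sample:echoReply"));
    }
}
```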

Inside the Transport Binding Element

When assembling the channel, one of the first things WCF does is check which shapes the channel supports. It does that by calling CanBuildChannelListener and CanBuildChannelFactory.

This snippet shows the source code for a channel with Request and Reply as well as Input and Output shapes.

    public override bool CanBuildChannelFactory<TChannel>(BindingContext context)
    {
        return
            typeof(TChannel) == typeof(IOutputChannel) ||
            typeof(TChannel) == typeof(IRequestChannel);
    }

    public override bool CanBuildChannelListener<TChannel>(BindingContext context)
    {
        return
            typeof(TChannel) == typeof(IInputChannel) ||
            typeof(TChannel) == typeof(IReplyChannel);
    }

At some point WCF will call the factory methods to get the listener on the server side and the factory on the client side. In these methods we should validate whether it is possible to assemble the factory or the listener based on the current state of the binding, and then instantiate it.

    public override IChannelFactory<TChannel> BuildChannelFactory<TChannel>(BindingContext context)
    {
        if (context == null)
        {
            throw new ArgumentNullException("context");
        }

        if (!this.CanBuildChannelFactory<TChannel>(context))
        {
            throw new InvalidOperationException(string.Format("Channel Not Supported - {0}", typeof(TChannel).Name));
        }

        if (base.ManualAddressing)
        {
            throw new InvalidOperationException("Manual Addressing Not Supported");
        }

        return new CustomChannelFactory<TChannel>(this, context);
    }

    public override IChannelListener<TChannel> BuildChannelListener<TChannel>(BindingContext context)
    {
        if (typeof(TChannel) == typeof(IReplyChannel))
        {
            return (IChannelListener<TChannel>)(new CustomReplyChannelListener(this, context));
        }

        if (typeof(TChannel) == typeof(IInputChannel))
        {
            return (IChannelListener<TChannel>)(new CustomInputChannelListener(this, context));
        }

        throw new InvalidOperationException("Unsupported channel listener.");
    }

In the channel listener we must provide a way to accept a new channel to receive messages from clients. One important thing I noticed is that on the server side it is important to implement the asynchronous way of accepting the channel, while on the client side it is enough to implement only the synchronous way of sending messages (requests) to the server.

The OnAcceptChannel method can be called multiple times, in which case there would be several open channels at the same time. This sample is based on the Null channel sample, where that would not make sense; this is why an AutoResetEvent is used. The component requesting a second channel will block until the first component releases it or the operation times out.

If you are using multiple channels, don’t forget to disconnect the event handlers in the close handler to prevent memory leaks.

    protected override IReplyChannel OnAcceptChannel(TimeSpan timeout)
    {
        if (base.State != CommunicationState.Opened)
        {
            throw new CommunicationObjectFaultedException(string.Format("The channel {0} is not opened", this.GetType().Name));
        }

        if (currentChannel != null)
        {
            // Please revisit this if you must accept multiple channels.
            // Honor the timeout instead of waiting forever.
            waitChannel.WaitOne(timeout, true);

            lock (ThisLock)
            {
                // re-open channel
                if (base.State == CommunicationState.Opened && currentChannel != null && currentChannel.State == CommunicationState.Closed)
                {
                    currentChannel = new CustomReplyChannel(this, localAddress);
                    currentChannel.Closed += new EventHandler(OnCurrentChannelClosed);
                }
            }
        }
        else
        {
            lock (ThisLock)
            {
                // open channel for the first time
                currentChannel = new CustomReplyChannel(this, localAddress);
                currentChannel.Closed += new EventHandler(OnCurrentChannelClosed);
                int count = CustomListeners.Current.Add(filter, this);
            }
        }

        return currentChannel;
    }

    protected override IAsyncResult OnBeginAcceptChannel(TimeSpan timeout, AsyncCallback callback, object state)
    {
        this.OnAcceptChannel(timeout);
        return new CompletedAsyncResult(callback, state);
    }

    protected override IReplyChannel OnEndAcceptChannel(IAsyncResult result)
    {
        CompletedAsyncResult.End(result);
        return currentChannel;
    }

The channel factory holds the logic to create the channel. It is also important to understand the rationale behind this: the client side creates the channel and the server side accepts it. As you can imagine, the server can reject it and throw an exception.

    protected override TChannel OnCreateChannel(System.ServiceModel.EndpointAddress address, Uri via)
    {
        if (!string.Equals(address.Uri.Scheme, this.element.Scheme, StringComparison.InvariantCultureIgnoreCase))
        {
            throw new ArgumentException(string.Format("The scheme {0} specified in address is not supported.", address.Uri.Scheme), "remoteAddress");
        }

        if (typeof(TChannel) == typeof(IOutputChannel))
        {
            return (TChannel)(object)new CustomOutputChannel(this, address, via);
        }
        else if (typeof(TChannel) == typeof(IRequestChannel))
        {
            return (TChannel)(object)new CustomRequestChannel(this, address, via);
        }
        else
        {
            throw new InvalidOperationException("Can not create channel");
        }
    }

In the next part we will take a look at how the channels actually work.

Have fun,


Wednesday, September 9, 2009

Entity Framework and Plain Old CSharp Objects


This will be a new feature in .NET 4.0, but for now we can already get a glimpse of it with this adapter written by one of the EF developers. To use POCOs you still need to write the metadata file(s) that represent the storage schema, the conceptual schema and the mappings between them. In .NET 4.0 it will also be possible to use only code to create the metadata (whether this is good or bad is out of scope :) ).

When we create an EF model for a given database, the developer tools create an EDMX file and use a code generator to output code for what is modeled in it, including the business entities. This is really cool for the quick demos we see everywhere on the Internet, but in large applications things tend to get complicated. The idea of a domain model is that you create a rich object model that represents the domain and is not tied to any technical or architectural detail of the implementation. It is as close as objects can be to their real-world equivalents. But the original implementation of EF led to domain models that were tied and coupled to the details of object-relational mapping.

One good example I use all the time to illustrate how things can get ugly when ORMs mess with the domain model is the number of times I have had to write the Customer class. There is a Customer class with attributes for one ORM, another that has methods containing mapping code, others derive from a base class that is tied to the ORM, and some ORMs put constraints on how you write the class code, such as forcing members to be virtual. Wouldn’t it be perfect if I could just write the Customer class my own way and tell the ORM how to map it, without having to change my code at all? The EFPocoAdapter is about exactly that.

Inside the EDMX file there are actually three models. Originally they were separate files, and it is still possible to keep them separated; this is the strategy I like the most and the one used in this article. They have the extensions ssdl, csdl and msl: store schema definition language, conceptual schema definition language and mapping specification language, respectively.

In this article I will use SQL Server 2008 R2 and AdventureWorks for SQL Server 2008 as the database. We will map the Product table and count the rows in it using POCOs and LINQ.

Implementing a sample application with EF and POCOs

Our demo application will have the following architecture. We place our business objects inside a business objects assembly with no further dependencies. The code that connects those objects to the Entity Framework is labeled in the diagram as Business Objects Adapter. It is built on top of the EF POCO Adapter, which is in turn built on top of the Entity Framework.

[Figure: Entity Framework with POCO architecture]

If we wanted to divide our domain into more than one assembly, there would be one adapter per business objects assembly.

The first step is writing the business objects for the application and our business object could not be simpler:

    public class Product
    {
        public string Name
        {
            get;
            set;
        }

        public int ProductNumber
        {
            get;
            set;
        }
    }

The next step is to write the EF metadata that describes this object, the storage table that is associated with it, and how they map to each other. Starting with the conceptual model (the Product business object):

    <?xml version="1.0" encoding="utf-8"?>
    <Schema Namespace="AdventureWorks" Alias="Self" xmlns=""
            xmlns:objectmapping=""
            >
      <EntityContainer Name="AdventureWorksEntities">
        <EntitySet Name="Products" EntityType="AdventureWorks.Product" />
      </EntityContainer>

      <EntityType Name="Product">
        <Key>
          <PropertyRef Name="ProductNumber"/>
        </Key>
        <Property Name="ProductNumber" Type="Int32" Nullable="false" />
      </EntityType>

    </Schema>

Next comes the storage model:

    <?xml version="1.0" encoding="utf-8"?>
    <Schema Namespace="AdventureWorks.Store" Alias="Self" xmlns="" Provider="System.Data.SqlClient" ProviderManifestToken="2005">
      <EntityContainer Name="dbo">
        <EntitySet Name="Product" EntityType="AdventureWorks.Store.Product" Schema="Production" />
      </EntityContainer>

      <EntityType Name="Product">
        <Key>
          <PropertyRef Name="ProductID" />
        </Key>
        <Property Name="ProductID" Type="int" Nullable="false" StoreGeneratedPattern="Identity" />
      </EntityType>

    </Schema>

And finally the mapping:

    <?xml version="1.0" encoding="utf-8"?>
    <Mapping Space="C-S" xmlns="urn:schemas-microsoft-com:windows:storage:mapping:CS">
      <EntityContainerMapping
        StorageEntityContainer="dbo"
        CdmEntityContainer="AdventureWorksEntities">
        <EntitySetMapping Name="Products">
          <EntityTypeMapping TypeName="AdventureWorks.Product">
            <MappingFragment StoreEntitySet="Product">
              <ScalarProperty Name="ProductNumber" ColumnName="ProductID"/>
            </MappingFragment>
          </EntityTypeMapping>
        </EntitySetMapping>
      </EntityContainerMapping>
    </Mapping>

There are subtle differences between the C# object Product and the Product table. For instance, I decided to name the ProductID column ProductNumber in the conceptual model. The Entity Framework supports much more than this simple column mapping, but I am trying to build a working demo that is as simple as possible to help with getting started.
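The rename is transparent to client code. As a sketch, using the generated AdventureWorksEntities container, a query is written against the conceptual property name and EF translates it to the column name in the SQL it generates:

```csharp
using System;
using System.Linq;

// Sketch: queries use the conceptual property name (ProductNumber);
// EF rewrites it to the ProductID column in the generated SQL.
class RenameSketch
{
    static void Main()
    {
        using (var ent = new AdventureWorksEntities())
        {
            var lowNumbers = ent.Products
                .Where(p => p.ProductNumber < 10)
                .Select(p => p.ProductNumber)
                .ToList();

            Console.WriteLine("{0} products with a number below 10.", lowNumbers.Count);
        }
    }
}
```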

The EFPocoAdapter contains a small command-line code generator that outputs the adapter files for us. I made a small batch file to run it, in order to use relative file paths: in my source tree, integrating the commands in the pre-build event as suggested by the EFPocoAdapter would generate a command that is too long for the command processor. (One thing I can’t understand is why we still have limits on paths and commands.)

    SET CLASSGEN=C:\prj\source\_ExternalReferences\EFPocoAdapter\EFPocoClassGen\bin\Debug\EFPocoClassGen.exe

    %CLASSGEN% /verbose "/incsdl:..\AdventureWorks.csdl" ^
        "/ref:..\Pedrosal.BusinessObjects\bin\Debug\Pedrosal.BusinessObjects.dll" ^
        "/outputfile:PocoAdapter.cs" /map:AdventureWorks=Pedrosal.BusinessObjects

    %CLASSGEN% /verbose "/incsdl:..\AdventureWorks.csdl" ^
        "/ref:..\Pedrosal.BusinessObjects\bin\Debug\Pedrosal.BusinessObjects.dll" ^
        "/outputfile:AdventureWorksEntities.cs" /map:AdventureWorks=Pedrosal.BusinessObjects ^
        /mode:PocoContainer

    PAUSE

Finally the test client code:

    class Program
    {
        static void Main(string[] args)
        {
            using (AdventureWorksEntities ent = new AdventureWorksEntities())
            {
                // test 1 - count the products
                var productCount = ent.Products.Count();
                Console.WriteLine("You have {0} products in the database.", productCount);
            }
        }
    }

The connection to the database works because the generated code instructs the Entity Framework to read the connection named AdventureWorksEntities from the App.config file. In that connection string we must tell EF where the metadata files are, and the easiest way is to have them copied to the output path.

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <connectionStrings>
        <add name="AdventureWorksEntities" connectionString="metadata=AdventureWorks.csdl|AdventureWorks.ssdl|AdventureWorks.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.;Initial Catalog=AdventureWorks2008;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" />
      </connectionStrings>
    </configuration>

Download sample here.

Have fun,