Thursday, May 14, 2009

Enterprise Library 4.1 Configuration – Part 1 Cache

Today I had to put together a small snippet showing how to configure Enterprise Library without going through the application XML configuration system. This gave me the idea of creating a new configuration source that works in memory, without the need for XML files.

The component described in this article implements convention over configuration for Enterprise Library.

Convention over Configuration implementation

First I created a static class to hold the convention values, that is, the values used for the configuration options when nothing else is specified:

    /// <summary>
    /// Holds the default values for assembling Enterprise Library components.
    /// </summary>
    public static class EntLibConvention
    {
        /// <summary>
        /// Initializes static members of the <see cref="EntLibConvention"/> class.
        /// </summary>
        static EntLibConvention()
        {
            CacheManagerType = typeof(Microsoft.Practices.EnterpriseLibrary.Caching.CacheManager);
            CacheBackingStoreType = typeof(Microsoft.Practices.EnterpriseLibrary.Caching.BackingStoreImplementations.NullBackingStore);
            // ... other settings
        }

        /// <summary>
        /// Gets or sets the <see cref="System.Type"/> for the class implementing the cache manager.
        /// </summary>
        public static Type CacheManagerType { get; set; }

        /// <summary>
        /// Gets or sets the <see cref="System.Type"/> for the class implementing the cache backing store.
        /// </summary>
        public static Type CacheBackingStoreType { get; set; }
    }

In this initial version one must override the values in the convention class before requesting objects from the factory. To get a first quick release out, I have added improving this to the backlog.
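For example, to replace the default null backing store before the first manager is requested, one could write something like this (a sketch assuming the Factories class described below; IsolatedStorageBackingStore is one of the stores shipped with the Caching Application Block):

```csharp
using System;
using Microsoft.Practices.EnterpriseLibrary.Caching.BackingStoreImplementations;

// Override a convention value before the factory hands out any objects.
EntLibConvention.CacheBackingStoreType = typeof(IsolatedStorageBackingStore);

// From this point on, managers built by the factory use the new store.
var cacheManager = Factories.CacheFactory.Default;
```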

Using the extensions

To get objects, use another static class named Factories. This class has one property per supported block, holding the concrete factory. For now, only the Caching Application Block is supported.

I think a good way to show how to use a library is with some short unit tests. So, to get the default manager, write something like this:

    [TestMethod]
    public void TestAddItemToCache()
    {
        var cacheManager = Factories.CacheFactory.Default;
        Assert.IsNotNull(cacheManager);

        cacheManager.Add("obj1", "object one");

        cacheManager = Factories.CacheFactory.Default;

        var obj = (string)cacheManager.GetData("obj1");
        Assert.AreEqual(obj, "object one");
    }

And if one wants a named manager, so that different instances can be used to keep information separate:

    [TestMethod]
    public void TestAddItemToNamedCache()
    {
        Factories.CacheFactory.AddNamedConfiguration("myCache");

        var cacheManager = Factories.CacheFactory.GetNamedManager("myCache");
        Assert.IsNotNull(cacheManager);

        cacheManager.Add("obj1", "object one");

        cacheManager = Factories.CacheFactory.GetNamedManager("myCache");

        var obj = (string)cacheManager.GetData("obj1");
        Assert.AreEqual(obj, "object one");
    }

I plan to add support for the remaining blocks in the near future. The library is distributed under the LGPL. I am thinking of strong-naming the assembly in the next release, but first I need a way to keep my key file out of the source release :).

Ready to try it? Get it:

Source Code

Binaries

Tuesday, May 5, 2009

Using WCF to transfer large data (files)

I’m in the process of writing a component that will transfer files using WCF. I’ve googled the problem and found two approaches:

  1. Use WCF streaming and pass the files in as streams. This limits the features that can be enabled: only transport security is possible, and I can’t use reliable messaging or a full WS-Security stack for B2B.
  2. Partition the data into chunks and transfer those chunks as standard data contracts. With reliable messaging this approach works pretty well, but it will not interoperate as easily.
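For the second approach, the chunks can be modeled as an ordinary data contract. A sketch of what such a contract could look like (the names are my own, not taken from any sample):

```csharp
using System.Runtime.Serialization;

// Hypothetical chunk contract: the file is sent as a sequence of these
// messages and reassembled on the receiving side using the offset.
[DataContract]
public class FileChunk
{
    [DataMember]
    public string FileName { get; set; }

    [DataMember]
    public long Offset { get; set; }      // position of this chunk in the file

    [DataMember]
    public byte[] Data { get; set; }

    [DataMember]
    public bool IsLastChunk { get; set; }
}
```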

The samples I came up with are quite basic and far from a safe path for developing a component for real scenarios. I have to choose between a straightforward design using streams and the real-life requirements for security and reliability. It’s not an easy choice.

Let’s walk through some of the things I learned in this spike.

Using streams at the server side

A service contract for streaming must accept or return a stream. On input, any parameter other than the stream is placed in the headers. If you use message contracts, only the stream member can be decorated with MessageBodyMemberAttribute.

An operation to return a stream for downloading should look something like this:

    [OperationContract]
    Stream GetStream(string streamName);
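With message contracts, the download response could be shaped like the following sketch (the contract name is my own assumption; the Content member corresponds to the stream the client reads later in this post):

```csharp
using System.IO;
using System.ServiceModel;

// Only the stream can be a body member; everything else travels in headers.
[MessageContract]
public class FileDownloadResponse
{
    [MessageHeader]
    public string FileName { get; set; }

    [MessageBodyMember]
    public Stream Content { get; set; }
}
```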

When setting up the bindings for this scenario, some care must be taken to configure the message limits properly:

    var httpBinding = new BasicHttpBinding();
    httpBinding.MessageEncoding = WSMessageEncoding.Mtom;
    httpBinding.MaxReceivedMessageSize = int.MaxValue;
    httpBinding.TransferMode = TransferMode.Streamed;
    httpBinding.SendTimeout = new TimeSpan(0, 10, 0);

When using HTTP-based bindings, two encodings are possible: Text or MTOM. I prefer MTOM because it is a good path to interoperability and, for big messages, it reduces the message size.

MaxReceivedMessageSize must be set to the maximum possible stream length. int.MaxValue is a good choice if you can have really large files (up to 2 GB).

When transferring large files, TransferMode.Streamed is the right choice. Buffered mode would load the whole stream into memory and cause the server either to die or to suffer a major performance impact.

The final step is to increase the SendTimeout on the server; otherwise the channel would close before the operation completes.
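The same binding can of course be expressed in the configuration file instead of code. A sketch of the equivalent fragment (the binding name is my own assumption):

```xml
<basicHttpBinding>
  <binding name="streamedMtomBinding"
           messageEncoding="Mtom"
           maxReceivedMessageSize="2147483647"
           transferMode="Streamed"
           sendTimeout="00:10:00" />
</basicHttpBinding>
```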

The stream should simply be opened and returned:

    var filename = request.FileName;
    Console.WriteLine("Sending file {0}.", filename);
    var file = File.Open(filename, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
    return file;

Using streams at the client side

On the client side, setting up the binding is almost the same as on the server side:

    var httpBinding = new BasicHttpBinding();
    httpBinding.MessageEncoding = WSMessageEncoding.Mtom;
    httpBinding.MaxReceivedMessageSize = int.MaxValue;
    httpBinding.TransferMode = TransferMode.Streamed;

The read operation should be done in chunks, like this:

    using (var destinationStream = File.Create(destination))
    {
        const int chunkSize = 4096;
        byte[] buffer = new byte[chunkSize];
        int count = 0;
        while ((count = fileStream.Content.Read(buffer, 0, chunkSize)) > 0)
        {
            destinationStream.Write(buffer, 0, count);
        }

        fileStream.Content.Close();
        destinationStream.Close();
    }

Using streams results

Using this approach I have successfully transferred a 1 GB file over the wire. If the files are on the MB scale and reliability is not a must, this mode seems to be a good choice. The test files were composed of randomly generated bytes.
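For reference, generating such a random test file takes just a few lines (my own snippet, not the utility shipped with the sample):

```csharp
using System;
using System.IO;

// Write 1 GB of random bytes to a file, 4 KB at a time.
var random = new Random();
var buffer = new byte[4096];
using (var stream = File.Create("random.bin"))
{
    for (long written = 0; written < 1L << 30; written += buffer.Length)
    {
        random.NextBytes(buffer);
        stream.Write(buffer, 0, buffer.Length);
    }
}
```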

Sample

You can find a working sample that also includes a small utility to generate the random files here:

Mtom Server

I will also publish a second part about chunking soon.