Designing Clean Object Models with SvcUtil.exe 

There are a couple of situations where one might prefer running svcutil.exe from the command prompt over using Add Service Reference in Visual Studio.

At least a couple.

I don't know who decided (or why) not to support namespace-less proxy generation in Visual Studio. The fact is, when you add a service reference, you have to specify a nested namespace.

On the other hand, there is an option to make the generated proxy classes either public or internal, which lets you hide them from the outside world.

That's a design flaw. There are numerous cases where you would not want to create another [1] namespace, because you are designing an object model that needs to stay clean.

[1] This is only limited to local type system namespace declarations - it does not matter what you are sending over the wire.

Consider the following namespace and class hierarchy that uses a service proxy (MyNamespace.Service class uses it):

Suppose that the Service class uses a WCF service which synchronizes articles and comments with another system. If you use Add Service Reference, there is no way to hide the generated service namespace. One would probably define MyNamespace.ServiceProxy as the proxy namespace and declare a using directive in the MyNamespace.Service scope:

using MyNamespace.ServiceProxy;

This results in a dirty object model: your clients can see your internals, and that shouldn't even be an option.

Your class hierarchy now has another namespace, MyNamespace.ServiceProxy, which is actually used only inside the MyNamespace.Service class.

What one can do is insist that the generated classes are marked internal, hiding them from the outside world but still leaving an empty namespace visible to referencing assemblies.

Leaving only the namespace visible, with no public classes in it, is pure cowardice.

Not good.

Since there is no internal namespace modifier in .NET, you have no way of hiding the namespace, even if you have the Add Service Reference dialog generate internal classes.

So, svcutil.exe to the rescue:

svcutil.exe <metadata endpoint>
   /internal
   /namespace:*,MyNamespace

The /namespace option allows you to map XML namespaces to CLR namespace declarations. The preceding example maps all (hence the *) XML namespaces in the service metadata to the MyNamespace CLR namespace.

The example will generate a service proxy class, mark it internal and put it into the desired namespace.

Your object model will look like this:

Currently, this is the only way to hide the internals of service communication behind a proxy while still keeping your object model clean.

Note that this is useful when you want to wrap an existing service proxy or hide an already supported service implementation detail in your object model. Since your client code does not need access to the complete service proxy (or you want to enhance it), it shouldn't be a problem to hide it completely.

Considering the options available during proxy generation, per-method access modifiers would be beneficial too.

Categories:  .NET 3.5 - WCF | Web Services
Friday, April 10, 2009 10:06:43 PM (Central Europe Standard Time, UTC+01:00)  #    Comments


 Accreditus: Gama System eArchive 

One of our core products, Gama System eArchive, was accredited last week.

This is the first accreditation of a domestic product and the first one covering long-term electronic document storage in a SOA-based system.

Every document stored inside the Gama System eArchive product is now legally legal. No questions asked.

Accreditation is done by a national body and represents the last step in a formal acknowledgement to holiness.

That means a lot to me, even more to our company.

The following blog entries were (in)directly inspired by the development of this product:

We put a lot of effort into getting this thing developed and accredited. The certificate is here.

This, this, this, this, this, this, this, this, this and those are direct approvals of our correct decisions.

Categories:  .NET 3.0 - General | .NET 3.0 - WCF | .NET 3.5 - WCF | Other | Personal | Work
Saturday, July 5, 2008 1:18:06 PM (Central Europe Standard Time, UTC+01:00)  #    Comments


 Demos from the NT Conference 2008 

As promised, here are the sources from my NTK 2008 sessions [1].

Talk: Document Style Service Interfaces

Read the following blog entry, where I tried to describe the concept in detail. Also, this blog post discusses issues when using large document parameters with reliable transport (WS-RM) channels.

Demo: Document Style Service Interfaces [Download]

This demo defines a service interface with the document parameter model, i.e. Guid CreatePerson(XmlDocument person). It shows three different approaches to creating the passed document:

  1. Raw XML creation
  2. XML Serialization of the (attribute annotated) object graph
  3. XML Serialization using the client object model

Also, versioned schemas for the Person document are shown, including the support for document validation and version independence.

Talk: Windows Server 2008 and Transactional NTFS

This blog entry describes the concept.

Demo 1: Logging using CLFS (Common Log File System) [Download]
Demo 2: NTFS Transactions using the File System, SQL, WCF [Download]
Demo 3: NTFS Transactions using the WCF, MTOM Transport [Download] [2]

[1] All sources are in VS 2008 solution file format.
[2] This transaction spans from the client, through the service boundary, to the server.

Categories:  .NET 3.5 - WCF | Transactions | XML
Thursday, May 15, 2008 4:24:19 PM (Central Europe Standard Time, UTC+01:00)  #    Comments


 WCF: Reliable Messaging and Retry Timeouts 

There is a serious limitation in the RTM version of WCF 3.0/3.5 regarding control of WS-RM retry messages during a reliable session.

Let me try to explain the concept.

We have a sender (the communication initiator) and a receiver (the service). When a reliable session is established between the two, every message must reach the other side. In a request-reply world, the sender acts as the client during the request phase; the roles then switch during the response phase.

The problem arises when one of the sides does not receive a message acknowledgement in time. The WCF reliable messaging implementation then retries the send and hopes for the acknowledgement. All is well so far.

The problem is that there is no way for the sending application to specify how long the retry timeout should be. There are ways to specify channel opening and closing timeouts, the acknowledgement interval and more, but nothing defines how long the initiator should wait for message acks.

Let's talk about how WCF acknowledges messages.

During a request-reply exchange every request message is acknowledged in a response message. WS-RM SOAP headers regarding sequence definition (request) and acknowledgements (response) look like this:

a1 <r:Sequence s:mustUnderstand="1" u:Id="_2">
a2    <r:Identifier>urn:uuid:6c9d...ca90</r:Identifier>
a3    <r:MessageNumber>1</r:MessageNumber>
a4 </r:Sequence>

b1 <r:SequenceAcknowledgement u:Id="_3">
b2    <r:Identifier>urn:uuid:6c99...ca290</r:Identifier>
b3    <r:AcknowledgementRange Lower="1" Upper="1"/>
b4    <netrm:BufferRemaining
b5       xmlns:netrm="">
b6    </netrm:BufferRemaining>
b7 </r:SequenceAcknowledgement>

The request phase defines a sequence and sends the first message (a3). The response carries the appropriate acknowledgement, which acks the first message (b3) with the Lower and Upper attributes. Lines b4-b6 define a benign and super useful WCF-specific implementation of flow control, which allows the sender to limit the rate of sent messages if the service side becomes congested.

When the session is set up, WCF starts with a really small waiting window for acks. If an ack is not received within this period, the infrastructure retries the message.

Duplex contracts work slightly differently. There, the acknowledgement interval can be set. This configuration option (the config attribute is called acknowledgementInterval) is named somewhat misleadingly, since it controls the service side, not the client side.

It does not define the time limit on received acknowledgements, but the time allowed for sending acknowledgements back. It enables grouping of sent acks, so that multiple incoming messages can be acked together. Also, the infrastructure will not necessarily honor the specified value.
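For reference, the knobs that do exist live on the reliableSession element of a custom binding. A sketch with illustrative values; note that none of these attributes controls the initiator's retry timeout:

```xml
<customBinding>
  <binding name="ReliableHttp">
    <reliableSession acknowledgementInterval="00:00:00.2"
                     inactivityTimeout="00:10:00"
                     maxRetryCount="8"
                     ordered="true" />
    <httpTransport />
  </binding>
</customBinding>
```

maxRetryCount caps how many times a message is retried before the channel faults, but the interval between retries remains under the infrastructure's control.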

Now consider the following scenario:

  1. The client is on a reliable network
  2. Network bandwidth is so thin that sending the message takes 20s [1]
  3. Service instancing is set to Multiple
  4. The solution uses a request-reply semantics

[1] It does not matter whether the initiator is on a dial up, or the message is huge.

What happens?

Service initiator sets up a reliable session, then:

  1. First message is being sent
  2. Since the retry interval is really small [2], the message will not get to the other end and the acknowledgement will not bounce back in time
  3. First message is retried, now two messages are being transported
  4. No acks received yet
  5. First message is retried again
  6. Network bandwidth is even thinner
  7. First message is acknowledged
  8. First message retry is discarded on the service side as a duplicate
  9. Second message retry is discarded on the service side as a duplicate

[2] Under 3s.

The number of retry messages depends on the bandwidth and the message size. It can happen that tens of copies are sent before the first acknowledgement is received.

Adaptability algorithms

The good news is that there are undocumented adaptive algorithms implemented for the retry timeout. The implementation increases the retry timeout exponentially when the infrastructure detects that network conditions demand more time (slow throughput) yet allow reliable delivery (no losses). If losses are present, the retry timeout decreases.

The retry timeout is actually calculated when establishing an RM session. It is based on the roundtrip time, and is larger when the roundtrip time is long.

So, when the first messages in a session are exchanged, don't be too surprised to see a couple of message retries.

Categories:  .NET 3.0 - WCF | .NET 3.5 - WCF
Tuesday, April 8, 2008 11:33:13 PM (Central Europe Standard Time, UTC+01:00)  #    Comments


 WCF: Passing Collections Through Service Boundaries, Why and How 

In WCF, collection data passed through the service boundary goes through a type filter - meaning you will not necessarily get the original service-side type on the client, even if you're expecting it.

Whether you return an int[] or a List<int>, you will get an int[] by default on the client.

The main reason is that there is no representation for System.Collections.Generic.List or System.Collections.Generic.LinkedList in service metadata. The concept of System.Collections.Generic.List<int>, for example, has no different semantic meaning from an integer array - it's still a list of ints - but it allows you to program against it with ease.

Though, if one asks nicely, it is possible to guarantee the preferred collection type on the client proxy in certain scenarios.

One-dimensional collections, like List<T> or LinkedList<T>, are always exposed as T arrays in the client proxy. Dictionary<K, V>, though, is regenerated on the client via an annotation hint in the WSDL (XSD, to be precise). More on that later.

Let's look into it.

The WCF infrastructure bends over backwards to simplify client development. If the service side returns a serializable collection (marked with [Serializable], not [DataContract]) that is concrete (not an interface) and has an Add method with one of the following signatures...

public void Add(object obj);
public void Add(T item);

... then WCF will serialize the data as an array of the collection's item type.

Too complicated? Consider the following:

interface ICollect
{
   void AddCoin(Coin coin);

   List<Coin> GetCoins();
}

Since List<T> is marked with [Serializable] and supports a void Add(T item) method, the following wire representation will be passed to the client:

interface ICollect
{
  void AddCoin(Coin coin);

  Coin[] GetCoins();
}

Note: the Coin class should be marked with either [DataContract] or [Serializable] in this case.

So what happens if one wants the same contract on the client proxy as on the service? There is an option in the WCF proxy generator, svcutil.exe, to force generation of class definitions with a specific collection type.

Use the following for List<T>:

svcutil.exe http://service/metadata/address
   /collectionType:System.Collections.Generic.List`1

Note: List`1 uses a backquote, not a regular single quote character.

What /collectionType (short form: /ct) does is force the generation of strongly typed collection types. It will generate the holy grail on the client:

interface ICollect
{
  void AddCoin(Coin coin);

  List<Coin> GetCoins();
}

In Visual Studio 2008, you will even have an option to specify which types you want to use as collection types and dictionary collection types, as in the following picture:

On the other hand, dictionary collections, as in System.Collections.Generic.Dictionary<K, V>, will go through to the client no matter what you specify as the /ct parameter (or whether you specify it at all).

If you define the following on the service side...

Dictionary<string, int> GetFoo();

... this will get generated on the client:

Dictionary<string, int> GetFoo();


Why? Because using System.Collections.Generic.Dictionary probably means you know there is no guarantee that a client-side representation will be possible if the client is on an alternative platform. There is no way to meaningfully convey the semantics of a .NET dictionary class using WSDL/XSD.

So, how does the client know?

In fact, the values are serialized as joined name/value pair elements, as the following schema shows:

<xs:complexType name="ArrayOfKeyValueOfstringint">
  <xs:sequence>
    <xs:element minOccurs="0" maxOccurs="unbounded" name="KeyValueOfstringint">
      <xs:complexType>
        <xs:sequence>
          <xs:element name="Key" nillable="true" type="xs:string" />
          <xs:element name="Value" type="xs:int" />
        </xs:sequence>
      </xs:complexType>
    </xs:element>
  </xs:sequence>
</xs:complexType>
<xs:element name="ArrayOfKeyValueOfstringint"
  nillable="true" type="tns:ArrayOfKeyValueOfstringint" />

Note: You can find this schema under the types definition of the metadata endpoint. Usually ?xsd=xsd2 instead of ?wsdl will suffice.

As in:

<xs:complexType name="ArrayOfKeyValueOfstringint">
  <xs:annotation>
    <xs:appinfo>
      <IsDictionary xmlns="http://schemas.microsoft.com/2003/10/Serialization/">
        true
      </IsDictionary>
    </xs:appinfo>
  </xs:annotation>
  <!-- ... -->
</xs:complexType>
The meaningful part of the type's service-to-client transportation resides in the <xs:annotation> element, specifically in /xs:annotation/xs:appinfo/IsDictionary, which declares that this complex type represents a System.Collections.Generic.Dictionary class. Annotation elements in XML Schema are parser specific and do not convey any structure or data type semantics; they are there for the receiver to interpret.

This must be one of the best textbook cases of using XML Schema annotations. It allows a well-informed client (such as a .NET client, VS 2008 or svcutil.exe) to use the semantic meaning if it understands it. If not, no harm is done, since the best possible representation - joined name/value pairs - still goes through to the client.
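To see this wire format directly, here is a small standalone sketch using DataContractSerializer, the serializer WCF uses for data contracts; the dictionary contents are illustrative:

```csharp
// Sketch: what DataContractSerializer puts on the wire for a Dictionary<string, int>.
using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization;
using System.Text;

class DictionaryWireFormat
{
    public static string Serialize(Dictionary<string, int> data)
    {
        var serializer = new DataContractSerializer(typeof(Dictionary<string, int>));
        using (var ms = new MemoryStream())
        {
            serializer.WriteObject(ms, data);
            return Encoding.UTF8.GetString(ms.ToArray());
        }
    }

    static void Main()
    {
        // The root element is ArrayOfKeyValueOfstringint; each entry is a
        // KeyValueOfstringint element with Key and Value children.
        Console.WriteLine(Serialize(new Dictionary<string, int> { { "foo", 42 } }));
    }
}
```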

Categories:  .NET 3.0 - WCF | .NET 3.5 - WCF | Web Services
Thursday, September 27, 2007 10:04:47 PM (Central Europe Standard Time, UTC+01:00)  #    Comments


 Approaches to Document Style Parameter Models 

I'm a huge fan of document style parameter models when implementing a public, programmatic façade to a business functionality that often changes.

public interface IDocumentParameterModel
{
   XmlDocument Process(XmlDocument doc);
}

This contract defines a single method, Process, which processes the input document. The idea is to define the document schema and validate inbound XML documents against it, throwing exceptions on validation errors. The processing semantics are arbitrary and can support any kind of action, depending on the defined invoke document schema.

A simple instance document which validates against a version 1.0 processing schema could look like this:

<?xml version="1.0"?>
<Process xmlns="" version="1.0">
   <!-- ... -->
</Process>

Another processing instruction, supported in version 1.1 of the processing schema, with different semantics could be:

<?xml version="1.0"?>
<Process xmlns="" version="1.1">
   <!-- ... -->
</Process>

Note that the default XML namespace changed, but that is not the norm. It only allows you to automate schema retrieval using a schema repository (think System.Xml.Schema.XmlSchemaSet), load all supported schemas and validate automatically.

public class ProcessService : IDocumentParameterModel
{
   public XmlDocument Process(XmlDocument doc)
   {
      XmlReaderSettings sett = new XmlReaderSettings();

      sett.Schemas.Add(<document namespace 1>, <schema uri 1>);
      sett.Schemas.Add(<document namespace n>, <schema uri n>);

      sett.ValidationType = ValidationType.Schema;
      sett.ValidationEventHandler +=
         new ValidationEventHandler(XmlInvalidHandler);

      XmlReader books = XmlReader.Create(
         new StringReader(doc.OuterXml), sett);
      while (books.Read()) { }

      // processing goes here

      return doc;
   }

   static void XmlInvalidHandler(object sender, ValidationEventArgs e)
   {
      if (e.Severity == XmlSeverityType.Error)
         throw new XmlInvalidException(e.Message);
   }
}

The main benefit of this approach is decoupling the parameter model, and with it the processing version, from the communication contract. A service maintainer has the option to change the processing terms over time, while still supporting older version-aware document instances.

This notion is, of course, most beneficial in situations where your processing syntax changes frequently and has complex validation schemas. The simple case presented here is informational only.

So, how do we validate?

  • Check the instance document version first. This is especially important in cases where the document is not qualified with a different namespace when the version changes.
  • Grab the appropriate schema or schema set
  • Validate the inbound XML document, throwing a typed XmlInvalidException if it is invalid
  • Process the call
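The first two steps can be sketched like this; the schema repository, version values and instance document here are illustrative, not from an actual product:

```csharp
// Sketch: resolving a schema set from the instance document's version attribute.
using System;
using System.Collections.Generic;
using System.Xml;
using System.Xml.Schema;

class SchemaRepository
{
    // one schema set per supported processing version (schemas would be loaded here)
    static readonly Dictionary<string, XmlSchemaSet> schemas =
        new Dictionary<string, XmlSchemaSet>
        {
            { "1.0", new XmlSchemaSet() },
            { "1.1", new XmlSchemaSet() }
        };

    public static XmlSchemaSet Resolve(XmlDocument doc)
    {
        // step 1: check the instance document version
        string version = doc.DocumentElement.GetAttribute("version");

        // step 2: grab the appropriate schema set
        XmlSchemaSet set;
        if (!schemas.TryGetValue(version, out set))
            throw new NotSupportedException("Unsupported version: " + version);
        return set;
    }

    static void Main()
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml("<Process xmlns='' version='1.1' />");
        Console.WriteLine(Resolve(doc) != null);   // resolves the 1.1 schema set
    }
}
```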

The service side is quite straightforward.

Let's look at the client and the options for painless generation of service calls using this mechanism.

Generally, one can always produce an instance invoke document by hand on the client - by hand meaning with the System.Xml classes and DOM concepts. Since this is highly error prone and gets tedious with increasing complexity, there is the notion of a schema compiler, which automatically translates your XML Schema into the CLR type system. Xsd.exe and XmlSerializer are your friends.
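A minimal sketch of the schema-compiler option; the ProcessRequest type stands in for whatever xsd.exe would generate from the real schema:

```csharp
// Sketch: building the invoke document by serializing an annotated type.
using System;
using System.IO;
using System.Xml;
using System.Xml.Serialization;

public class ProcessRequest   // xsd.exe would generate this from the schema
{
    [XmlAttribute("version")]
    public string Version = "1.0";

    public string Instruction;
}

class Client
{
    public static XmlDocument BuildInvokeDocument(ProcessRequest request)
    {
        var serializer = new XmlSerializer(typeof(ProcessRequest));
        var doc = new XmlDocument();
        using (var sw = new StringWriter())
        {
            serializer.Serialize(sw, request);
            doc.LoadXml(sw.ToString());
        }
        return doc;
    }

    static void Main()
    {
        XmlDocument doc = BuildInvokeDocument(
            new ProcessRequest { Instruction = "Add" });
        Console.WriteLine(doc.DocumentElement.OuterXml);
    }
}
```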

If your schema requires parts of the instance document to be digitally signed or encrypted, you will need to adorn the serializer output with some manual DOM work. This might also be a reason to use the third option.

The third, and easiest, option for the general developer is to provide a local object model which serializes the requests on the client. For example:

ProcessInstruction pi = new ProcessInstruction();
pi.Instruction = "Add";
pi.Parameter1 = 10;
pi.Parameter2 = 21;
pi.Sign(cert); // pi.Encrypt(cert);

The main benefit of this approach comes down to having options on both the server and the client. Client developers get three different levels of complexity for generating service calls. The model allows them to be as close to the wire as they see fit - or to be abstracted completely from the wire representation, if you provide a local object model to access your services.

Categories:  .NET 3.0 - WCF | .NET 3.5 - WCF | Architecture | Web Services | XML
Monday, September 24, 2007 11:19:10 AM (Central Europe Standard Time, UTC+01:00)  #    Comments


 Managed TxF: Distributed Transactions and Transactional NTFS 

Following up on my previous post, I managed to get a distributed transaction scenario working using WCF, MTOM and WS-AtomicTransaction.

This means you have the option to transport arbitrary files, with transactional ACID semantics, from the client, over HTTP and MTOM.

The idea is to integrate a distributed transaction with TxF, or NTFS file system transaction. This only works on Windows Server 2008 (Longhorn Server) and Windows Vista.

Download: Sample code

If the client starts a transaction, all files within it are stored on the server; if something fails or the client does not commit, no harm is done. The beauty of this is that it's all seamlessly integrated into the current communication/OS stack.

This is shipping technology; you just have to dive a little deeper to use it.

Here's the scenario:

There are a couple of issues that need to be addressed before we move to the implementation:

  • You should use the managed wrapper included here
    There is support for TransactedFile and TransactedDirectory built in. The next version of the VistaBridge samples will include an updated version of this wrapper.

  • Limited distributed transactions support on a system drive
    There is no way to give DTC the superior coordinator role for TxF on the system drive (think c:\). This is a major downside of the current TxF implementation, since I would prefer that system/boot files be transaction-locked anyway. You have two options if you want to run the following sample:

    • Define secondary resource manager for your directory
      This allows system drive resource manager to still protect system files, but creates a secondary resource manager for the specified directory. Do this:
      • fsutil resource create c:\txf
      • fsutil resource start c:\txf
        You can query your new secondary resource manager by fsutil resource info c:\txf.

    • Use another partition
      Any partition outside the system partition is OK. You cannot use network shares, but USB keys will work. Plug one in and change the paths as described at the end of this post.

OK, here we go.

Here's the service contract:

[ServiceContract(SessionMode = SessionMode.Allowed)]
interface ITransportFiles
{
   [OperationContract]
   [TransactionFlow(TransactionFlowOption.Allowed)]
   byte[] GetFile(string name);

   [OperationContract]
   [TransactionFlow(TransactionFlowOption.Allowed)]
   void PutFile(byte[] data, string name);
}

We allow a sessionful binding (it's not required, though) and allow transactions to flow from the client side. Again, transactions are not mandatory; the client may opt out of using them and just transport files without a transaction.

The provided transport mechanism uses MTOM, since the contract's parameter model is appropriate for it and since MTOM is much more efficient at transferring binary data.

So here's the service config:

<bindings>
  <wsHttpBinding>
    <binding name="MTOMBinding"
             messageEncoding="Mtom"
             transactionFlow="true"
             maxReceivedMessageSize="10485760">
      <readerQuotas maxArrayLength="10485760"/>
    </binding>
  </wsHttpBinding>
</bindings>

<services>
  <service name="WCFService.TransportService">
    <host>
      <baseAddresses>
        <add baseAddress="..." />
      </baseAddresses>
    </host>
    <endpoint address=""
              binding="wsHttpBinding"
              bindingConfiguration="MTOMBinding"
              contract="..." />
  </service>
</services>

Here, MTOMBinding specifies MTOM wire encoding. Also, the reader quotas and the maxReceivedMessageSize attribute are raised to 10 MB, since we are probably transferring larger binary files.

The service implementation is straightforward:

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
class TransportService : ITransportFiles
{
    [OperationBehavior(TransactionScopeRequired = true)]
    public byte[] GetFile(string name)
    {
        Console.WriteLine("GetFile: {0}", name);
        Console.WriteLine("Distributed Tx ID: {0}",
            Transaction.Current.TransactionInformation.DistributedIdentifier);

        return ReadFully(TransactedFile.Open(@"C:\TxF\Service\" + name,
          FileMode.Open, FileAccess.Read, FileShare.Read), 0);
    }

    [OperationBehavior(TransactionScopeRequired = true)]
    public void PutFile(byte[] data, string name)
    {
        Console.WriteLine("PutFile: {0}", name);
        Console.WriteLine("Distributed Tx ID: {0}",
            Transaction.Current.TransactionInformation.DistributedIdentifier);

        using (BinaryWriter bw = new BinaryWriter(
            TransactedFile.Open(@"C:\TxF\Service\" + name,
              FileMode.Create, FileAccess.Write, FileShare.Write)))
        {
            bw.Write(data, 0, data.Length);
        } // disposing the writer cleans up
    }
}

The client does four things:

  1. Sends three files (client - server) - no transaction
  2. Gets three files (server - client) - no transaction
  3. Sends three files (client - server) - distributed transaction, all or nothing
  4. Gets three files (server - client) - distributed transaction, all or nothing
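The transactional runs (steps 3 and 4) come down to wrapping the proxy calls in a TransactionScope so the WS-AtomicTransaction flows to the service. A sketch, assuming a ChannelFactory-based proxy over the ITransportFiles contract above; the endpoint address and file names are illustrative:

```csharp
using System;
using System.IO;
using System.ServiceModel;
using System.Transactions;

class TxClient
{
    static void Main()
    {
        var factory = new ChannelFactory<ITransportFiles>(
            new WSHttpBinding { TransactionFlow = true },
            "http://localhost:8000/TransportService");   // illustrative address
        ITransportFiles proxy = factory.CreateChannel();

        using (TransactionScope scope = new TransactionScope())
        {
            // all three uploads commit or roll back together with the
            // service-side TxF writes
            proxy.PutFile(File.ReadAllBytes(@"C:\TxF\Client\a.txt"), "a.txt");
            proxy.PutFile(File.ReadAllBytes(@"C:\TxF\Client\b.txt"), "b.txt");
            proxy.PutFile(File.ReadAllBytes(@"C:\TxF\Client\c.txt"), "c.txt");

            scope.Complete(); // without this, nothing is persisted on either side
        }
    }
}
```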

Before you run:

  • Decide on the secondary resource manager option (system drive, enable it using fsutil.exe) or use another partition (USB key)
  • Change the paths to match your scenario. The sample uses C:\TxF, C:\TxF\Service and C:\TxF\Client and a secondary resource manager. Create these directories before running the sample.

Download: Sample code

This sample is provided without any warranty. It's a sample, so don't use it in production environments.

Monday, July 23, 2007 9:54:13 PM (Central Europe Standard Time, UTC+01:00)  #    Comments


 Out the Door: WS-ReliableMessaging 1.1 

WS-RM 1.1 is finished. Good times™.

OASIS published two specs:

WCF, as it turns out, will support WS-RM 1.1 in Orcas. On that note, there is a new CTP out this week.

Categories:  .NET 3.5 - WCF | Microsoft | Web Services
Tuesday, July 3, 2007 3:58:29 PM (Central Europe Standard Time, UTC+01:00)  #    Comments


 Durable Messaging Continues 

Nick is continuing my discussion on durable messaging support in modern WS-* based stacks, especially WCF.

I agree that having simple channel configuration support to direct messages into a permanent store (like SQL) would be beneficial.

This simple idea, which begs for an alternative implementation of a WCF extensibility point, raises some questions:

  • What happens when messages are (or should be) exchanged in terms of a transaction context?
  • How can we demand transaction support from the underlying data store if we don't want to put limitations on anything residing behind the service boundary?
  • What about security contexts? How can we acknowledge a secure, durable, sessionful channel after two weeks of service hibernation downtime? Should security keys still be valid just because the service was down and not responding all this time?
  • Is durability limited to client/service recycling or non-memory message storage? What about both?

Is [DurableService] enough?

Wednesday, May 30, 2007 9:46:14 PM (Central Europe Standard Time, UTC+01:00)  #    Comments


 Orcas: POX Service Support 

POX (Plain Old XML) support in Orcas is brilliant.

Consider the following service contract:

[ServiceContract(Namespace = "")]
interface IPOXService
{
   [HttpTransferContract(Path = "Echo", Method = "GET")]
   string Echo(string strString1, ref string strString2, out string strString3);
}

And the following service (or better, Echo method) implementation:

public class POXService : IPOXService
{
   public string Echo(string strString1,
      ref string strString2, out string strString3)
   {
      strString2 = "strString2: " + strString2;
      strString3 = "strString3";
      return "Echo: " + strString1;
   }
}

Host it using this:

ServiceMetadataBehavior smb = new ServiceMetadataBehavior();
smb.HttpGetEnabled = true;

ServiceHost sh = new ServiceHost(typeof(POXService),
   new Uri("http://localhost:666"));
sh.Description.Behaviors.Add(smb);

ServiceEndpoint ep = sh.AddServiceEndpoint(typeof(IPOXService),
   new WebHttpBinding(), "POX");
ep.Behaviors.Add(new HttpTransferEndpointBehavior());

sh.Open();
Console.WriteLine("POX Service Running...");

If you open IE and hit the service endpoint (http://localhost:666/POX/Echo) without URL-encoded parameters, you get the following XML back:


Now, drop some parameters in (http://localhost:666/POX/Echo?strString1=boo&strString2=foo&strString3=bar), this is returned:

<EchoResponse>
   <EchoResult>Echo: boo</EchoResult>
   <strString2>strString2: foo</strString2>
   <strString3>strString3</strString3>
</EchoResponse>

Nice. I especially like the fact that ref and out parameters are serialized alongside the method return value.

Reach in with //EchoResponse/strString2 for the ref parameter and //EchoResponse/strString3 for the out parameter. The return value is available at //EchoResponse/EchoResult. Simple and effective.
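A small standalone sketch of reaching into such a response with those XPath expressions; the document is hardcoded here in place of the live HTTP response:

```csharp
// Sketch: extracting return, ref and out values from a POX echo response.
using System;
using System.Xml;

class PoxResponseReader
{
    static void Main()
    {
        XmlDocument response = new XmlDocument();
        response.LoadXml(
            "<EchoResponse>" +
            "<EchoResult>Echo: boo</EchoResult>" +
            "<strString2>strString2: foo</strString2>" +
            "<strString3>strString3</strString3>" +
            "</EchoResponse>");

        Console.WriteLine(response.SelectSingleNode("//EchoResponse/EchoResult").InnerText);  // return value
        Console.WriteLine(response.SelectSingleNode("//EchoResponse/strString2").InnerText);  // ref parameter
        Console.WriteLine(response.SelectSingleNode("//EchoResponse/strString3").InnerText);  // out parameter
    }
}
```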

Categories:  .NET 3.5 - WCF
Saturday, March 10, 2007 9:55:49 PM (Central Europe Standard Time, UTC+01:00)  #    Comments


Copyright © 2003-2024, Matevž Gačnik

The opinions expressed herein are my own personal opinions and do not represent my company's view in any way.

My views often change.

This blog is just a collection of bytes.
