Happy Birthday XML 

This week XML is ten years old. The core XML 1.0 specification was released in February 1998.

It's a nice anniversary to have.

The XML + Namespaces specification has a built-in namespace declaration, http://www.w3.org/XML/1998/namespace. It is an implicit, special namespace declaration, governing all others. One namespace declaration to rule them all. It is bound to the xml: prefix.
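
This is why, for example, xml:lang can be used in any document without ever being declared - a quick illustration:

<doc>
  <!-- the xml prefix implicitly resolves to http://www.w3.org/XML/1998/namespace -->
  <greeting xml:lang="en">Happy birthday</greeting>
</doc>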

XML was born when it was published as a W3C Recommendation on 10 February 1998.

So, well done, XML. You did a lot for the IT industry in the past decade.

Categories:  XML
Wednesday, 13 February 2008 20:36:09 (Central Europe Standard Time, UTC+01:00)

 

 European Silverlight Challenge 

If you're into Silverlight, and you should be, check out http://www.silverlightchallenge.eu, and especially sign up at http://slovenia.silverlightchallenge.eu to join the many developers who will be participating in this competition.

More here.

Categories:  Other | Silverlight
Friday, 04 January 2008 10:33:22 (Central Europe Standard Time, UTC+01:00)

 

 Mr. Larry Lessig 

Mr. Lawrence Lessig is the founder of the Stanford Center for Internet and Society. He is also the chairman of the Creative Commons organization.

Lessig, a professor at Stanford, is one of the most effective speakers in the world, and he is trying to make this world a better place by exposing the stupidity of current copyright law.

The following is published on the CC's site:

We use private rights to create public goods: creative works set free for certain uses. ... We work to offer creators a best-of-both-worlds way to protect their works while encouraging certain uses of them — to declare “some rights reserved.”

Creative Commons therefore stands for the mantra of some rights reserved, not all rights reserved, when it comes to meaningful use of digital technology.

Being a libertarian myself, I cannot oppose these standpoints. Balance and compromise are good things for content and intellectual products, such as software.

Watch his masterpiece, delivered at TED.

Categories:  Personal
Friday, 09 November 2007 23:59:24 (Central Europe Standard Time, UTC+01:00)

 

 On Instance and Static Method Signatures 

Miha, a C# Yoda, and I talked today. Via IM, everything seemed pointless - we couldn't find a good case for the following thought experiment:

using System;
class Tubo
{
  public static void Test() {}
  private void Test() {} 
}

Note: I'm using Miha's syntax in this post.

We have a static method called Test and an instance method, also called Test. The parameter models of both methods are the same: empty.

Would this compile?

It does not.

The question is why, and who or what causes this. There is actually no technical rationale for not allowing this to compile, since both the compiler and the runtime know the method info upfront.

Since the runtime has all the information it needs, it is strange that this would not be allowed at compile time. However, a (C#) compiler has to determine whether you, the programmer, meant to access a static or an instance method.

Here lie the dragons.

It is illegal in most virtual machine based languages to have the same method name and signature for a static/instance method.

The following is an excerpt from the Java specification:

8.4.2 Method Signature

Two methods have the same signature if they have the same name and argument types.

Two method or constructor declarations M and N have the same argument types if all of the following conditions hold:

  • They have the same number of formal parameters (possibly zero)
  • They have the same number of type parameters (possibly zero)
  • Let <A1,...,An> be the formal type parameters of M and let <B1,...,Bn> be the formal type parameters of N. After renaming each occurrence of a Bi in N's type to Ai the bounds of corresponding type variables and the argument types of M and N are the same.

Java (and also C#) does not allow two methods with the same name and parameter model, no matter what the access modifier is (public, private, internal, protected ...) and no matter whether the method is static or instance based.

Why?

Simple. Programmer ambiguity. There is no technical reason not to allow it.

Consider the following:

using System;
class Tubo
{
  public static void Test() {}
  private void Test() {}
  public void AmbiguousCaller() { Test(); }
}

What would method AmbiguousCaller call? A static Test or instance Test method?

Can't decide?

That's why this is not allowed.

And yes, it would call the instance method if this were allowed, since statics in C# should be called using the class name, as in Tubo.Test(). Note that the preceding example does not compile. Also note that an AmbiguousCaller body of either Test() or Tubo.Test() would be legal.

There is another source of ambiguity. Members in C# cannot be named the same as their enclosing class. Therefore, this is illegal:

using System;
class Tubo
{
  private int Tubo;
}

It is. Run csc /t:library on it to confirm.

Since you confirmed this fact, consider the following:

using System;
public class Tubo
{
  public static void StaticMethod() {}
  public void InstanceMethod() {}
}

public class RunMe
{
  public static void Main()
  {
    Tubo Tubo = new Tubo();
    Tubo.InstanceMethod();
    Tubo.StaticMethod();
  }
}

Focus on the Main method body. In Tubo.InstanceMethod() an instance method is called and a reference (the local variable Tubo) is used. In Tubo.StaticMethod() a static method is called and Tubo is not an instance reference, but the class name.

It all comes down to programmer ambiguity. Given the chance, I would support this design decision too.
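
By the way, following the usual naming conventions makes the ambiguity disappear in practice - a quick sketch:

Tubo tubo = new Tubo();  // camelCase local, no clash with the class name
tubo.InstanceMethod();   // unambiguously an instance call
Tubo.StaticMethod();     // unambiguously a static call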

Categories:  CLR
Saturday, 03 November 2007 22:04:38 (Central Europe Standard Time, UTC+01:00)

 

 WCF: Passing Collections Through Service Boundaries, Why and How 

In WCF, collection data passed through the service boundary goes through a type filter, meaning you will not necessarily get the intrinsic service-side type on the client, even if you're expecting it.

No matter whether you return an int[] or a List<int>, you will get an int[] by default on the client.

The main reason is that there is no representation for System.Collections.Generic.List or System.Collections.Generic.LinkedList in service metadata. The concept of System.Collections.Generic.List<int>, for example, has no different semantic meaning from an integer array - it's still a list of ints - but it allows you to program against it with ease.

Though, if one asks nicely, it is possible to guarantee the preferred collection type on the client proxy in certain scenarios.

One-dimensional collections, like List<T>, LinkedList<T> or SortedList<T>, are always exposed as T arrays in the client proxy. Dictionary<K, V>, though, is regenerated on the client via an annotation hint in the WSDL (XSD, if we are precise). More on that later.

Let's look into it.

The WCF infrastructure bends over backwards to simplify client development. If the service side contains a genuinely serializable collection (marked with [Serializable], not [DataContract]) that is also concrete (not an interface) and has an Add method with one of the following signatures...

public void Add(object obj);
public void Add(T item);

... then WCF will serialize the data to an array of the collection's element type.

Too complicated? Consider the following:

[ServiceContract]
interface ICollect
{
   [OperationContract]
   void AddCoin(Coin coin);

   [OperationContract]
   List<Coin> GetCoins();
}

Since List<T> is marked with [Serializable] and supports a void Add(T item) method, the following wire representation will be passed to the client:

[ServiceContract]
interface ICollect
{
  [OperationContract]
  void AddCoin(Coin coin);

  [OperationContract]
  Coin[] GetCoins();
}

Note: The Coin class should be marked with either [DataContract] or [Serializable] in this case.
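
For completeness, a minimal Coin type could look like this (the members are illustrative, not from the original contract):

[DataContract]
public class Coin
{
   [DataMember]
   public string Denomination;

   [DataMember]
   public int Year;
}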

So what happens if one wants the same contract on the client proxy as on the service? There is an option in the WCF proxy generator, svcutil.exe, to force generation of class definitions with a specific collection type.

Use the following for List<T>:

svcutil.exe http://service/metadata/address
  /collectionType:System.Collections.Generic.List`1

Note: List`1 uses a backtick, not the normal single quote character.

What the /collectionType switch (short form /ct) does is force generation of strongly typed collection types. It will generate the holy grail on the client:

[ServiceContract]
interface ICollect
{
  [OperationContract]
  void AddCoin(Coin coin);

  [OperationContract]
  List<Coin> GetCoins();
}

In Visual Studio 2008 you will even have an option to specify which types you want to use as collection types and dictionary collection types when adding a service reference.

On the other hand, dictionary collections, as in System.Collections.Generic.Dictionary<K, V>, will go through to the client no matter what you specify as the /ct parameter (or even if you don't specify it at all).

If you define the following on the service side...

[OperationContract]
Dictionary<string, int> GetFoo();

... this will get generated on the client:

[OperationContract]
Dictionary<string, int> GetFoo();

Why?

Because using System.Collections.Generic.Dictionary probably means you know there is no guarantee that a client-side representation will be possible if the client is on an alternative platform. There is no way to meaningfully convey the semantics of a .NET dictionary class using WSDL/XSD.

So, how does the client know?

In fact, the values are serialized as joined name value pair elements as the following schema says:

<xs:complexType name="ArrayOfKeyValueOfstringint">
  <xs:annotation>
    <xs:appinfo>
      <IsDictionary xmlns="http://schemas.microsoft.com/2003/10/Serialization/">
        true
      </IsDictionary>
    </xs:appinfo>
  </xs:annotation>
  <xs:sequence>
    <xs:element minOccurs="0" maxOccurs="unbounded"
      name="KeyValueOfstringint">
      <xs:complexType>
        <xs:sequence>
          <xs:element name="Key" nillable="true" type="xs:string" />
          <xs:element name="Value" type="xs:int" />
        </xs:sequence>
      </xs:complexType>
    </xs:element>
  </xs:sequence>
</xs:complexType>
<xs:element name="ArrayOfKeyValueOfstringint"
  nillable="true" type="tns:ArrayOfKeyValueOfstringint" />

Note: You can find this schema under the types definition of the metadata endpoint. Usually ?xsd=xsd2, instead of ?wsdl, will suffice.

As in:

<GetFooResponse>
  <KeyValueOfstringint>
    <Key>one</Key>
    <Value>1</Value>
  </KeyValueOfstringint>
  <KeyValueOfstringint>
    <Key>two</Key>
    <Value>2</Value>
  </KeyValueOfstringint>

  <!-- ... -->

  <KeyValueOfstringint>
    <Key>N</Key>
    <Value>N</Value>
  </KeyValueOfstringint>
</GetFooResponse>

The meaningful part of the type's service-to-client transportation resides in the <xs:annotation> element, specifically in /xs:annotation/xs:appinfo/IsDictionary, which declares that this complex type represents a System.Collections.Generic.Dictionary class. Annotation elements in XML Schema are parser specific and do not convey any structure or data type semantics; they are there for the receiver to interpret.

This must be one of the finest textbook cases of using XML Schema annotations. It allows a well-informed client (as in a .NET client, VS 2008 or svcutil.exe) to utilize the semantic meaning if it understands it. If not, no harm is done, since the best possible representation, in the form of joined name-value pairs, still goes through to the client.
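
On a well-informed .NET client this means you simply program against the typed dictionary (a sketch; FooServiceClient is an assumed name for the generated proxy):

Dictionary<string, int> foo = new FooServiceClient().GetFoo(); // regenerated from the annotation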

Categories:  .NET 3.0 - WCF | .NET 3.5 - WCF | Web Services
Thursday, 27 September 2007 22:04:47 (Central Europe Standard Time, UTC+01:00)

 

 Approaches to Document Style Parameter Models 

I'm a huge fan of document style parameter models when implementing a public, programmatic façade to business functionality that changes often.

[ServiceContract]
public interface IDocumentParameterModel
{
   [OperationContract]
   [FaultContract(typeof(XmlInvalidException))]
   XmlDocument Process(XmlDocument doc);
}

This contract defines a simple method, Process, which processes the input document. The idea is to define the document schema and validate inbound XML documents against it, throwing exceptions on validation errors. The processing semantics are arbitrary and can support any kind of action, depending on the defined invoke document schema.

A simple instance document which validates against a version 1.0 processing schema could look like this:

<?xml version="1.0?>
<Process xmlns="http://www.gama-system.com/process10.xsd" version="1.0">
   <Instruction>Add</Instruction>
   <Parameter1>10</Parameter1>
   <Parameter2>21</Parameter2>
</Process>

Another processing instruction, supported in version 1.1 of the processing schema, with different semantics could be:

<?xml version="1.0?>
<Process xmlns="http://www.gama-system.com/process11.xsd" version="1.1">
   <Instruction>Store</Instruction>
   <Content>77u/PEFwcGxpY2F0aW9uIHhtbG5zPSJod...mdVcCI</Content>
</Process>

Note that the default XML namespace has changed, but that is not required. Using a separate namespace per version merely allows you to automate schema retrieval from a schema repository (think System.Xml.Schema.XmlSchemaSet), load all supported schemas and validate automatically.

public class ProcessService : IDocumentParameterModel
{
   public XmlDocument Process(XmlDocument doc)
   {
      XmlReaderSettings sett = new XmlReaderSettings();

      sett.Schemas.Add(<document namespace 1>, <schema uri 1>);
      ...
      sett.Schemas.Add(<document namespace n>, <schema uri n>);

      sett.ValidationType = ValidationType.Schema;
      sett.ValidationEventHandler += new
         ValidationEventHandler(XmlInvalidHandler);

      // wrap the XML text in a StringReader - the string overload of
      // XmlReader.Create expects a URI, not document content
      XmlReader reader = XmlReader.Create(new StringReader(doc.OuterXml), sett);
      while (reader.Read()) { }

      // processing goes here
      ...
   }

   static void XmlInvalidHandler(object sender, ValidationEventArgs e)
   {
      if (e.Severity == XmlSeverityType.Error)
         throw new XmlInvalidException(e.Message);
   }
}

The main benefit of this approach is the decoupling of the parameter model and the processing version from the communication contract. A service maintainer has the option to change the terms of processing over time, while still supporting older, version-aware document instances.

This notion is of course most beneficial in situations where your processing syntax changes frequently and has complex validation schemas. A simple case presented here is informational only.

So, how do we validate?

  • We need to check the instance document version first. This is especially true in cases where the document is not qualified with a different namespace when the version changes.
  • We grab the appropriate schema or schema set
  • We validate the inbound XML document, throw a typed XmlInvalidException if invalid
  • We process the call

The service side is quite straightforward.
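
Putting the four steps together, a dispatch sketch could look like this (schemaRepository and Dispatch are hypothetical helpers, not part of the contract):

public XmlDocument Process(XmlDocument doc)
{
   // 1. check the instance document version
   string version = doc.DocumentElement.GetAttribute("version");

   // 2. grab the appropriate schema set for that version
   doc.Schemas = schemaRepository[version]; // hypothetical Dictionary<string, XmlSchemaSet>

   // 3. validate, throwing the typed fault on the first error
   doc.Validate(delegate(object sender, ValidationEventArgs e)
   {
      if (e.Severity == XmlSeverityType.Error)
         throw new XmlInvalidException(e.Message);
   });

   // 4. process the call
   return Dispatch(version, doc); // hypothetical per-version handler
}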

Let's look at the client and the options for painless generation of service calls using this mechanism.

Generally, one can always produce an instance invoke document by hand on the client - by hand meaning using System.Xml classes and DOM concepts. Since this is highly error prone and gets tedious with increasing complexity, there is the notion of a schema compiler, which automatically translates your XML Schema into the CLR type system. Xsd.exe and XmlSerializer are your friends.

If your schema requires parts of the instance document to be digitally signed or encrypted, you will need to adorn the serializer output with some manual DOM work. This might also be a reason to use the third option.

The third, and easiest, option for the general developer is to provide a local object model which serializes the requests on the client. Here is an example:

ProcessInstruction pi = new ProcessInstruction();
pi.Instruction = "Add";
pi.Parameter1 = 10;
pi.Parameter2 = 21;
pi.Sign(cert); // pi.Encrypt(cert);
pi.Serialize();
proxy.Process(pi.SerializedForm);
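
For illustration, a minimal ProcessInstruction sketch (hand-rolled against the version 1.0 invoke schema shown earlier; signing is elided):

public class ProcessInstruction
{
   public string Instruction;
   public int Parameter1;
   public int Parameter2;

   public XmlDocument SerializedForm;

   public void Serialize()
   {
      // build the version 1.0 invoke document so the client never touches DOM code
      XmlDocument doc = new XmlDocument();
      doc.LoadXml(String.Format(
         "<Process xmlns=\"http://www.gama-system.com/process10.xsd\" version=\"1.0\">" +
         "<Instruction>{0}</Instruction>" +
         "<Parameter1>{1}</Parameter1>" +
         "<Parameter2>{2}</Parameter2></Process>",
         Instruction, Parameter1, Parameter2));
      SerializedForm = doc;
   }
}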

The main benefit of this approach comes down to having options on both the server and the client. Client developers get three different levels of complexity for generating service calls. The model allows them to be as close to the wire as they see fit - or completely abstracted from the wire representation, if you provide a local object model to access your services.

Categories:  .NET 3.0 - WCF | .NET 3.5 - WCF | Architecture | Web Services | XML
Monday, 24 September 2007 11:19:10 (Central Europe Standard Time, UTC+01:00)

 

 XmlSerializer, Ambient XML Namespaces and Digital Signatures 

If you use the XmlSerializer type to serialize documents which are digitally signed later on, you should be careful.

XML namespaces which are included in the serialized form could cause trouble for anyone signing the document after serialization, especially in the case of normalized signature checks.

Let's go step by step.

Suppose we have this simple schema, let's call it problem.xsd:

<?xml version="1.0" encoding="utf-8"?>
<xs:schema targetNamespace="http://www.gama-system.com/problems.xsd"
           elementFormDefault="qualified"
           xmlns="http://www.gama-system.com/problems.xsd"
           xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="Problem" type="ProblemType"/>
  <xs:complexType name="ProblemType">
    <xs:sequence>
      <xs:element name="Name" type="xs:string" />
      <xs:element name="Severity" type="xs:int" />
      <xs:element name="Definition" type="DefinitionType"/>
      <xs:element name="Description" type="xs:string" />
    </xs:sequence>
  </xs:complexType>
  <xs:complexType name="DefinitionType">
    <xs:simpleContent>
      <xs:extension base="xs:base64Binary">
        <xs:attribute name="Id" type="GUIDType" use="required"/>
      </xs:extension>
    </xs:simpleContent>
  </xs:complexType>
  <xs:simpleType name="GUIDType">
    <xs:restriction base="xs:string">
      <xs:pattern value="Id-[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-
                         [0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"/>
    </xs:restriction>
  </xs:simpleType>
</xs:schema>

This schema describes a problem, which is defined by a name (typed as string), severity (typed as integer), definition (typed as byte array) and description (typed as string). The schema also says that the definition of a problem has an Id attribute, which we will use when digitally signing a specific problem definition. This Id attribute is a GUID with a fixed Id- prefix, as the simple type GUIDType defines.

Instance documents validating against this schema would look like this:

<?xml version="1.0"?>
<Problem xmlns="
http://www.gama-system.com/problems.xsd">
  <Name>Specific problem</Name>
  <Severity>4</Severity>
  <Definition Id="c31dd112-dd42-41da-c11d-33ff7d2112s2">MD1sDQ8=</Definition>
  <Description>This is a specific problem.</Description>
</Problem>

Or this:

<?xml version="1.0"?>
<Problem xmlns="http://www.gama-system.com/problems.xsd">
  <Name>XML DigSig Problem</Name>
  <Severity>5</Severity>
  <Definition Id="b01cb152-cf93-48df-b07e-97ea7f2ec2e9">CgsMDQ8=</Definition>
  <Description>Ambient namespaces break digsigs.</Description>
</Problem>

Mark this one as exhibit A.

Only a few of you out there should still be generating XML documents by hand, since there is such a thing as a schema compiler. In the .NET Framework world that is xsd.exe, which bridges the gap between the XML type system and the CLR type system.

xsd.exe /c problem.xsd

The tool compiles the problem.xsd schema into the CLR type system. This allows you to use the schema-defined classes and serialize them later with the XmlSerializer class. The serialization program for the second instance document (exhibit A) would look like this:

// generate problem
ProblemType problem = new ProblemType();
problem.Name = "XML DigSig Problem";
problem.Severity = 5;
DefinitionType dt = new DefinitionType();
dt.Id = "Id-" + Guid.NewGuid().ToString(); // the GUIDType pattern demands the Id- prefix
dt.Value = new byte[] { 0xa, 0xb, 0xc, 0xd, 0xf };
problem.Definition = dt;
problem.Description = "Ambient namespaces break digsigs.";

// serialize problem
XmlSerializer ser = new XmlSerializer(typeof(ProblemType));
FileStream stream = new FileStream("Problem.xml", FileMode.Create, FileAccess.Write);
ser.Serialize(stream, problem);
stream.Close();
           
Here lie the dragons.

The XmlSerializer class's default serialization mechanism outputs this:

<?xml version="1.0"?>
<Problem xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema"
         xmlns="http://www.gama-system.com/problems.xsd">
  <Name>XML DigSig Problem</Name>
  <Severity>5</Severity>
  <Definition Id="b01cb152-cf93-48df-b07e-97ea7f2ec2e9">CgsMDQ8=</Definition>
  <Description>Ambient namespaces break digsigs.</Description>
</Problem>

Mark this one as exhibit B.

If you look closely, you will notice two additional namespace declarations in exhibit B, bound to the xsi and xsd prefixes, compared to exhibit A.

The fact is that both documents (exhibit B and exhibit A) are valid against the problem.xsd schema.

<theory>

Prefixed namespaces are part of the XML Infoset. All XML processing is done at the XML Infoset level. Since exhibit B only declares the xsi and xsd prefixes and never uses them, the document itself is not semantically different from exhibit A. That stated, the instance documents are equivalent and should validate against the same schema.

</theory>

What happens if we sign the Definition element of exhibit B (XmlSerializer generated, prefixed namespaces present)?

We get this:

<?xml version="1.0"?>
<Problem xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema"
         xmlns="http://www.gama-system.com/problems.xsd">
  <Name>XML DigSig Problem</Name>
  <Severity>5</Severity>
  <Definition Id="b01cb152-cf93-48df-b07e-97ea7f2ec2e9">CgsMDQ8=</Definition>
  <Description>Ambient namespaces break digsigs.</Description>
  <Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
    <SignedInfo>
      <CanonicalizationMethod Algorithm="http://www.w3.org/TR/...20010315" />
      <SignatureMethod Algorithm="http://www.w3.org/...rsa-sha1" />
      <Reference URI="#Id-b01cb152-cf93-48df-b07e-97ea7f2ec2e9">
        <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" />
        <DigestValue>k3gbdFVJEpv4LWJAvvHUZZo/VUQ=</DigestValue>
      </Reference>
    </SignedInfo>
    <SignatureValue>K8f...p14=</SignatureValue>
    <KeyInfo>
      <KeyValue>
        <RSAKeyValue>
          <Modulus>eVs...rL4=</Modulus>
          <Exponent>AQAB</Exponent>
        </RSAKeyValue>
      </KeyValue>
      <X509Data>
        <X509Certificate>MIIF...Bw==</X509Certificate>
      </X509Data>
    </KeyInfo>
  </Signature>

</Problem>

Let's call this document exhibit D.

This document is the same as exhibit B, but has the Definition element digitally signed. Note the /Problem/Signature/SignedInfo/Reference[@URI] value: the digital signature covers only the Definition element, not the complete document.
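
For reference, a signature like the one above can be produced with System.Security.Cryptography.Xml along these lines (a sketch; real key handling and the X509Data clause are elided):

XmlDocument doc = new XmlDocument();
doc.PreserveWhitespace = true;
doc.Load("Problem.xml");

RSA rsa = new RSACryptoServiceProvider(); // sketch only - use your actual signing key

SignedXml signedXml = new SignedXml(doc);
signedXml.SigningKey = rsa;

// reference only the Definition element, through its Id attribute
signedXml.AddReference(new Reference("#Id-b01cb152-cf93-48df-b07e-97ea7f2ec2e9"));

KeyInfo keyInfo = new KeyInfo();
keyInfo.AddClause(new RSAKeyValue(rsa));
signedXml.KeyInfo = keyInfo;

signedXml.ComputeSignature();

// append the Signature element as the last child of Problem
doc.DocumentElement.AppendChild(doc.ImportNode(signedXml.GetXml(), true));
doc.Save("Problem.xml");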

Now, if one were to verify the signature on the same document without the prefixed namespace declarations, as in:

<?xml version="1.0"?>
<Problem xmlns="http://www.gama-system.com/problems.xsd">
  <Name>XML DigSig Problem</Name>
  <Severity>5</Severity>
  <Definition Id="b01cb152-cf93-48df-b07e-97ea7f2ec2e9">CgsMDQ8=</Definition>
  <Description>Ambient namespaces break digsigs.</Description>
  <Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
    ...
  </Signature>

</Problem>

... the signature verification would fail. Let's call this document exhibit C.

<theory>

As said earlier, all XML processing is done at the XML Infoset level. Since ambient prefixed namespace declarations are visible in all child elements of the declaring element, exhibits C and D are different. Explicitly, the element contexts of the Definition element differ: exhibit C does not have the ambient declarations present and exhibit D does. The signature verification therefore fails.

</theory>

Solution?

Much simpler than what's written above. Force the XmlSerializer class to serialize what should have been serialized in the first place. We need to declare the namespace definition of the serialized document ourselves and prevent XmlSerializer from being too smart. The .NET Framework serialization mechanism includes an XmlSerializerNamespaces class which can be passed to the serialization process.

Since we know the only (and by the way, default) namespace of the serialized document, this makes things work out OK:

// generate problem
ProblemType problem = new ProblemType();
problem.Name = "XML DigSig Problem";
problem.Severity = 5;
DefinitionType dt = new DefinitionType();
dt.Id = "Id-" + Guid.NewGuid().ToString(); // the GUIDType pattern demands the Id- prefix
dt.Value = new byte[] { 0xa, 0xb, 0xc, 0xd, 0xf };
problem.Definition = dt;
problem.Description = "Ambient namespaces break digsigs.";

// serialize problem
XmlSerializerNamespaces xsn = new XmlSerializerNamespaces();
xsn.Add(String.Empty, "http://www.gama-system.com/problems.xsd");

XmlSerializer ser = new XmlSerializer(typeof(ProblemType));
FileStream stream = new FileStream("Problem.xml", FileMode.Create, FileAccess.Write);
ser.Serialize(stream, problem, xsn);
stream.Close();

This forces XmlSerializer to produce a valid document - with valid XML element contexts and without any ambient namespaces.

The question is why XmlSerializer produces these namespaces by default. That should be a topic for another post.

Categories:  CLR | XML
Wednesday, 19 September 2007 21:57:57 (Central Europe Standard Time, UTC+01:00)

 

 Oh my God: 1.1 

Shame? Nokia?

Same sentence, as in Shame and Nokia?

There is just no pride in IT anymore. Backbones are long gone too.

Categories:  Apple | Other | Personal
Wednesday, 29 August 2007 17:40:16 (Central Europe Standard Time, UTC+01:00)

 

 Oh my God: 1.0 

This post takes shame to a whole new level.

There is no excuse for having a Microsoft Access database serve any kind of content in an online banking solution.

The funny thing is that even the excuses in the comments seem fragile. They obviously just don't get it. The bank should not be defending its position, but focusing on changing it immediately.

So, they should fix this ASAP, then fire PR, then apologize.

Well done, David, for exposing what should never have reached a production environment.

Never. Ever.

Categories:  Other | Personal
Wednesday, 29 August 2007 17:35:38 (Central Europe Standard Time, UTC+01:00)

 

 Apple vs. Dell 

This is why one should buy the best of both worlds. The Mac rules on the client. Dell is quite competitive on the (home) server market.

We don't care about cables around servers. Yet.

So? 'nuff said.

Categories:  Apple | Personal
Wednesday, 08 August 2007 19:37:18 (Central Europe Standard Time, UTC+01:00)

 

 Windows Vista Performance and Reliability Updates 

These have been brewing for a couple of months. They're out today.

They contain a number of patches that improve and fix Vista. You can get them here:

Windows Vista Performance Update:

Windows Vista Reliability Update:

Go get them. Now.

Categories:  Windows Vista
Wednesday, 08 August 2007 07:13:20 (Central Europe Standard Time, UTC+01:00)

 

 Midyear Reader Profile 

Since it's summer here in Europe, and thus roughly the middle of the year, here comes your profile, Dear Reader.

Mind you, only the last year's data (June 2006 - June 2007) is included, according to Google Analytics.

The charts covered browser versions, operating systems, browser versions by operating system, screen resolutions, Adobe Flash support and, finally and most strangely, Java support.

The last one surprised me.

Categories:  Blogging
Thursday, 26 July 2007 21:58:25 (Central Europe Standard Time, UTC+01:00)

 

 Managed TxF: Distributed Transactions and Transactional NTFS 

Following up on my previous post, I managed to get a distributed transaction scenario working using WCF, MTOM and WS-AtomicTransaction.

This means that you have the option to transport arbitrary files, using transactional ACID semantics, from the client, over HTTP and MTOM.

The idea is to integrate a distributed transaction with TxF, the NTFS file system transaction support. This only works on Windows Server 2008 (formerly Longhorn Server) and Windows Vista.

Download: Sample code

If the client starts a transaction, then all files within it get stored on the server. If something fails or the client does not commit, no harm is done. The beauty of this is that it's all seamlessly integrated into the current communication/OS stack.

This is shipping technology; you just have to dive a little deeper to use it.


There are a couple of issues that need to be addressed before we move to the implementation:

  • You should use the managed wrapper included here
    There is support for TransactedFile and TransactedDirectory built in. The next version of the VistaBridge samples will include an updated version of this wrapper.

  • Limited distributed transaction support on the system drive
    There is no way to give DTC the superior access coordinator role for TxF on the system drive (think c:\, the system drive). This is a major downside of the current TxF implementation, since I would prefer that system/boot files be transaction-locked anyway. You have two options if you want to run the following sample:

    • Define secondary resource manager for your directory
      This lets the system drive's resource manager keep protecting system files, while creating a secondary resource manager for the specified directory. Do this:
      • fsutil resource create c:\txf
      • fsutil resource start c:\txf
        You can query your new secondary resource manager by fsutil resource info c:\txf.

    • Use another partition
      Any partition outside the system partition is ok. You cannot use network shares, but USB keys will work. Plug it in and change the paths as defined at the end of this post.

OK, here we go.

Here's the service contract:

[ServiceContract(SessionMode = SessionMode.Allowed)]
interface ITransportFiles
{
   [OperationContract]
   [TransactionFlow(TransactionFlowOption.Allowed)]
   byte[] GetFile(string name);

   [OperationContract]
   [TransactionFlow(TransactionFlowOption.Allowed)]
   void PutFile(byte[] data, string name);
}

We allow a sessionful binding (it's not required, though) and allow transactions to flow from the client side. Again, transactions are not mandatory, since the client may opt out of using them and just transport files without a transaction.

The provided transport mechanism uses MTOM, since the contract's parameter model is appropriate for it and since MTOM is much more efficient at transferring binary data.

So here's the service config:

<system.serviceModel>
  <bindings>
    <wsHttpBinding>
      <binding name="MTOMBinding"
          transactionFlow="true"
          messageEncoding="Mtom"
          maxReceivedMessageSize="10485760">
        <readerQuotas maxArrayLength="10485760"/>

      </binding>
    </wsHttpBinding>
  </bindings>
  <services>
    <service name="WCFService.TransportService">
      <host>
        <baseAddresses>
          <add baseAddress="
http://localhost:555/transportservice">
        </baseAddresses>
      </host>
      <endpoint address=""
          binding="wsHttpBinding"
          bindingConfiguration="MTOMBinding"
          contract="WCFService.ITransportFiles"/>
    </service>
  </services>
</system.serviceModel>

Here, the MTOMBinding is used to specify MTOM wire encoding. Also, the reader quotas and the maxReceivedMessageSize attribute are raised to 10 MB, since we are probably transferring larger binary files.

The service implementation is straightforward:

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
class TransportService : ITransportFiles
{
    [OperationBehavior(TransactionScopeRequired = true)]
    public byte[] GetFile(string name)
    {
        Console.WriteLine("GetFile: {0}", name);
        Console.WriteLine("Distributed Tx ID: {0}",
          Transaction.Current.TransactionInformation.DistributedIdentifier);
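        // ReadFully drains a stream into a byte[]; the helper is defined in the sample download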
        return ReadFully(TransactedFile.Open(@"C:\TxF\Service\" + name,
          FileMode.Open, FileAccess.Read, FileShare.Read), 0);
    }

    [OperationBehavior(TransactionScopeRequired = true)]
    public void PutFile(byte[] data, string filename)
    {
        Console.WriteLine("PutFile: {0}", filename);
        Console.WriteLine("Distributed Tx ID: {0}",
          Transaction.Current.TransactionInformation.DistributedIdentifier);

        using (BinaryWriter bw = new BinaryWriter(
            TransactedFile.Open(@"C:\TxF\Service\" + filename,
              FileMode.Create, FileAccess.Write, FileShare.Write)))
        {
            bw.Write(data, 0, data.Length);
           
            // clean up
            bw.Flush();
        }
    }
}

The client does four things:

  1. Sends three files (client - server) - no transaction
  2. Gets three files (server - client) - no transaction
  3. Sends three files (client - server) - distributed transaction, all or nothing
  4. Gets three files (server - client) - distributed transaction, all or nothing
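
For illustration, step 3 could look roughly like this on the client (a sketch; TransportFilesClient is an assumed name for the generated proxy):

using (TransactionScope tx = new TransactionScope(TransactionScopeOption.RequiresNew))
{
    TransportFilesClient proxy = new TransportFilesClient();

    // all three calls enlist in the same flowed WS-AtomicTransaction
    proxy.PutFile(File.ReadAllBytes(@"C:\TxF\Client\One.txt"), "One.txt");
    proxy.PutFile(File.ReadAllBytes(@"C:\TxF\Client\Two.txt"), "Two.txt");
    proxy.PutFile(File.ReadAllBytes(@"C:\TxF\Client\Three.txt"), "Three.txt");

    proxy.Close();
    tx.Complete(); // no Complete, no files - all or nothing
}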

Before you run:

  • Decide on the secondary resource manager option (system drive, enable it using fsutil.exe) or use another partition (USB key)
  • Change the paths to your scenario. The sample uses C:\TxF, C:\TxF\Service and C:\TxF\Client and a secondary resource manager. Create these directories before running the sample.

Download: Sample code

This sample is provided without any warranty. It's a sample, so don't use it in production environments.

Monday, 23 July 2007 21:54:13 (Central Europe Standard Time, UTC+01:00)

 

 WS-Management: Windows Vista and Windows Server 2008 

I dived into WS-Management support in Vista / Windows Server 2008 (formerly Longhorn Server) this weekend. There are a couple of caveats if you want to enable remote WS-Management based access to these machines. Support for remote management is also built into Windows Server 2003 R2.

The WS-Management specification allows remote access to any resource that implements it. Everything accessed in the WS-Management world is a resource, identifiable by a URI. The spec uses WS-Eventing, WS-Enumeration, WS-Transfer and SOAP 1.2 over HTTP.

Since remote management implementation in Windows acknowledges all the work done in the WMI space, you can simply issue commands in terms of URIs that incorporate WMI namespaces.

For example, the WMI class or action (method) is identified by a URI, just as any other WS-Management based resource. You can construct access to any WMI class / action using the following semantics:

  • http://schemas.microsoft.com/wbem/wsman/1/wmi denotes a default WMI namespace accessible via WS-Management
  • http://schemas.microsoft.com/wbem/wsman/1/wmi/root/default denotes access to root/default namespace

Since the majority of WMI classes are in root/cimv2 namespace, you should use the following URI to access those:

http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2
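
For example, enumerating the Win32_OperatingSystem class on an already configured remote box (configuration follows below) could look like this - an illustrative command, substitute your own host and credentials:

winrm enumerate http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/Win32_OperatingSystem -r:http://<server ip> -u:<username> -p:<password>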

OK, back to WS-Management and its implementation in Vista / Windows Server 2008.

First, Windows Server 2008 has the Windows Remote Management service started by default. Vista doesn't, so start it up if you're on a Vista box.

Second, depending on your network configuration, if you're in a workgroup environment (not joined to a domain) you should tell your client to trust the server side.

Trusting the server side involves executing a command on the client. The remote management tools included in Windows Server 2008 / Windows Vista are capable of configuring the local machine and issuing commands to a remote machine. There are basically two tools which allow you to set up the infrastructure and issue remote commands to the destination:

  • winrm.cmd (uses winrm.vbs), defines configuration of local machine
  • winrs.exe (winrscmd.dll and friends), Windows Remote Shell client, issues commands to a remote machine

As said, WS-Management support is enabled by default in Windows Server 2008. This means the appropriate service is running, but one should still define a basic configuration for it. No listener is active by default; you have to opt in.

Since Microsoft is progressing to a more admin friendly environment, this is done by issuing the following command (server command):

winrm quickconfig (or winrm qc)

This enables the obvious:

  • Starts the Windows Remote Management service (if not started; the Windows Vista case)
  • Enables autostart on the Windows Remote Management service
  • Starts up a listener for all machine's IP addresses
  • Configures appropriate firewall exceptions

You should get the following output:

[c:\windows\system32]winrm quickconfig

WinRM is not set up to allow remote access to this machine for management.
The following changes must be made:

Create a WinRM listener on HTTP://* to accept WS-Man requests to any IP on this machine.
Enable the WinRM firewall exception.

Make these changes [y/n]? y

WinRM has been updated for remote management.
Created a WinRM listener on HTTP://* to accept WS-Man requests to any IP on this machine.
WinRM firewall exception enabled.

There are options in winrm.cmd to fine-tune everything, including the listening ports and/or SSL (HTTPS) support. In a trusted environment you can probably live with the HTTP based mechanism, since you are located behind the trust boundary and have complete control over the available (allowed) TCP ports.
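
For the HTTPS case, creating the listener looks roughly like this (a sketch; you need the thumbprint of a server authentication certificate whose CN matches the host):

winrm create winrm/config/Listener?Address=*+Transport=HTTPS @{Hostname="<server hostname>"; CertificateThumbprint="<certificate thumbprint>"}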

You can now issue remote management commands against the configured server, but only if the communication is trusted. So in case you are in a workgroup environment (client and server in a workgroup), this should get you started (client command):

winrm set winrm/config/client @{TrustedHosts="<server ip or hostname>"}

You can specify multiple trusted servers by separating them with commas:

winrm set winrm/config/client @{TrustedHosts="10.10.10.108, 10.10.10.109"}

This trusts the server(s) no matter what. Even over HTTP only.

Enumerating the configured listeners - remember, listener is located on the destination side - is done via:

winrm enumerate winrm/config/listener

OK, now we're able to issue commands to the remote side using WS-Management infrastructure. You can, for example, try this (client command):

winrs -r:http://<server ip> -u:<username> -p:<password> <shell command>, e.g.:
winrs -r:http://10.10.10.108 -u:administrator -p:p$38E0jjW! dir /s

or

winrs -r:http://10.10.10.108 -u:administrator -p:p$38E0jjW! hostname

You can even expose the HTTP based approach through your firewall if you're crazy enough. But using HTTPS would be the smart way out. What you need is a valid certificate with server authentication capability and a matching CN. Self-signed certs won't work.

Simple and effective.

Categories:  Web Services | Windows Vista
Sunday, 22 July 2007 19:48:22 (Central Europe Standard Time, UTC+01:00)

 

 Managed TxF: Support in Windows Vista and Windows Server 2008 

If you happen to be on a Windows Vista or Windows Server 2008 box, there is some goodness going your way.

There is a basic managed TxF (Transactional NTFS) wrapper available (unveiled by Jason Olson).

What this thing gives you is this:

try
{
   using (TransactionScope tsFiles = new TransactionScope
         (TransactionScopeOption.RequiresNew))
   {
      WriteFile("TxFile1.txt");
      throw new FileNotFoundException();
      WriteFile("TxFile2.txt");
      tsFiles.Complete();
   }
}
catch (Exception ex)
{
   Console.WriteLine(ex.Message);
}

The WriteFile method, which does, well, the file writing, is here:

using (TransactionScope tsFile = new TransactionScope
      (TransactionScopeOption.Required))
{
   Console.WriteLine("Creating transacted file '{0}'.", filename);

   using (StreamWriter tFile = new StreamWriter(TransactedFile.Open(filename,
          FileMode.Create,
          FileAccess.Write,
          FileShare.None)))
   {
      tFile.Write(String.Format("Random data. My filename is '{0}'.",
          filename));
   }

   tsFile.Complete();
   Console.WriteLine("File '{0}' written.", filename);
}

So we have a nested TransactionScope and a curious type - TransactedFile. Mind you, there is support for TransactedDirectory built in as well.

What's happening underneath is awesome. The wrapper talks to the unmanaged implementation of TxF, which is built into every Vista / Windows Server 2008 box.

What you get is transactional file system support with System.Transactions. And it's going to go far beyond that.

I wrote some sample code; go get it. Oh, BTW, remove the exception line to see the real benefit.

Download: Sample code

This sample is provided without any warranty. It's a sample, so don't use it in production environments.

Categories:  Transactions | Windows Vista
Friday, 20 July 2007 15:59:16 (Central Europe Standard Time, UTC+01:00)

 

 Problem: Adding custom properties to document text in Word 2007 

There is some serious pain involved when you need to insert a simple custom document property into multiple Word 2007 text areas.

Say you have a Version property that you need to update using the document property mechanics. And say you use it in four different locations inside your document.

  • There is no Ribbon command for it. There was a menu option back in the Word 2003 days.
  • There is no simple way of adding it to The Ribbon. You have to customize the Quick Access Toolbar and stick with ugly, limited-use icons forever or so.
    • You need to choose All Commands in Customize Quick Access Toolbar to find the Insert Field option.
  • This is not the only limitation for the power user. The number of simplifications for the casual user is equal to the number of limitations for the power user. And yes, I know, casual users win the numbers battle.

So:

  1. Right click The Ribbon and select Customize Quick Access Toolbar
  2. Select All Commands and Insert Field
  3. Add it to Custom Quick Access Toolbar
  4. Click the new icon
  5. In Field names select DocProperty
  6. Select your value, in this example Version

Yes. Ease of use.

Please, give me an option to get my menus and keyboard shortcuts back.

Pretty please.

 

Categories:  Microsoft | Personal | Work
Monday, 09 July 2007 21:44:50 (Central Europe Standard Time, UTC+01:00)

 

Copyright © 2003-2024, Matevž Gačnik

The opinions expressed herein are my own personal opinions and do not represent my company's view in any way.

My views often change.

This blog is just a collection of bytes.
