Debugging Windows Azure DevFabric HTTP 500 Errors 

While developing with the Windows Azure SDK and the local Azure development fabric, when things go wrong, they go really wrong.

It could be something as obscure as an empty element left somewhere in Web.config, or a certificate issue.

The problem is that before Visual Studio attaches a debugger to the IIS worker process, a lot of things can go wrong. What you get is this:

There's no Event Log entry and nothing in the typical dev fabric temp folder (always check 'C:\Users\\AppData\Local\Temp\Visual Studio Web Debugger.log' first). Nada.

Poking deeper, what you need to do is allow IIS to respond with full error details. By default, IIS only displays complete error details for local addresses, so to get a detailed report you need to hit the site through a local address.

You can get a local address by fiddling with the site binding that the Azure SDK sets up, using IIS Manager:

  • First, start IIS Management Console.
  • Then right-click your deployment(*).* site and select Edit Bindings.
  • Change the IP address of the binding to All Unassigned.

If you then hit your web / service role using the new local address (something like http://127.0.0.1:82), you will most likely get the full disclosure, like this:

In this case, a service was deployed to the dev fabric with an empty element somewhere in Web.config. The web role was failing before Visual Studio could attach a debugger, and only an HTTP 500 was returned through the normal means of communication.

Categories:  .NET 4.0 - WCF | Windows Azure
Wednesday, April 10, 2013 3:04:49 PM (Central Europe Standard Time, UTC+01:00)  #    Comments

 

 The Case of Guest OS Versioning in Windows Azure 

There's a notion of Windows guest OS versions in Windows Azure. A guest OS can currently (as of Q1 2012) be either a stripped-down version of Windows Server 2008 or a similar version of Windows Server 2008 R2.

You can upgrade your guest OS in Windows Azure Management Portal:

Not that it makes much difference, especially while developing .NET solutions, but I like to be on the newest OS version all the time.

The problem is that the defaults are stale. In version 1.6 of the Windows Azure SDK, the default templates all specify the following:

   xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
   osFamily="1"
   osVersion="*">
     

The osFamily attribute defines the OS version, with 1 being Windows Server 2008 and 2 being Windows Server 2008 R2. If you omit the osFamily attribute, the default is 1 too! Actually, this attribute should probably move to the Role element, since it defines the version of the role's guest OS:

   xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
  
osFamily="1"
   osVersion="*">
 
   
    

 
 
   
    
 

It doesn't make sense to have it normalized over all roles. Also, this schema makes it impossible to leave it out in VM role instances, where it gets ignored.

The osVersion attribute defines which guest OS version should be deployed. The format is either * or WA-GUEST-OS-M.m_YYYYMM-nn, and you should never use the latter. The asterisk means 'please upgrade all my instances automatically'. The asterisk is your friend.

If you want or need Windows Server 2008 R2, set osFamily to 2 in your service configuration XML.

What this means is that even if you publish and then upgrade your guest OS version in the Azure Management Portal, the setting will be reverted the next time you update your app from within Visual Studio.

Categories:  Windows Azure
Sunday, March 31, 2013 9:40:01 PM (Central Europe Standard Time, UTC+01:00)  #    Comments

 

 The Case of Lightweight Azure MMC Snap-In Not Installing on Azure SDK 1.6 

There are a couple of Windows Azure management tools, scripts and PowerShell commandlets available, but I find Windows Azure Platform Management Tool (MMC snap-in) one of the easiest to install and use for different Windows Azure subscriptions.

The problem is that the tool has not been updated for almost a year and is thus failing when you try to install it on top of the latest Windows Azure SDK (currently v1.6).

Here's the solution.

Categories:  Windows Azure
Tuesday, March 26, 2013 11:36:53 AM (Central Europe Standard Time, UTC+01:00)  #    Comments

 

 Bleeding Edge 2011: Pub/Sub Broker Design 

This is the content from my Bleeding Edge 2011 talk on pub/sub broker design and implementation.

Due to constraints of the project (a European Commission project, funded by the EU), I cannot publicly distribute the implementation code at this time. I plan to do so after the review process is done, though I have been advised that this probably won't be possible.

Specifically, this is (a rough contract sketch follows the list):

  • A message based pub/sub broker
  • Can use typed messages
  • Can be extended
  • Can communicate with anyone
  • Supports push and pull models
  • Can call you back
  • Service based
  • Fast (in memory)
  • Is currently fed by the Twitter gardenhose stream, for your pleasure
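
A rough sketch of what such a broker contract could look like (these interfaces and names are illustrative only, not the actual project code):

using System.Collections.Generic;

// Illustrative sketch only - the real broker implementation is not public.
public interface IMessage { }

public interface ISubscriber<in TMessage> where TMessage : IMessage
{
    // Push model: the broker calls the subscriber back when a message arrives.
    void OnMessage(TMessage message);
}

public interface IBroker
{
    // Typed publish: any client or service can push messages in.
    void Publish<TMessage>(TMessage message) where TMessage : IMessage;

    // Push model: register a callback-style subscriber.
    void Subscribe<TMessage>(ISubscriber<TMessage> subscriber) where TMessage : IMessage;

    // Pull model: consumers fetch pending messages on their own schedule.
    IEnumerable<TMessage> Pull<TMessage>(int maxCount) where TMessage : IMessage;
}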

Anyway, I can discuss the implementation and design decisions, so here's the PPT (in Slovenian only).

Downloads: Bleeding Edge 2011, PPTs
Demo front-end: Here

Wednesday, October 05, 2011 10:17:31 AM (Central Europe Standard Time, UTC+01:00)  #    Comments

 

 The Case of Empty OptionalFeatures.exe Dialog 

The following is a three day saga of empty 'Turn Windows Features on or off' dialog.

This dialog, as unimportant as it may seem, is the only orifice into Windows subsystem installations that does not involve command-line msiexec.exe wizardry on obscure system installation folders that nobody wants to understand.

Empty, it looks like this:

First thing anyone should do when it comes to something obscure like this is:

  1. Reinstall the OS (kidding, but would help)
  2. In-place upgrade of the OS (kidding, but would help faster)
  3. Clean reboot (really, but most probably won't help)
  4. Run chkdsk /f and sfc /scannow (really)
  5. If that does not help, proceed below

If you still can't control your MSMQ or IIS installation, then you need to find out which of the servicing packages got corrupted somehow.

Servicing packages are Windows Update MSIs, located in hell under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\Packages. I've got a couple of thousand of them under there, so the only question is how to root the rogue one out.

There's a tool called the System Update Readiness Tool [here] that nobody uses. Its side effect is that it checks for peculiarities like this. Run it, then unleash notepad.exe on C:\Windows\Logs\CBS\CheckSUR.log and look for something like this:

Checking Windows Servicing Packages

Checking Package Manifests and Catalogs
(f) CBS MUM Corrupt 0x800F0900 servicing\Packages\
Package_4_for_KB2446710~31bf3856ad364e35~amd64~~6.1.1.3.mum  Line 1:

(f) CBS Catalog Corrupt 0x800B0100 servicing\Packages\
Package_4_for_KB2446710~31bf3856ad364e35~amd64~~6.1.1.3.cat  

Then find the package in the registry, take ownership of the key, grant yourself permission to delete it, and delete it. Your OptionalFeatures.exe works again, and it took only 10 minutes.

Categories:  Other | Windows 7 | Work
Tuesday, June 07, 2011 7:57:17 AM (Central Europe Standard Time, UTC+01:00)  #    Comments

 

 Clouds Will Fail 

This is a slightly less technical post, covering my experiences and thoughts on cloud computing as a viable business processing platform.

The recent Amazon EC2 failure gathered a considerable amount of press and discussion coverage. Most discussions revolve around the failure of cloud computing's promise to never go down and never lose a bit of information.

This is wrong and has been wrong for a couple of years. Marketing people should not be making promises their technical engineers can't deliver. Actually, marketing should step away from highly technical features and services in general. I find it funny that there is no serious marketing involved in selling BWR reactors (which fail too), even though they probably serve about the same number of people as cloud services do nowadays.

Getting back to the topic: as you may know, EC2 failed miserably a couple of weeks ago. It was something that should not happen - at least in many techie minds. The fault in the AWS EC2 cloud was in their EBS storage system, which failed across multiple AWS availability zones within the same AWS region in North Virginia. Think of availability zones as server racks within the same data center, and of regions as different datacenters.

Companies like Foursquare, Tekpub, Quora and others all deployed their solutions to the same Amazon region - North Virginia - and were thus susceptible to problems within that specific datacenter. They could have replicated across different AWS regions, but did not.

Thus, clouds will fail. It's only a matter of time. They will go down. The main thing clouds deliver is a lower probability of failure, not its elimination. Thinking that cloud computing will solve the industry's fears on losing data or deliver 100% uptime is downright imaginary.

Take a look at EC2's SLA. It says 99.95% availability. Microsoft's Azure SLA? 99.9%. That's almost 4.5 hours and almost 9 hours of downtime per year built in, respectively! And we didn't even start to discuss how much junk marketing people will sell.
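
A quick back-of-the-envelope check of those numbers, assuming the SLA is measured over a full year:

using System;

class SlaDowntime
{
    static void Main()
    {
        const double hoursPerYear = 365.0 * 24;   // 8760 hours

        // EC2 SLA, 99.95% availability: roughly 4.4 hours of allowed downtime per year.
        Console.WriteLine((1 - 0.9995) * hoursPerYear);

        // Windows Azure SLA, 99.9% availability: roughly 8.8 hours per year.
        Console.WriteLine((1 - 0.999) * hoursPerYear);
    }
}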

We are still living in an IaaS world, although Microsoft is really pushing PaaS and SaaS hard. Having said that, Windows Azure's goal of 'forget about it, we will save you anyway' currently has a lot more merit than other offerings. It is indeed trying to go the PaaS and SaaS route while abstracting away the physical machines, racks and datacenters.

Categories:  Architecture | Other
Wednesday, May 04, 2011 9:02:27 PM (Central Europe Standard Time, UTC+01:00)  #    Comments

 

 Twitter Handle 

I am more active on Twitter lately, finding it amusing personally.

Handle: http://twitter.com/matevzg

Categories:  Personal
Sunday, March 27, 2011 5:54:45 PM (Central Europe Standard Time, UTC+01:00)  #    Comments

 

 Load Test Tool for Windows Server AppFabric Distributed Cache 

While exploring the high availability (HA) features of Windows Server AppFabric Distributed Cache, I needed to generate enough load in a short timeframe. You know, to kill a couple of servers.

This is what came out of it.

It's a simple command line tool, allowing you to:

  • Add millions of objects of arbitrary size to the cache cluster (using cache.Add())
  • Put objects of arbitrary size into the cache cluster
  • Get objects back
  • Remove objects from the cache
  • Has cluster support
  • Has local cache support
  • Will list configuration
  • Will max out your local processors (using .NET 4 Parallel.For())
  • Will perform gracefully, even in times of trouble

I talked about this at a recent Users Group meeting, doing a live demo of cache clusters under load.

Typical usage scenario is:

  1. Configure a HA cluster
    Remember, 3 nodes minimum, Windows Server 2008 (R2) Enterprise or DataCenter
  2. Configure a HA cache
  3. Edit App.config, list all available servers
  4. Connect to cluster
  5. Put a bunch of large objects (generate load) - see the sketch after this list
    Since AppFabric currently supports only the partitioned cache type, this will distribute the load among all cluster hosts. Thus, each host will store roughly 1/N of the objects.
  6. Stop one node
  7. Get all objects back
    Since the cache is in HA mode, you will get all your objects back even though a host is down - the cluster will redistribute the missing cache regions to the running nodes.
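
The core of the load-generation loop looks roughly like this (a simplified sketch, not the tool's actual source; it assumes a cache named 'default' and a dataCacheClient section listing your hosts in App.config):

using System;
using System.Threading.Tasks;
using Microsoft.ApplicationServer.Caching;

class CacheLoad
{
    static void Main()
    {
        // Cache hosts are read from the dataCacheClient section in App.config.
        DataCacheFactory factory = new DataCacheFactory();
        DataCache cache = factory.GetCache("default");

        const int objectCount = 100000;
        byte[] payload = new byte[64 * 1024];   // 64 KB objects

        // Put: the partitioned cache distributes the objects across all hosts.
        Parallel.For(0, objectCount, i => cache.Put("key-" + i, payload));

        // Get: with HA enabled, this still succeeds after stopping one host.
        Parallel.For(0, objectCount, i => cache.Get("key-" + i));

        Console.WriteLine("Done.");
    }
}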

You can download the tool here.

Categories:  .NET 4.0 - General | Architecture | Microsoft
Thursday, December 09, 2010 2:07:25 PM (Central Europe Standard Time, UTC+01:00)  #    Comments

 

 P != NP Proof Failing 

One of the most important steps needed for computer science to get itself to the next level seems to be fading away.

P vs NP

Actually, the proof is playing hard to get again. This question (whether P=NP or P!=NP) does not want to be answered. It could be that the problem of proving it is also NP-complete.

The (scientific) community wants, even needs, closure. If P != NP were proven, a lot of orthodox legislation around PKI, cryptography and signature/timestamp validity would probably become looser. If P=NP is true, well, s*!t hits the fan.

Categories:  Other
Thursday, August 19, 2010 9:31:28 PM (Central Europe Standard Time, UTC+01:00)  #    Comments

 

 Visual Studio 2010 Beta 2: Why this Lack of Pureness Again? 

Developer pureness is what we should all strive for.

This is not it:

Conflicting with this:

[c:\NetFx4\versiontester\bin\debug]corflags VersionTester.exe
Microsoft (R) .NET Framework CorFlags Conversion Tool.  Version  3.5.21022.8
Copyright (c) Microsoft Corporation.  All rights reserved.

Version   : v4.0.21006
CLR Header: 2.5
PE        : PE32
CorFlags  : 1
ILONLY    : 1
32BIT     : 0
Signed    : 0

Visual Studio should display the version number as 4.0 instead of 4 in the RTM. It needs to display the major and minor numbers of the new framework consistently with 2.0, 3.0 and 3.5.

Otherwise my compulsive disorder kicks in.

Categories:  .NET 4.0 - General
Saturday, October 24, 2009 12:50:47 PM (Central Europe Standard Time, UTC+01:00)  #    Comments

 

 ServiceModel: Assembly Size in .NET 4.0 

I'm just poking around the new framework, dealing with corflags and CLR Header versions (more on that later), but what I found is the following.

Do a dir *.dll /o:-s in the framework directory. You get this on a .NET FX 4.0 Beta 2 box:

[c:\windows\microsoft.net\framework64\v4.0.21006]dir *.dll /o:-s

 Volume in drive C is 7  Serial number is 1ED5:8CA5
 Directory of  C:\Windows\Microsoft.NET\Framework64\v4.0.21006\*.dll

 7.10.2009 3:44   9.833.784  clr.dll
 7.10.2009 2:44   6.072.664  System.ServiceModel.dll
 7.10.2009 2:44   5.251.928  System.Windows.Forms.dll
 7.10.2009 6:04   5.088.584  System.Web.dll
 7.10.2009 5:31   5.086.024  System.Design.dll
 7.10.2009 3:44   4.927.808  mscorlib.dll
 7.10.2009 3:44   3.529.608  Microsoft.VB.Activities.Compiler.dll
 7.10.2009 2:44   3.501.376  System.dll
 7.10.2009 2:44   3.335.000  System.Data.Entity.dll
 7.10.2009 3:44   3.244.360  System.Data.dll
 7.10.2009 2:44   2.145.096  System.XML.dll
 7.10.2009 5:31   1.784.664  System.Web.Extensions.dll
 7.10.2009 2:44   1.711.488  System.Windows.Forms.DataVis.dll
 7.10.2009 5:31   1.697.128  System.Web.DataVis.dll
 7.10.2009 5:31   1.578.352  System.Workflow.ComponentModel.dll
 7.10.2009 3:44   1.540.928  clrjit.dll
 7.10.2009 3:44   1.511.752  mscordacwks.dll
 7.10.2009 3:44   1.454.400  mscordbi.dll
 7.10.2009 2:44   1.339.248  System.Activities.Presentation.dll
 7.10.2009 5:31   1.277.776  Microsoft.Build.dll
 7.10.2009 2:44   1.257.800  System.Core.dll
 7.10.2009 2:44   1.178.448  System.Activities.dll
 7.10.2009 5:31   1.071.464  System.Workflow.Activities.dll
 7.10.2009 5:31   1.041.256  Microsoft.Build.Tasks.v4.0.dll
 7.10.2009 2:44   1.026.920  System.Runtime.Serialization.dll

Funny how many engineer dollars are/were spent on middle tier technologies I love.

Proof: System.ServiceModel.dll is bigger than System.dll, System.Windows.Forms.dll, mscorlib.dll and System.Data.dll.

We only bow to clr.dll, a new .NET 4.0 Common Language Runtime implementation.

Categories:  .NET 4.0 - General | .NET 4.0 - WCF
Saturday, October 24, 2009 12:39:05 PM (Central Europe Standard Time, UTC+01:00)  #    Comments

 

 Talk at Interoperability Day 2009 

Interoperability day is focused on, well, interoperability. Especially between major vendors in government space.

We talked about major issues in long term document preservation formats and the penetration of Office serialization in real world...

... and about the lack of legislative support covering long term electronic document formats and their use.

Here's the PPT [Slovenian].

Categories:  Other | Work
Wednesday, June 03, 2009 2:23:45 PM (Central Europe Standard Time, UTC+01:00)  #    Comments

 

 Speaking at Document Interop Initiative 

On Monday, 05/18/2009 I'm speaking at Document Interop Initiative in London, England.

The title of the talk is High Fidelity Programmatic Access to Document Content, which will cover the following topics:

  • Importance of OOXML as a standards based format, especially in technical issues of long term storage for document content preservation
  • Importance of legal long term storage for document formats
  • Signature and timestamp benefits of long term XML formats
  • Performance and actual cost analysis of using publicly-parsable formats
  • Benefits of having a high fidelity programmatic access to document content, backed with standardization

The event is held at Microsoft Limited, Cardinal Place, 100 Victoria Street SW1E 5JL, London.

Update: Here's the presentation.

Categories:  Microsoft | Work
Saturday, May 16, 2009 7:21:51 PM (Central Europe Standard Time, UTC+01:00)  #    Comments

 

 Designing Clean Object Models with SvcUtil.exe 

There are a couple of situations where one might use svcutil.exe from the command prompt instead of Add Service Reference in Visual Studio.

At least a couple.

I don't know who made the decision (or why) not to support namespace-less proxy generation in Visual Studio. It's a fact that if you add a service reference, you have to specify a nested namespace.

On the other hand, there is an option to make the proxy classes either public or internal, which enables one to hide them from the outside world.

That's a design flaw. There are numerous cases, where you would not want to create another [1] namespace, because you are designing an object model that needs to be clean.

[1] This is only limited to local type system namespace declarations - it does not matter what you are sending over the wire.

Consider the following namespace and class hierarchy that uses a service proxy (MyNamespace.Service class uses it):

Suppose that the Service class uses a WCF service which synchronizes articles and comments with another system. If you use Add Service Reference, there is no way to hide the generated service namespace. One would probably define MyNamespace.ServiceProxy as the proxy namespace and declare a using statement inside the MyNamespace.Service scope:

using MyNamespace.ServiceProxy;

This results in a dirty object model - your clients will see your internals and that shouldn't be an option.

Your class hierarchy now has another namespace, called MyNamespace.ServiceProxy, which is actually only used inside the MyNamespace.Service class.

What one can do is insist that the generated classes are marked internal, hiding them from the outside world but still leaving an empty namespace visible to referencing assemblies.

Leaving only the namespace visible, with no public classes in it, is pure cowardice.

Not good.

Since there is no internal namespace modifier in .NET, you have no other way of hiding your namespace, even if you demand internal classes to be generated by the Add Service Reference dialog.

So, svcutil.exe to the rescue:

svcutil.exe <metadata endpoint>
   /namespace:*,MyNamespace
   /internal
   /out:ServiceProxyClass.cs

The /namespace option allows you to map XML namespaces to CLR namespace declarations. The preceding example maps all existing XML namespaces in the service metadata (hence the *) to the MyNamespace CLR namespace.

The example will generate a service proxy class, mark it internal and put it into the desired namespace.

Your object model will look like this:
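
In code form, here is a minimal sketch of the idea (the type names are hypothetical, and the generated proxy is reduced to a hand-written stub):

namespace MyNamespace
{
    // Stand-in for what svcutil.exe /internal generates: the proxy lives in
    // MyNamespace, but is invisible to referencing assemblies.
    internal class SyncServiceProxy
    {
        public void SynchronizeArticles()
        {
            // The generated WCF call would go here.
        }
    }

    // The only public surface: callers see MyNamespace.Service and nothing else.
    public class Service
    {
        private readonly SyncServiceProxy proxy = new SyncServiceProxy();

        public void Synchronize()
        {
            proxy.SynchronizeArticles();
        }
    }
}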

Currently, this is the only way to hide the internals of service communication behind a proxy while still keeping your object model clean.

Note that this is useful when you want to wrap an existing service proxy or hide an already supported service implementation detail in your object model. Since your client code does not need access to the complete service proxy (or you want to enhance it), it shouldn't be a problem to hide it completely.

Considering the options available during proxy generation, per-method access modifiers would be beneficial too.

Categories:  .NET 3.5 - WCF | Web Services
Friday, April 10, 2009 10:06:43 PM (Central Europe Standard Time, UTC+01:00)  #    Comments

 

 Bug in XmlSerializer, XmlSerializerNamespaces 

XmlSerializer is a great piece of technology. Combined with xsd.exe and friends (XmlSerializerNamespaces, et al.), it's a powerful tool for handling XML instance serialization and deserialization.

But, there is a potentially serious bug present, even in 3.5 SP1 version of the .NET Framework.

Suppose we have the following XML structure:

<Envelope xmlns="NamespaceA"
          xmlns:B="NamespaceB">
  <B:Header></B:Header>
  <Body></Body>
</Envelope>

This tells you that the Envelope and Body elements are in the same namespace (namely 'NamespaceA'), while Header is qualified with 'NamespaceB'.

Now suppose we need to programmatically insert a <B:Header> element into an empty core document.

Core document:

<Envelope xmlns="NamespaceA"
          xmlns:B="NamespaceB">
  <Body></Body>
</Envelope>

Now insert the following node into it (e.g. via XmlNode.InsertBefore()):

<B:Header>...</B:Header>

We should get:

<Envelope xmlns="NamespaceA"
          xmlns:B="NamespaceB">
  <B:Header>...</B:Header>
  <Body></Body>
</Envelope>

To get the part to be inserted, one would serialize (using XmlSerializer) the following Header document:

<B:Header xmlns:B="NamespaceB">
  ...
</B:Header>

To do this, a simple XmlSerializer magic will do the trick:

// The Header instance to serialize and a stream to serialize it into.
Header h = new Header();
MemoryStream ms = new MemoryStream();

XmlSerializerNamespaces xsn = new XmlSerializerNamespaces();
xsn.Add("B", "NamespaceB");

XmlSerializer xSer = new XmlSerializer(typeof(Header));
XmlTextWriter tw = new XmlTextWriter(ms, null);
xSer.Serialize(tw, h, xsn);
tw.Flush();

ms.Seek(0, SeekOrigin.Begin);

XmlDocument doc = new XmlDocument();
doc.Load(ms);
ms.Close();

This generates exactly what we wanted: a prefixed, namespace-qualified XML document, with the B prefix bound to 'NamespaceB'.

Now, if we would import this document fragment into our core document using XmlNode.ImportNode(), we would get:

<Envelope xmlns="NamespaceA"
          xmlns:B="NamespaceB">
  <B:Header xmlns:B="NamespaceB">...</B:Header>
  <Body></Body>
</Envelope>

This is valid and actually, from an XML Infoset point of view, isomorphic to the original document. So what if it's got the same namespace declared twice, right?

Right - until you involve digital signatures. I have described a specific problem with ambient namespaces at length in this blog entry: XmlSerializer, Ambient XML Namespaces and Digital Signatures.

When importing a node from another context, XmlNode and friends do a resolve against all namespace declarations in scope. So, when importing such a header, we shouldn't get a duplicate namespace declaration.

The problem is that we don't actually get a duplicate namespace declaration: XmlSerializer inserts a normal XML attribute into the Header element, which only looks like another namespace declaration. It's not a declaration but a plain old attribute. It's even visible (in this case in XmlElement.Attributes), and it definitely shouldn't be there.

So if you hit this special case, remove all attributes before importing the node into your core document. Like this:

// Same serialization as above.
Header h = new Header();
MemoryStream ms = new MemoryStream();

XmlSerializerNamespaces xsn = new XmlSerializerNamespaces();
xsn.Add("B", "NamespaceB");

XmlSerializer xSer = new XmlSerializer(typeof(Header));
XmlTextWriter tw = new XmlTextWriter(ms, null);
xSer.Serialize(tw, h, xsn);
tw.Flush();

ms.Seek(0, SeekOrigin.Begin);

XmlDocument doc = new XmlDocument();
doc.Load(ms);
ms.Close();

// Strip the bogus attribute before importing the node into the core document.
doc.DocumentElement.Attributes.RemoveAll();

Note that the serialized document representation will not change, since an ambient namespace declaration (B linked to 'NamespaceB') still exists in the XML Infoset of doc XML document.
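
For completeness, here's a minimal, self-contained sketch of the import step itself (the Header fragment is loaded from a literal here purely to keep the sketch standalone; in the real scenario it is the doc produced by the serializer above):

using System;
using System.Xml;

class ImportDemo
{
    static void Main()
    {
        // The core document, as shown above.
        XmlDocument coreDoc = new XmlDocument();
        coreDoc.LoadXml(
            "<Envelope xmlns=\"NamespaceA\" xmlns:B=\"NamespaceB\"><Body></Body></Envelope>");

        // Stand-in for the serialized Header fragment produced above.
        XmlDocument doc = new XmlDocument();
        doc.LoadXml("<B:Header xmlns:B=\"NamespaceB\"></B:Header>");

        // The workaround: drop the attributes before importing.
        doc.DocumentElement.Attributes.RemoveAll();

        // Import into the core document's context and insert before <Body>.
        XmlNode imported = coreDoc.ImportNode(doc.DocumentElement, true);
        coreDoc.DocumentElement.InsertBefore(imported, coreDoc.DocumentElement.FirstChild);

        // B:Header ends up inside Envelope without a duplicate xmlns:B declaration.
        Console.WriteLine(coreDoc.OuterXml);
    }
}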

Categories:  .NET 3.5 - General | XML
Monday, April 06, 2009 1:22:33 PM (Central Europe Standard Time, UTC+01:00)  #    Comments

 
