Then there is Terence Tao, who should get a Nobel. There is no way to get a mathematics
Nobel outside of the Fields Medal [1], which he has already won.
Amadeus #Mozart, go Bob Dylan!
Microsoft Copilot (AI-based, coming to all Office, Bing and communication products)
M3GAN is nice and cute.
Source link: medRxiv
Quite scary... think about it.
Copy and paste from my conversation with https://chat.openai.com.
Response time was under 20 milliseconds. Handling context with no issues. Fascinating.
> So, how are you OpenAI?
I'm doing great! OpenAI is a research laboratory dedicated to finding ways to use
artificial intelligence to benefit humanity. We're making advances in machine learning,
robotics, and other AI technologies, and we're always looking for new opportunities
to make the world a better place.
> What's a garbage collector?
A garbage collector is a program in computer science that is responsible for managing
the memory allocation of a computer system in order to ensure that it runs efficiently.
It frees up memory that is no longer in use and reclaims memory that is being used
inefficiently.
> Should we use it?
Yes, garbage collection is important for the efficient use of memory and can help
improve the performance of a computer system. If a computer system is not using garbage
collection, it’s likely that it will suffer from memory leaks, which can lead to reduced
performance and system instability.
> Hmm, I should have asked differently...
No worries, we're here to help. What other questions do you have about garbage collection?
> What I meant to ask was, should we use it manually or should we leave garbage
collection to platforms we use?
It depends on the platform and the application you are developing. If the platform
has a built-in garbage collection system, you can usually leave it to the platform
to manage memory allocation and reclaim memory. However, in some cases, it might be
necessary to manually implement garbage collection in order to optimize the performance
of the system.
> Could you give me an example of that?
Sure. For example, if you are developing a game, it might be beneficial to manually
implement garbage collection in order to free up memory that is no longer in use and
reclaim memory that is being used inefficiently. This can help keep the game running
smoothly and improve the overall user experience.
> Any other examples?
Yes, garbage collection can also be used in other areas such as web development and
database management. Garbage collection can help reduce the amount of memory that
is used by a web application and can help ensure that a database is running efficiently.
> Any other examples when we should trust the platforms for garbage collection?
In many cases, it is best to trust the platforms for garbage collection. For example,
if you are developing an application that runs on a popular platform such as Android
or iOS, you can usually trust the garbage collection system that is built into the
platform. This will ensure that memory is being managed efficiently and that the application
is running smoothly.
> What do you mean by 'managed efficiently'?
When garbage collection is managed efficiently, it means that memory is being allocated
and reclaimed in a way that maximizes efficiency and minimizes memory leaks. This
helps to ensure that the application runs smoothly and does not suffer from performance
issues due to memory leaks.
> You are quite knowledgeable about the subject.
Thank you for your kind words!
> What happens if memory is not reclaimed?
If memory is not reclaimed, it can lead to memory leaks and decrease the performance
of the application. Memory leaks can also cause the application to crash or become
unstable. To prevent these issues, it is important to ensure that memory is being
reclaimed efficiently with the help of garbage collection.
> So, what did programmers do when there was no automatic garbage collection?
Before the advent of automatic garbage collection, programmers had to manually manage
memory allocation and reclaim memory when necessary. This could be a tedious and time
consuming process, and errors in memory management could lead to memory leaks and
other performance issues.
> What do they do now in C or Lisp or Fortran?
Nowadays, many programming languages such as C, Lisp, and Fortran have built-in garbage
collection systems. This allows programmers to focus more on the application logic,
rather than having to manually manage memory allocation and reclaim memory.
> Which platforms or languages do have automatic garbage collection?
Many popular platforms and languages such as Java, JavaScript, Python, and more have
built-in garbage collection systems. Additionally, many modern operating systems such
as Windows and macOS also have garbage collection systems.
> Does .NET have it?
Yes, .NET has a built-in garbage collection system. The .NET garbage collection system
is responsible for managing memory allocation, reclaiming memory when appropriate,
and ensuring that the application runs efficiently.
> So.. What are generations in terms of .NET garbage collection?
Generations are a concept used in .NET garbage collection to help improve the performance
of the garbage collector. The .NET garbage collector divides objects into three generations
based on their lifetime, and objects that have been alive for longer are placed into
higher generations. This helps the garbage collector to identify which objects are
the most likely to be reclaimed, allowing it to reclaim memory more efficiently.
> Tell me about latency models in this case.
Latency models are used in .NET garbage collection to help reduce the impact that
garbage collection has on the performance of the application. The latency model estimates
the time it will take for the garbage collector to reclaim memory, and then it schedules
garbage collection at times that will have the least impact on the application. This
helps to ensure that the application runs smoothly, even when garbage collection is
running.
> Any thoughts on weak references for GC?
Weak references are a type of reference that can help the garbage collector reclaim
memory more efficiently. Weak references are references to objects that are no longer
strongly referenced, and they allow the garbage collector to reclaim the objects that
they refer to even if they are not strongly referenced. This can help reduce memory
leaks and improve the performance of the application.
> You might just pass a Turing test. What is it?
The Turing test is a test of a machine’s ability to exhibit intelligent behavior that
is indistinguishable from that of a human. It was proposed by Alan Turing in 1950,
and it is still used today to evaluate the performance of AI systems.
> Could you pass it?
My understanding is that I am not the one who is being tested in the Turing test -
it is the machine that is being tested. However, it is possible that I could pass
the Turing test if I was able to exhibit intelligent behavior that is indistinguishable
from that of a human.
> So coming back to GC.. Could you explain the concept of compaction?
Compaction is a process used by the garbage collector to reclaim memory more efficiently.
The garbage collector moves objects around in memory to free up memory that is no
longer in use and to reduce the amount of memory fragmentation. This helps to reduce
memory leaks and improve the performance of the application.
> No, GC as in garbage collection...
Ah, of course! Garbage collection is a process used in computer science to manage
the memory allocation of a computer system in order to ensure that it runs efficiently.
It frees up memory that is no longer in use and reclaims memory that is being used
inefficiently.
> Giving you an option to select between a forced and optimized garbage collection.
When would you choose one or the other?
The choice between forced and optimized garbage collection depends on the application
and the expected workload of the system. Forced garbage collection is used when the
application needs to reclaim memory immediately and the system can tolerate a slight
decrease in performance. Optimized garbage collection is used when the application
can wait for the garbage collector to reclaim memory and the system's performance
needs to be maximized.
> Quite good.
Thank you!
> Could we change topics?
Of course! What would you like to talk about?
> How much power does a 12kW power outlet generate in 12 hours?
A 12kW power outlet will generate 144 kWh (12 kW x 12 hours) of electricity in 12
hours.
> What about 120 hours?
For 120 hours, the 12 kW power outlet will generate 1,440 kWh (12 kW x 120 hours)
of electricity.
> So, a 700MW power plant generates how many times more power that this power outlet?
A 700 MW power plant will generate 4,800,000 kWh of electricity in 12 hours (700 MW
x 12 hours), which is 33,333 times more power than the 12 kW power outlet.
> What are typical examples of such a powerful power plants? Do they run on coal?
Yes, many large power plants are powered by coal. These power plants are typically
used to generate electricity for large cities and regions. Examples of such power
plants include the Taichung Power Plant in Taiwan, which has a capacity of 5,500 MW,
and the Four Corners Power Plant in the US, which has a capacity of 3,520 MW.
> What about nuclear, which one generates the most power?
The Kashiwazaki-Kariwa Nuclear Power Plant in Japan is the largest nuclear power plant
in the world, with a capacity of 7,965 MW. It is followed by the Fukushima Daiichi
Nuclear Power Plant in Japan (4,711 MW) and the Bruce Nuclear Generating Station in
Canada (4,400 MW).
> And in Europe?
In Europe, the largest nuclear power plant is the Kola Nuclear Power Plant in Russia,
with a capacity of 4,800 MW. It is followed by the Cattenom Nuclear Power Plant in
France (3,960 MW) and the Olkiluoto Nuclear Power Plant in Finland (2,800 MW).
> And in Slovenia?
In Slovenia, the only nuclear power plant is the Krško Nuclear Power Plant, with a
capacity of 688 MW.
> Austria?
In Austria, there are no nuclear power plants.
> Ukraine?
In Ukraine, the largest nuclear power plant is the Zaporizhia Nuclear Power Plant,
with a capacity of 6,000 MW.
> You just said Kola nuclear poweplant is the largest in Europe. What gives?
The Kola Nuclear Power Plant in Russia is the largest nuclear power plant in Europe
in terms of capacity, but it is not located in Europe itself. It is located in the
Murmansk region of Russia, which is east of Finland and Norway.
> So, how many reactors does Zaporizhia consist of?
The Zaporizhia Nuclear Power Plant consists of six nuclear reactors.
> Each generating how many megawatts?
Each of the six reactors at the Zaporizhia Nuclear Power Plant generates 1,000 MW
of electricity.
> Thank you. You've been splendid.
You're welcome! It was my pleasure to help.
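For reference, here's a tiny C# sketch (mine, not ChatGPT's) of two of the .NET concepts the conversation touched on - weak references and a forced collection:

using System;

class GcConversationDemo
{
    static void Main()
    {
        // an object referenced only weakly - the GC may reclaim it at any time
        WeakReference weak = new WeakReference(new byte[10 * 1024 * 1024]);
        Console.WriteLine("Alive before collection: " + weak.IsAlive);

        // forced (blocking) collection - reclaims memory immediately, at the cost of a pause
        GC.Collect();
        GC.WaitForPendingFinalizers();

        Console.WriteLine("Alive after collection: " + weak.IsAlive);
        Console.WriteLine("Highest generation: " + GC.MaxGeneration);
    }
}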
Ever wanted to have your Windows desktop background look sharp? Like, Bing Gallery sharp? Let me help you.
Here's a little tool I'm releasing today that allows you to:
It's all in one tool. Small, simple, efficient.
Here's the parameter model it supports:
BingDailyImage v1.0 - Download desktop wallpaper
images from Bing
Copyright © 2021, Matevž Gačnik
www.request-response.com
Gets Bing Picture of the Day images for today and a specified number of days back.
Usage: BingDailyImage [-c] [-d [days]] [-f folder] [-t [top|bottom]]
[-b]
-c Get current Bing image
-d [days] Specifies number of days to fetch.
If you omit this parameter the tool will download
last two weeks (14 days) of Bing wallpapers.
-f Set download folder
If you omit this parameter the folder will be
set to - '%USERPROFILE%\Pictures\Bing Wallpapers'.
-t Add text (image title and description) to images
You can specify text position [top, bottom]. Default is bottom.
-b Set last downloaded image as desktop background
-s install Installs BingDailyImage as a system service
Use -f to specify service download folder path
Use -t to let service add text to images
-s uninstall Uninstalls BingDailyImage as a system service
-s start Starts BingDailyImage service
-s stop Stops BingDailyImage service
-s query Queries BingDailyImage service state
-h Displays help
You can just do a BingDailyImage.exe -c to get the current daily image. By default, it will not tamper with background images, so you'll get the highest resolution available (1920x1200 or 1920x1080), like this:
BingDailyImage v1.0 - Download desktop wallpaper
images from Bing
Copyright © 2021, Matevž Gačnik
www.request-response.com
Downloading Bing Image of the Day for 2021-12-16.
Image date: 2021-12-16
Image title: Old Town in Salzburg, Austria
Image description: When it's lit up like this with a cozy glow, we can admire… When
there's a mountain in your city… We're looking at the Old Town portion
of this Baroque city…
Downloading background... Background for 1920x1200 found.
Saving background... Done for 2021-12-16.
Or do a BingDailyImage.exe -d 10 -t to get the last 10 and add a nice, transparent background text to them.
Hell, do a BingDailyImage.exe -s install and forget about it. It's going to download new images once they are published to Bing's servers. All you need to do now is set your Windows 10/11 desktop background to be fetched from the download folder. Done.
Here's the download.
Here's a sample of a downloaded image.
[1] You might be surprised about the fun facts you'll learn every day.
This is a slightly less technical post, covering my experiences and thoughts on cloud computing as a viable business processing platform.
The recent Amazon EC2 failure gathered a considerable amount of press and discussion coverage. Most discussions revolve around the failure of cloud computing to deliver on its promise to never go down and never lose a bit of information.
This is wrong and has been wrong for a couple of years. Marketing people should not be making promises their technical engineers can't deliver. Actually, marketing should really step away from highly technical features and services in general. I find it funny that there is no serious marketing involved in selling BWR reactors (which fail too), yet they probably serve about the same number of people as cloud services do nowadays.
Getting back to the topic: as you may know, EC2 failed miserably a couple of weeks ago. It was something that should not happen - at least in many techie minds. The fault in the AWS EC2 cloud was with their EBS storage system, which failed across multiple AWS availability zones within the same AWS region in North Virginia. Think of availability zones as server racks within the same data center and regions as different datacenters.
Companies like Twitter, Reddit, Foursquare, Tekpub, Quora and others all deployed their solutions to the same Amazon region - for example, North Virginia - and were thus susceptible to problems within that specific datacenter. They could have replicated across different AWS regions, but did not.
Thus, clouds will fail. It's only a matter of time. They will go down. The main thing clouds deliver is a lower probability of failure, not its elimination. Thinking that cloud computing will solve the industry's fears of losing data or deliver 100% uptime is downright imaginary.
Take a look at EC2's SLA. It says 99.95% availability. Microsoft's Azure SLA? 99.9%. That's roughly four to nine hours of downtime per year built in! And we didn't even start to discuss how much junk marketing people will sell.
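If you want to sanity-check those numbers, here's a trivial sketch (mine, not from either SLA document) that converts an availability percentage into allowed downtime per year:

using System;

class SlaDowntime
{
    static void Main()
    {
        PrintDowntime("Amazon EC2", 99.95);
        PrintDowntime("Windows Azure", 99.9);
    }

    static void PrintDowntime(string name, double availabilityPercent)
    {
        const double hoursPerYear = 365.25 * 24;
        double downtimeHours = hoursPerYear * (1 - availabilityPercent / 100);

        // 99.95% works out to roughly 4.4 hours, 99.9% to roughly 8.8 hours per year
        Console.WriteLine("{0} at {1}%: ~{2:F1} hours of allowed downtime per year",
            name, availabilityPercent, downtimeHours);
    }
}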
We are still in an IaaS world, although companies are really pushing PaaS and SaaS hard. Having said that, Windows Azure's goal of 'forget about it, we will save you anyway' currently has a lot more merit than other offerings. It is indeed trying to go the PaaS and SaaS route while abstracting away the physical machines, racks and local/private datacenters.
One of the most important steps needed for computer science to get itself to the next level seems to be fading away.
Actually, its proof is playing hard to catch again. This question (whether P = NP or P != NP) does not want to be answered. It could be that the problem of proving it is also NP-complete.
The (scientific) community wants, even needs, closure. If P != NP were proven, a lot of orthodox legislation in PKI, cryptography and signature/timestamp validity would probably become looser. If P = NP is true, well, s*!t hits the fan.
While developing with Windows Azure SDK and local Azure development fabric, when things go wrong, they go really wrong.
It could be something as obscure as leaving an empty
The problem is that before Visual Studio attaches a debugger to the IIS worker process, a lot of things can go wrong. What you get is this:
There's no Event Log entry, nothing in a typical dev fabric temp
folder (always check 'C:\Users\
Poking deeper, what you need to do is allow IIS to respond properly. By default IIS only displays complete error details for local addresses. So, to get a detailed report you need to use a local address.
You can get a local address by fiddling with the site binding in IIS Manager and changing what the Azure SDK does, so:
If you hit your web / service role using the new local address (could be something like http://127.0.0.1:82), you are most likely getting the full disclosure, like this:
In this case, a service was dispatched into dev fabric, which had an empty
There are a couple of Windows Azure management tools, scripts and PowerShell commandlets available, but I find Windows Azure Platform Management Tool (MMC snap-in) one of the easiest to install and use for different Windows Azure subscriptions.
The problem is that the tool has not been updated for almost a year and is thus failing when you try to install it on top of the latest Windows Azure SDK (currently v1.6).
Here's the solution.
This is the content from my Bleeding Edge talk on pub/sub broker design and implementation.
Due to constraints of the project (European Commission, funded by the EU) I cannot publicly
distribute the implementation code at this time. I plan to do so after the
review process is done, though I have been advised that this probably won't be
possible.
Specifically, this is:
Anyway, I can discuss the implementation and design decisions, so here's the PPT (in Slovene only).
Downloads: Bleeding Edge 2011, PPTs
Demo front-end: Here
The following is a three day saga of empty 'Turn Windows Features on or off' dialog.
This dialog, as unimportant as it may seem, is the only orifice into Windows subsystem installations without having to cramp up command line msiexec.exe wizardry on obscure system installation folders that nobody wants to understand.
Empty, it looks like this:
First thing anyone should do when it comes to something obscure like this is:
If you still can't control your MSMQ or IIS installation, then you need to find out which of the servicing packages got corrupted somehow.
Servicing packages are Windows Update MSIs, located in hell under HKLM/Software/Microsoft/Windows/CurrentVersion/Component Based Servicing/Packages. I've got a couple thousand under there, so the only question is how to dig the rogue one out.
There's a tool, called System Update Readiness Tool [here] that nobody uses. Its side effect is that it checks peculiarities like this. Run it, then unleash notepad.exe on C:\Windows\Logs\CBS\CheckSUR.log and find something like this:
Checking Windows Servicing Packages
Checking Package Manifests and Catalogs
(f) CBS MUM Corrupt 0x800F0900 servicing\Packages\
Package_4_for_KB2446710~31bf3856ad364e35~amd64~~6.1.1.3.mum Line 1:
(f) CBS Catalog Corrupt 0x800B0100 servicing\Packages\
Package_4_for_KB2446710~31bf3856ad364e35~amd64~~6.1.1.3.cat
Then find the package in the registry, take ownership of the node, set permissions so
you can delete it, and delete it. Your OptionalFeatures.exe works
again, and it took only 10 minutes.
I am more active on Twitter lately, finding it amusing personally.
Handle: http://twitter.com/matevzg
During exploration of high availability (HA) features of Windows Server AppFabric Distributed Cache I needed to generate enough load in a short timeframe. You know, to kill a couple of servers.
This is what came out of it.
It's a simple command line tool, allowing you to:
I talked about this at a recent Users Group meeting, doing a live demo of cache clusters under load.
Typical usage scenario is:
You can download the tool here.
Developer pureness is what we should all strive for.
This is not it:
Conflicting with this:
[c:\NetFx4\versiontester\bin\debug]corflags VersionTester.exe
Microsoft (R) .NET Framework CorFlags Conversion Tool. Version 3.5.21022.8
Copyright (c) Microsoft Corporation. All rights reserved.
Version : v4.0.21006
CLR Header: 2.5
PE : PE32
CorFlags : 1
ILONLY : 1
32BIT : 0
Signed : 0
Visual Studio must display a version number of 4.0 instead of 4 in the RTM. It needs to display the major and minor version of the new framework consistently with 2.0, 3.0 and 3.5.
Otherwise my compulsive disorder kicks in.
I'm just poking around the new framework, dealing with corflags and CLR Header versions (more on that later), but what I found is the following.
Do a dir *.dll /o:-s in the framework directory. You get this on a .NET FX 4.0 beta 2 box:
[c:\windows\microsoft.net\framework64\v4.0.21006]dir *.dll /o:-s
Volume in drive C is 7 Serial number is 1ED5:8CA5
Directory of C:\Windows\Microsoft.NET\Framework64\v4.0.21006\*.dll
7.10.2009 3:44 9.833.784 clr.dll
7.10.2009 2:44 6.072.664 System.ServiceModel.dll
7.10.2009 2:44 5.251.928 System.Windows.Forms.dll
7.10.2009 6:04 5.088.584 System.Web.dll
7.10.2009 5:31 5.086.024 System.Design.dll
7.10.2009 3:44 4.927.808 mscorlib.dll
7.10.2009 3:44 3.529.608 Microsoft.VB.Activities.Compiler.dll
7.10.2009 2:44 3.501.376 System.dll
7.10.2009 2:44 3.335.000 System.Data.Entity.dll
7.10.2009 3:44 3.244.360 System.Data.dll
7.10.2009 2:44 2.145.096 System.XML.dll
7.10.2009 5:31 1.784.664 System.Web.Extensions.dll
7.10.2009 2:44 1.711.488 System.Windows.Forms.DataVis.dll
7.10.2009 5:31 1.697.128 System.Web.DataVis.dll
7.10.2009 5:31 1.578.352 System.Workflow.ComponentModel.dll
7.10.2009 3:44 1.540.928 clrjit.dll
7.10.2009 3:44 1.511.752 mscordacwks.dll
7.10.2009 3:44 1.454.400 mscordbi.dll
7.10.2009 2:44 1.339.248 System.Activities.Presentation.dll
7.10.2009 5:31 1.277.776 Microsoft.Build.dll
7.10.2009 2:44 1.257.800 System.Core.dll
7.10.2009 2:44 1.178.448 System.Activities.dll
7.10.2009 5:31 1.071.464 System.Workflow.Activities.dll
7.10.2009 5:31 1.041.256 Microsoft.Build.Tasks.v4.0.dll
7.10.2009 2:44 1.026.920 System.Runtime.Serialization.dll
Funny how many engineer dollars are/were spent on middle tier technologies I love.
Proof: System.ServiceModel.dll is bigger than System.dll, System.Windows.Forms.dll, mscorlib.dll, and System.Data.dll.
We only bow to clr.dll, a new .NET 4.0 Common Language Runtime implementation.
Interoperability day is focused on, well, interoperability. Especially between major vendors in government space.
We talked about major issues in long term document preservation formats and the penetration of Office serialization in the real world...
... and the lack of support from legislation covering long term electronic document formats and their use.
Here's the PPT [Slovenian].
On Monday, 05/18/2009 I'm speaking at Document Interop Initiative in London, England.
Title of the talk is: High Fidelity Programmatic Access to Document Content, which will cover the following topics:
The event is held at Microsoft Limited, Cardinal Place, 100 Victoria Street SW1E 5JL, London.
Update: Here's the presentation.
There are a couple situations where one might use svcutil.exe in command prompt vs. Add Service Reference in Visual Studio.
At least a couple.
I don't know who made the decision (or why) not to support namespace-less proxy generation in Visual Studio. It's a fact that if you make a service reference, you have to specify a nested namespace.
On the other hand, there is an option to make the proxy class either public or internal, which enables one to hide it from the outside world.
That's a design flaw. There are numerous cases where you would not want to create another [1] namespace, because you are designing an object model that needs to be clean.
[1] This is only limited to local type system namespace declarations - it does not matter what you are sending over the wire.
Consider the following namespace and class hierarchy that uses a service proxy (MyNamespace.Service class uses it):
Suppose that the Service class uses a WCF service, which synchronizes articles and comments with another system. If you use Add Web Reference, there is no way to hide the generated service namespace. One would probably define MyNamespace.MyServiceProxy as a namespace and declare a using statement in the MyNamespace.MyService scope with:
using MyNamespace.MyServiceProxy;
This results in a dirty object model - your clients will see your internals and that shouldn't be an option.
Your class hierarchy now has another namespace, called MyNamespace.ServiceProxy, which is actually only used inside the MyNamespace.Service class.
What one can do is insist that the generated classes are marked internal, hiding the classes from the outside world, but still leaving an empty namespace visible for instantiating assemblies.
Leaving only the namespace visible, with no public classes in it, is pure cowardice.
Not good.
Since there is no internal namespace modifier in .NET, you have no other way of hiding your namespace, even if you demand internal classes to be generated by the Add Service Reference dialog.
So, svcutil.exe to the rescue:
svcutil.exe <metadata endpoint>
/namespace:*, MyNamespace
/internal
/out:ServiceProxyClass.cs
The /namespace option allows you to map between XML and CLR namespace declarations. The preceding example maps all (therefore *) existing XML namespaces in the service metadata to the MyNamespace CLR namespace.
The example will generate a service proxy class, mark it internal and put it into the desired namespace.
Your object model will look like this:
Currently, this is the only way to hide the internals of service communication with a proxy and still be able to keep your object model clean.
Note that this is useful when you want to wrap an existing service proxy or hide an already supported service implementation detail in your object model. Since your client code does not need access to the complete service proxy (or you want to enhance it), it shouldn't be a problem to hide it completely.
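To illustrate the wrapping part, here's a rough sketch of what I mean. The class and method names (ServiceProxyClient, SyncArticle) are made up and stand in for whatever svcutil.exe generated with /internal /namespace:*,MyNamespace:

namespace MyNamespace
{
    // Public surface of the object model - this is all the client sees.
    public class Service
    {
        public void SynchronizeArticle(string articleXml)
        {
            // The generated proxy is internal and never leaks out of this assembly.
            var proxy = new ServiceProxyClient();
            try
            {
                proxy.SyncArticle(articleXml);   // hypothetical service operation
                proxy.Close();
            }
            catch
            {
                proxy.Abort();
                throw;
            }
        }
    }
}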
Considering options during proxy generation, per method access modifiers would be beneficial too.
XmlSerializer is a great piece of technology. Combined with xsd.exe and friends (XmlSerializerNamespaces, et al.) it's a powerful tool to get around XML instance serialization/deserialization.
But, there is a potentially serious bug present, even in 3.5 SP1 version of the .NET Framework.
Suppose we have the following XML structure:
<Envelope xmlns="NamespaceA"
xmlns:B="NamespaceB">
<B:Header></B:Header>
<Body></Body>
</Envelope>
This tells you that the Envelope and Body elements are in the same namespace (namely 'NamespaceA'), while Header is qualified with 'NamespaceB'.
Now suppose we need to programmatically insert the <B:Header> element into an empty core document.
Core document:
<Envelope xmlns="NamespaceA"
xmlns:B="NamespaceB">
<Body></Body>
</Envelope>
Now insert the following node into it:
<B:Header>...</B:Header>
We should get:
<Envelope xmlns="NamespaceA"
xmlns:B="NamespaceB">
<B:Header>...</B:Header>
<Body></Body>
</Envelope>
To get the to be inserted part one would serialize (using XmlSerializer) the following Header document:
<B:Header xmlns:B="NamespaceB">
...
</B:Header>
To do this, a simple XmlSerializer magic will do the trick:
Header h = new Header();                    // the object to serialize
MemoryStream ms = new MemoryStream();
XmlSerializerNamespaces xsn = new XmlSerializerNamespaces();
xsn.Add("B", "NamespaceB");
XmlSerializer xSer = new XmlSerializer(typeof(Header));
XmlTextWriter tw = new XmlTextWriter(ms, null);
xSer.Serialize(tw, h, xsn);
ms.Seek(0, SeekOrigin.Begin);
XmlDocument doc = new XmlDocument();
doc.Load(ms);
ms.Close();
This would generate exactly what we wanted. A prefixed namespace based XML document, with the B prefix bound to 'NamespaceB'.
Now, if we would import this document fragment into our core document using XmlNode.ImportNode(), we would get:
<Envelope xmlns="NamespaceA"
xmlns:B="NamespaceB">
<B:Header xmlns:B="NamespaceB">...</B:Header>
<Body></Body>
</Envelope>
Which is valid and actually, from an XML Infoset view, an isomorphic document to the original. So what if it's got the same namespace declared twice, right?
Right - until you involve digital signatures. I have described a specific problem with ambient namespaces in length in this blog entry: XmlSerializer, Ambient XML Namespaces and Digital Signatures.
When importing a node from another context, XmlNode and friends do a resolve against all namespace declarations in scope. So, when importing such a header, we shouldn't get a duplicate namespace declaration.
The problem is, we don't get a duplicate namespace declaration, since XmlSerializer actually inserts a normal XML attribute into the Header element. That's why we seem to get another namespace declaration. It's actually not a declaration but a plain old attribute. It's even visible (in this case in XmlElement.Attributes), and it definitely shouldn't be there.
So if you hit this special case, remove all attributes before importing the node into your core document. Like this:
Header h = new Header();                    // the object to serialize
MemoryStream ms = new MemoryStream();
XmlSerializerNamespaces xsn = new XmlSerializerNamespaces();
xsn.Add("B", "NamespaceB");
XmlSerializer xSer = new XmlSerializer(typeof(Header));
XmlTextWriter tw = new XmlTextWriter(ms, null);
xSer.Serialize(tw, h, xsn);
ms.Seek(0, SeekOrigin.Begin);
XmlDocument doc = new XmlDocument();
doc.Load(ms);
ms.Close();
// strip the attribute that only looks like a namespace declaration
doc.DocumentElement.Attributes.RemoveAll();
Note that the serialized document representation will not change,
since an ambient namespace declaration (B linked to 'NamespaceB') still exists in
the XML Infoset of doc XML document.
DOW is down almost 700, again. And...
... I just have to post this. Especially when almost everything (besides devaluation of the safe currency) has already been done.
If you had bought $1,000.00 of Nortel stock one year ago, it would now be worth $49.00.
With Enron, you would have $16.50 of the original $1,000.00.
With MCI/Worldcom, you would have less than $5.00 left.
If you had bought $1,000.00 worth of Miller Genuine Draft (the beer, not the stock) one year ago, drunk all the beer then turned in the cans for the 10-cent deposit, you would have $214.00.
Based on the above, 401KegPlan.com's current investment advice is to take that $5.00 you have left over and drink lots and lots of beer and recycle.
We are going over the edge. We as a world economy.
And yea, I opened a keg of Guinness.
I'm sorry it took a week, but here we go.
This is my exit content from Bleeding Edge 2008. I'm also posting complete conference contents, just in case.
Thanks go out to Dušan, Dejan, Miha, Miha and Miha.
Downloads:
Remark: PPT in Slovene only. Code international.
Thank you for attending. Hope to see you next year!
Recently I needed to specify exactly how I would like a specific class serialized. Suppose we have the following simple schema:
<xs:schema targetNamespace="http://my.favourite.ns/person.xsd"
           elementFormDefault="qualified"
           xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns="http://my.favourite.ns/person.xsd">
<xs:complexType name="Person">
<xs:sequence>
<xs:element name="Name" type="xs:string" />
<xs:element name="Surname" type="xs:string" />
<xs:element name="Age" type="xs:int" minOccurs="0" />
</xs:sequence>
</xs:complexType>
</xs:schema>
Let's call this schema President.xsd.
A valid XML instance against this schema would be, for example:
<Person xmlns="http://my.favourite.ns/person.xsd">
<Name>Barrack</Name>
<Surname>Obama</Surname>
<Age>47</Age>
</Person>
Since we are serializing against a specific XML schema (XSD), we have an option of schema compilation:
xsd /c President.xsd
This, obviously, yields a programmatic type system result in the form of a C# class. All well and done.
Now.
If we serialize the filled up class instance back to XML, we get a valid XML instance. It's valid against President.xsd.
There is a case where your schema changes ever so slightly - read: the namespaces change - and you don't want to recompile the entire solution to support this, but you still want to use XML serialization. Who doesn't? So what do you do?
Suppose we want to get the following back, when serializing:
<PresidentPerson xmlns="http://schemas.gama-system.com/president.xsd">
<Name>Barrack</Name>
<Surname>Obama</Surname>
<Age>47</Age>
</PresidentPerson>
There is an option to override the default serialization technique of XmlSerializer. Enter the world of XmlAttributes and XmlAttributeOverrides:
private XmlSerializer GetOverridedSerializer()
{
    // set overrides for person element
    XmlAttributes attrsPerson = new XmlAttributes();
    XmlRootAttribute rootPerson =
        new XmlRootAttribute("PresidentPerson");
    rootPerson.Namespace =
        "http://schemas.gama-system.com/president.xsd";
    attrsPerson.XmlRoot = rootPerson;

    // create overrider
    XmlAttributeOverrides xOver = new XmlAttributeOverrides();
    xOver.Add(typeof(Person), attrsPerson);

    XmlSerializer xSer = new XmlSerializer(typeof(Person), xOver);

    return xSer;
}
Now serialize normally:
Person person = new Person();               // fill up Name, Surname and Age here
XmlSerializer xSer = GetOverridedSerializer();
Stream ms = new MemoryStream();
XmlTextWriter tw = new XmlTextWriter(ms, null);
xSer.Serialize(tw, person);
This will work even if you only have a compiled version of your object graph, and you don't have any sources. System.Xml.Serialization.XmlAttributeOverrides class allows you to adorn any XML serializable class with your own XML syntax - element names, attribute names, namespaces and types.
Remember - you can override them all and still serialize your angle
brackets.
I'm happy to announce our organization work is coming to fruition.
Bleeding Edge 2008 is taking the stage for the autumn season.
Portorož, Slovenia, October 1st, 9:00
Official site, Registration, Sponsors
Go for the early bird registration (till September 12th). The time is now.
Potential sponsor? Here's the offering.
One of our core products, Gama System eArchive was accredited last week.
This is the first accreditation of a domestic product and the first one covering long term electronic document storage in a SOA based system.
Every document stored inside the Gama System eArchive product is now legally valid. No questions asked.
Accreditation is done by a national body and represents the last step in a formal acknowledgement to holiness.
That means a lot to me, even more to our company.
The following blog entries were (in)directly inspired by the development of this product:
We've made a lot of effort to get this thing developed and accredited. The certificate is here.
This, this, this, this, this, this, this, this, this and those are direct approvals of our correct decisions.
This is brilliant.
Sysinternals tools are now (actually, they already were when I left for vacation) available live via a web (HTTP and WebDAV) based resource at http://live.sysinternals.com and \\live.sysinternals.com.
This means I can do the following:
[c:\]dir \\live.sysinternals.com\tools
Directory of \\live.sysinternals.com\tools\ *
2.06.2008 1:16 <DIR> .
2.06.2008 1:16 <DIR> ..
2.06.2008 1:16 <DIR> WindowsInternals
30.05.2008 17:55 668 About_This_Site.txt
13.05.2008 19:00 225.320 accesschk.exe
1.11.2006 15:06 174.968 AccessEnum.exe
1.11.2006 23:05 121.712 accvio.EXE
12.07.2007 7:26 50.379 AdExplorer.chm
26.11.2007 14:21 422.952 ADExplorer.exe
7.11.2007 11:13 401.616 ADInsight.chm
20.11.2007 14:25 1.049.640 ADInsight.exe
1.11.2006 15:05 150.328 adrestore.exe
1.11.2006 15:06 154.424 Autologon.exe
8.05.2008 10:20 48.476 autoruns.chm
12.05.2008 17:31 622.632 autoruns.exe
...
1.11.2006 15:06 207.672 Winobj.exe
30.12.1999 12:26 7.653 WINOBJ.HLP
27.05.2008 16:21 142.376 ZoomIt.exe
24.185.901 bytes in 103 files and 3 dirs
109.442.727.936 bytes free
Or, I can fire up a Windows Explorer window (or press the start key, then type) and just type: \\live.sysinternals.com\tools.
Or:
[c:\]copy \\live.sysinternals.com\tools\Procmon.exe
C:\Windows\System32
\\live.sysinternals.com\tools\Procmon.exe =>
C:\Windows\System32\Procmon.exe
1 file copied
Brilliant and useful.
As promised, here are the sources from my NTK 2008 sessions [1].
Talk: Document Style Service Interfaces
Read the following blog entry, where I tried to describe the concept in detail. Also, this blog post discusses issues when using large document parameters over reliable transport (WS-RM) channels.
Demo: Document Style Service Interfaces [Download]
This demo defines a service interface with the document parameter model, i.e. Guid CreatePerson(XmlDocument person). It shows three different approaches to creating the passed document:
Also, versioned schemas for the Person document are shown, including the support for document validation and version independence.
Talk: Windows Server 2008 and Transactional NTFS
This blog entry describes the concept.
Demo 1: Logging using CLFS (Common Log File System) [Download]
Demo 2: NTFS Transactions using the File System, SQL, WCF [Download]
Demo 3: NTFS Transactions using the WCF, MTOM Transport [Download] [2]
[1] All sources are in VS 2008 solution file format.
[2] This transaction spans from the client, through the service boundary, to the server.
Suppose we have a document like this:
<?xml version="1.0"?>
<root xmlns="urn-foo-bar">
  <subroot>
    <value1>value1</value1>
    <value2>value2</value2>
  </subroot>
  <Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
    <SignedInfo>
      <CanonicalizationMethod
        Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315" />
      <SignatureMethod
        Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1" />
      <Reference URI="">
        <Transforms>
          <Transform
            Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature" />
        </Transforms>
        <DigestMethod
          Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" />
        <DigestValue>1Xp...EOko=</DigestValue>
      </Reference>
    </SignedInfo>
    <SignatureValue>nls...cH0k=</SignatureValue>
    <KeyInfo>
      <KeyValue>
        <RSAKeyValue>
          <Modulus>9f3W...fxG0E=</Modulus>
          <Exponent>AQAB</Exponent>
        </RSAKeyValue>
      </KeyValue>
      <X509Data>
        <X509Certificate>MIIEi...ktYgN</X509Certificate>
      </X509Data>
    </KeyInfo>
  </Signature>
</root>
This document represents data and an enveloped digital signature over the complete XML document. The completeness of the digital signature is defined in the Reference element, which has its URI attribute set to an empty string (Reference URI="").
Checking the Signature
The following sequence should always be applied during signature validation: first check the signature value itself, then the signing certificate, then the certificate chain.
Note: In most situations this is the optimal validation sequence. Why? Signatures are broken far more frequently than certificates are revoked/expired. And certificates are revoked/expired far more frequently than their chains.
1. Validating the digital signature
First, get it out of there:
XmlNamespaceManager xmlns = new XmlNamespaceManager(xdkDocument.NameTable); [1]
xmlns.AddNamespace("ds", "http://www.w3.org/2000/09/xmldsig#");
XmlNodeList nodeList = xdkDocument.SelectNodes("//ds:Signature", xmlns);
[1] xdkDocument should be an XmlDocument instance representing your document.
Second, construct a SignedXml instance:
foreach (XmlNode xmlNode in nodeList)
{
    // create signed xml object
    SignedXml signedXml = new SignedXml(xdkDocument); [2]

    // verify signature
    signedXml.LoadXml((XmlElement)xmlNode);
}
[2] Note that we are constructing the SignedXml instance from the complete document, not only the signature. Read this.
Third, validate:
bool booSigValid = signedXml.CheckSignature();
If booSigValid is true, proceed.
2. Validating the certificate(s) used to create the signature
First, get it out of there:
XmlNode xndCert = xmlNode.SelectSingleNode(".//ds:X509Certificate", xmlns); [3]
[3] There can be multiple X509Certificate elements qualified with the http://www.w3.org/2000/09/xmldsig# namespace in there. The XML Digital Signature specification allows serialization of the complete certificate chain of the certificate used to sign the document. Normally, the signing certificate should be the first one serialized.
Second, get the X509Certificate2 instance:
byte[] bytCert = Convert.FromBase64String(xndCert.InnerText);
X509Certificate2 x509cert = new X509Certificate2(bytCert);
Third, validate:
bool booCertValid = x509cert.Verify();
If booCertValid is true, proceed.
3. Validating the certificate(s) chain(s)
Building and validating the chain:
X509Chain certChain = new X509Chain();
bool booChainValid = certChain.Build(x509cert);
int intChainLength = certChain.ChainElements.Count; [4]
If booChainValid is true, your signature is valid.
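Putting the three steps together, a condensed sketch (a single enveloped signature, first certificate element only, boolean names as above) could look like this:

using System;
using System.Security.Cryptography.X509Certificates;
using System.Security.Cryptography.Xml;
using System.Xml;

static class EnvelopeValidator
{
    public static void Validate(XmlDocument xdkDocument)
    {
        XmlNamespaceManager xmlns = new XmlNamespaceManager(xdkDocument.NameTable);
        xmlns.AddNamespace("ds", "http://www.w3.org/2000/09/xmldsig#");

        foreach (XmlNode xmlNode in xdkDocument.SelectNodes("//ds:Signature", xmlns))
        {
            // 1. validate the signature value itself
            SignedXml signedXml = new SignedXml(xdkDocument);
            signedXml.LoadXml((XmlElement)xmlNode);
            bool booSigValid = signedXml.CheckSignature();

            // 2. validate the signing certificate (first X509Certificate element)
            XmlNode xndCert = xmlNode.SelectSingleNode(".//ds:X509Certificate", xmlns);
            X509Certificate2 x509cert = new X509Certificate2(Convert.FromBase64String(xndCert.InnerText));
            bool booCertValid = x509cert.Verify();

            // 3. validate the certificate chain
            X509Chain certChain = new X509Chain();
            bool booChainValid = certChain.Build(x509cert);

            Console.WriteLine("Signature: {0}, Certificate: {1}, Chain: {2}",
                booSigValid, booCertValid, booChainValid);
        }
    }
}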
Some Rules and Some Laws
We have three booleans: booSigValid, booCertValid and booChainValid.
If booSigValid evaluates to false, there is no discussion. Someone changed the document.
What happens if one of the following two expressions evaluates to true:
1. ((booSigValid) && (!booCertValid) && (!booChainValid))
2. ((booSigValid) && (booCertValid) && (!booChainValid))
This normally means that either the certificate is not valid (CRLed or expired) [4], or one of the chain's certificates is not valid or has expired.
[4] The premise is that one checked the signature according to 1, 2, 3 schema described above.
The Question
Is digital signature valid even if CA revoked the certificate after the signature has already been done? Is it valid even after the certificate expires? If signature is valid and certificate has been revoked, what is the legal validity of the signature?
In legal terms, the signature would be invalid under both of the assertions above, 1 and 2.
This means that once the generator of the signature is dead, or one of its predecessors is dead, all of its children die too.
Timestamps to the Rescue
According to most countries' digital signature laws, a signature is valid only during the validity of the signing certificate and the validity of the signing certificate's chain, both being checked for revocation and expiry dates ... if you don't timestamp it.
If the source document has another signature from a trusted authority, and that authority is a timestamp authority, it would look like this:
<?xml version="1.0"?>
<root xmlns="urn-foo-bar">
  <subroot>
    <value1>value1</value1>
    <value2>value2</value2>
  </subroot>
  <Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
    ...
  </Signature>
  <dsig:Signature Id="TimeStampToken"
                  xmlns:dsig="http://www.w3.org/2000/09/xmldsig#">
    <dsig:SignedInfo>
      <dsig:CanonicalizationMethod
        Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315" />
      <dsig:SignatureMethod
        Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1" />
      <dsig:Reference
        URI="#TimeStampInfo-113D2EEB158BBB2D7CC000000000004DF65">
        <dsig:DigestMethod
          Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" />
        <dsig:DigestValue>y+xw...scKg=</dsig:DigestValue>
      </dsig:Reference>
      <dsig:Reference URI="#TimeStampAuthority">
        <dsig:DigestMethod
          Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" />
        <dsig:DigestValue>KhFIr...Sv4=</dsig:DigestValue>
      </dsig:Reference>
    </dsig:SignedInfo>
    <dsig:SignatureValue>R4m...k3aQ==</dsig:SignatureValue>
    <dsig:KeyInfo Id="TimeStampAuthority">
      <dsig:X509Data>
        <dsig:X509Certificate>MII...Osmg==</dsig:X509Certificate>
      </dsig:X509Data>
    </dsig:KeyInfo>
    <dsig:Object
      Id="TimeStampInfo-113D2EEB158BBB2D7CC000000000004DF65">
      <ts:TimeStampInfo
        xmlns:ts="http://www.provider.com/schemas/timestamp-protocol-20020207"
        xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
        <ts:Policy id="http://provider.tsa.com/documents" />
        <ts:Digest>
          <ds:DigestMethod
            Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" />
          <ds:DigestValue>V7+bH...Kmsec=</ds:DigestValue>
        </ts:Digest>
        <ts:SerialNumber>938...045</ts:SerialNumber>
        <ts:CreationTime>2008-04-13T11:31:42.004Z</ts:CreationTime>
        <ts:Nonce>121...780</ts:Nonce>
      </ts:TimeStampInfo>
    </dsig:Object>
  </dsig:Signature>
</root>
The second signature would be performed by an out-of-band authority, normally a TSA authority. It would only sign a hash value (in this case SHA1 hash) which was constructed by hashing the original document and the included digital signature.
This (second) signature should be checked using the same 1, 2, 3 steps. For the purpose of this mind experiment, let's say it would generate a booTimestampValid boolean.
Now, let's reexamine the booleans:
In this case, even though the signature's certificate (or its chain) is invalid, the signature would pass legal validity if the timestamp's signature is valid, together with its certificate and certificate chain. Note that the TSA signature is generated with a different set of keys than the original digital signature.
Actually booTimestampValid is defined as ((booSigValid) && (booCertValid) && (booChainValid)) for the timestamp signature/certificate/certificate chain [5].
[5] Legal validity is guaranteed only in cases where 1 or 2 are true.
There is a serious limitation present in the RTM version of WCF 3.0/3.5 regarding control of WS-RM retry messages during a reliable session saga.
Let me try to explain the concept.
We have a sender (communication initiator) and a receiver (service). When a reliable session is constructed between the two, every message needs to come to the other side. In a request-reply world, the sender would be a client during the request phase. Then roles would switch during the response phase.
The problem arises when one of the sides does not get the message acknowledgement in time. WCF reliable messaging implementation retries the sending process and hopes for the acknowledgement. All is well.
The problem is that there is no way for the sending application to specify how long the retry timeout should be. There is a way to specify channel opening and closing timeouts, the acknowledgement interval and more, but nothing will define how long the initiator should wait for message acks.
Let's talk about how WCF acknowledges messages.
During a request-reply exchange every request message is acknowledged in a response message. WS-RM SOAP headers regarding sequence definition (request) and acknowledgements (response) look like this:
a1 <r:Sequence s:mustUnderstand="1" u:Id="_2">
a2 <r:Identifier>urn:uuid:6c9d...ca90</r:Identifier>
a3 <r:MessageNumber>1</r:MessageNumber>
a4 </r:Sequence>
b1 <r:SequenceAcknowledgement u:Id="_3">
b2 <r:Identifier>urn:uuid:6c99...ca290</r:Identifier>
b3 <r:AcknowledgementRange Lower="1" Upper="1"/>
b4 <netrm:BufferRemaining
b5    xmlns:netrm="http://schemas.microsoft.com/ws/2006/05/rm">
b6 </netrm:BufferRemaining>
b7 </r:SequenceAcknowledgement>
The request phase defines a sequence and sends the first message (a3). In the response, there is the appropriate acknowledgement present, which acks the first message (b3) with the Lower and Upper attributes. Lines b4-b6 define a benign and super useful WCF implementation of flow control, which allows the sender to limit the rate of sent messages if the service side becomes congested.
When the session is set up, WCF has a really small waiting window for acks. Therefore, if an ack is not received during this period, the infrastructure will retry the message.
Duplex contracts work slightly differently. There, the acknowledgement interval can be set. This configuration option (config attribute is called acknowledgementInterval) is named inappropriately, since it controls the service and not the client side.
It does not define the time limit on received acknowledgements, but the necessary time to send the acknowledgments back. It allows grouping of sent acks, so that multiple incoming messages can be acked together. Also, the infrastructure will not necessarily honor the specified value.
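For the record, this knob lives on the reliable session binding element. A code-based equivalent (just a sketch - the interval values are arbitrary and the transport/encoding pair is only an example) would be:

using System;
using System.ServiceModel.Channels;

class ReliableSessionBindingFactory
{
    static Binding Build()
    {
        // ordered reliable session with an explicit ack grouping hint
        ReliableSessionBindingElement reliableSession = new ReliableSessionBindingElement(true)
        {
            // service-side hint for grouping acks; the infrastructure may not honor it exactly
            AcknowledgementInterval = TimeSpan.FromMilliseconds(200),
            InactivityTimeout = TimeSpan.FromMinutes(10)
        };

        return new CustomBinding(
            reliableSession,
            new TextMessageEncodingBindingElement(),
            new HttpTransportBindingElement());
    }
}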
Now consider the following scenario:
[1] It does not matter whether the initiator is on a dial up, or the message is huge.
What happens?
Service initiator sets up a reliable session, then:
[2] Under 3s.
The number of retry messages depends on the bandwidth and message size. It can happen that tens of messages will be sent before the first acknowledgement is received.
Adaptability algorithms
A good thing is that there are undocumented adaptive algorithms implemented for the retry timeout. The implementation increases the retry timeout exponentially when the infrastructure detects that network conditions demand more time (slow throughput) and allow reliable delivery (no losses). If losses are present, the retry timeout decreases.
Retry timeout is actually calculated when establishing an RM session. It is based on the roundtrip time and is bigger if the roundtrip time is long.
So, when first messages in a session are exchanged, don't be too surprised to see a couple of message retries.
Wow, Stephen.
This is one of the best ideas I've heard for hedging against the dollar in terms of IT outsourcing cost. And I mean it.
I'm not in a position to evaluate the description made, but I am willing to take the pill, no matter what.
What everybody needs is only to get to a one million sterling project, taking half a year.
That's it, hedging done or not.
This week XML is ten years old. The core XML 1.0 specification was released in February 1998.
It's a nice anniversary to have.
The XML + Namespaces specification has a built-in namespace declaration of http://www.w3.org/XML/1998/namespace. That's an implicit namespace declaration, a special one, governing all others. One namespace declaration to rule them all. Bound to the xml: prefix.
XML was born and published as a W3C Recommendation on the 10th of February 1998.
So, well done, XML. You did a lot for the IT industry in the past decade.
If you're into Silverlight, and you should be, check out http://www.silverlightchallenge.eu, and especially sign up on http://slovenia.silverlightchallenge.eu and join one of our many developers who will participate in this competition.
More here.
Mr. Lawrence Lessig is the founder of the Stanford Center for Internet and Society. He's also a chairman of the Creative Commons organization.
Lessig is one of the most effective speakers in the world, a professor at Stanford, who tries to make this world a better place by taking on the stupidity of current copyright law.
The following is published on the CC's site:
We use private rights to create public goods: creative works set free for certain uses. ... We work to offer creators a best-of-both-worlds way to protect their works while encouraging certain uses of them — to declare “some rights reserved.”
Therefore Creative Commons stands for the mantra of some rights reserved and not all rights reserved in terms of meaningful usage of digital technology.
Being a libertarian myself, I cannot oppose these stands. Balance and compromise are good things for content and intellectual products, such as software.
Watch his masterpiece, delivered at TED.
We talked today with Miha, a C# Yoda, via IM, and everything seemed pointless. We couldn't find a good case for the cause of the following mind experiment:
using System;
class Tubo
{
public static void Test() {}
private void Test() {}
}
Note: I'm using Miha's syntax in this post.
We have a static method called Test and an instance method, also called Test. Parameter models of both methods are the same, empty.
Would this compile?
It does not.
The question is why, and who or what causes this. There is actually no rationale behind not allowing this to compile, since both the compiler and the runtime know the method info upfront.
Since the runtime has all the information it needs, it is strange that this would not be allowed at compile time. However, a (C#) compiler has to determine whether you, the programmer, meant to access a static or an instance method.
Here lie the dragons.
It is illegal in most virtual machine based languages to have the same method name and signature for a static/instance method.
The following is an excerpt from a Java specification:
8.4.2 Method Signature
Two methods have the same signature if they have the same name and argument types.
Two method or constructor declarations M and N have the same argument types if all of the following conditions hold:
- They have the same number of formal parameters (possibly zero)
- They have the same number of type parameters (possibly zero)
- Let <A1,...,An> be the formal type parameters of M and let <B1,...,Bn> be the formal type parameters of N. After renaming each occurrence of a Bi in N's type to Ai the bounds of corresponding type variables and the argument types of M and N are the same.
Java (and also C#) does not allow two methods with the same name and parameter model no matter what the access modifier is (public, private, internal, protected ...) and whether the method is static or instance based.
Why?
Simple. Programmer ambiguity. There is no technical reason not to allow it.
Consider the following:
using System;
class Tubo
{
public static void Test();
private void Test();
public void AmbiguousCaller() { Test(); }
}
What would method AmbiguousCaller call? A static Test or instance Test method?
Can't decide?
That's why this is not allowed.
And yes, it would call the instance method if this were allowed, since statics in C# should be called using the class name, as in Tubo.Test(). Note that the preceding example does not compile. Also note that it is legal to have the AmbiguousCaller body call Test() or Tubo.Test().
There is another ambiguity-related rule. Class members in C# cannot be named the same as the enclosing class. Therefore, this is illegal:
using System;
class Tubo
{
private int Tubo;
}
It is. Do a csc /t:library on it.
Since you confirmed this fact, consider the following:
using System;
public class Tubo
{
public static void StaticMethod() {}
public void InstanceMethod() {}
}
public class RunMe
{
public static void Main()
{
Tubo Tubo = new Tubo();
Tubo.InstanceMethod();
Tubo.StaticMethod();
}
}
Focus on the Main method body. In Tubo.InstanceMethod() an instance method is called and a reference (namely Tubo) is used. In Tubo.StaticMethod() a static method is called and Tubo is not an instance reference, but the class name.
It all comes down to programmer ambiguity. Given a chance, I would support this design decision too.
In WCF, collection data that is passed through the service boundary goes through a type filter - meaning you will not necessarily get the intrinsic service side type on the client, even if you're expecting it.
No matter if you throw back an int[] or List<int>, you will get the int[] by default on the client.
The main reason is that there is no representation for System.Collections.Generic.List or System.Collections.Generic.LinkedList in service metadata. The concept of System.Collections.Generic.List<int>, for example, actually does not have a different semantic meaning from an integer array - it's still a list of ints - but will allow you to program against it with ease.
Though, if one asks nicely, it is possible to guarantee the preferred collection type on the client proxy in certain scenarios.
Unidimensional collections, like List<T>, LinkedList<T> or SortedList<T>, are always exposed as T arrays in the client proxy. Dictionary<K, V>, though, is regenerated on the client via an annotation hint in the WSDL (XSD if we are precise). More on that later.
Let's look into it.
WCF infrastructure bends over backwards to simplify client development. If the service side contains a really serializable collection (marked with [Serializable], not [DataContract]) that is also concrete (not an interface), and has an Add method with one of the following signatures...
public void Add(object obj);
public void Add(T item);
... then WCF will serialize the data to an array of the collection's item type.
Too complicated? Consider the following:
[ServiceContract]
interface ICollect
{
    [OperationContract]
    void AddCoin(Coin coin);
    [OperationContract]
    List<Coin> GetCoins();
}
Since List<T> supports a void Add(T item) method and is marked with [Serializable], the following wire representation will be passed to the client:
[ServiceContract]
interface ICollect
{
[OperationContract]
void AddCoin(Coin coin);
[OperationContract]
Coin[] GetCoins();
}
Note: The Coin class should be marked either with [DataContract] or [Serializable] in this case.
So what happens if one wants the same contract on the client proxy and the service? There is an option in the WCF proxy generator, svcutil.exe to force generation of class definitions with a specific collection type.
Use the following for List<T>:
svcutil.exe http://service/metadata/address
/collectionType:System.Collections.Generic.List`1
Note: List`1 uses back quote,
not normal single quote character.
What the /collectionType switch (short form /ct) does is force the generation of strongly typed collections. It will generate the holy grail on the client:
[ServiceContract]
interface ICollect
{
[OperationContract]
void AddCoin(Coin coin);
[OperationContract]
List<Coin> GetCoins();
}
In Visual Studio 2008, you will even have an option to specify which types you want to use as collection types and dictionary collection types, as in the following picture:
On the other hand, dictionary collections, as in System.Collections.Generic.Dictionary<K, V> collections, will go through to the client no matter what you specify as a /ct parameter (or don't at all).
If you define the following on the service side...
[OperationContract]
Dictionary<string, int> GetFoo();
... this will get generated on the client:
[OperationContract]
Dictionary<string, int> GetFoo();
Why?
Because if you are using System.Collections.Generic.Dictionary, you presumably already know there is no guarantee that a client-side representation will even be possible on an alternative platform. There is no way to meaningfully convey the semantics of a .NET dictionary class using WSDL/XSD.
So, how does the client know?
In fact, the values are serialized as joined name-value pair elements, as the following schema shows:
<xs:complexType name="ArrayOfKeyValueOfstringint">
  <xs:annotation>
    <xs:appinfo>
      <IsDictionary xmlns="http://schemas.microsoft.com/2003/10/Serialization/">
        true
      </IsDictionary>
    </xs:appinfo>
  </xs:annotation>
  <xs:sequence>
    <xs:element minOccurs="0" maxOccurs="unbounded" name="KeyValueOfstringint">
      <xs:complexType>
        <xs:sequence>
          <xs:element name="Key" nillable="true" type="xs:string" />
          <xs:element name="Value" type="xs:int" />
        </xs:sequence>
      </xs:complexType>
    </xs:element>
  </xs:sequence>
</xs:complexType>
<xs:element name="ArrayOfKeyValueOfstringint"
            nillable="true" type="tns:ArrayOfKeyValueOfstringint" />
Note: You can find this schema under the types definition of the metadata endpoint. Usually ?xsd=xsd2, instead of ?wsdl, will suffice.
As in:
<GetFooResponse>
  <KeyValueOfstringint>
    <Key>one</Key>
    <Value>1</Value>
  </KeyValueOfstringint>
  <KeyValueOfstringint>
    <Key>two</Key>
    <Value>2</Value>
  </KeyValueOfstringint>
  <!-- ... -->
</GetFooResponse>
The meaningful part of the service-to-client type transportation resides in the <xs:annotation> element, specifically in the /xs:annotation/xs:appinfo/IsDictionary element, which states that this complex type represents a System.Collections.Generic.Dictionary class. Annotation elements in XML Schema are parser specific and do not convey any structure or data type semantics; they are there for the receiver to interpret.
This must be one of the nicest textbook cases of using XML Schema annotations. It allows a well-informed client (as in a .NET client, VS 2008 or svcutil.exe) to utilize the semantic meaning if it understands it. If not, no harm is done, since the best possible representation - joined name-value pairs - still goes through to the client.
I'm a huge fan of document style parameter models when implementing a public, programmatic façade to a business functionality that often changes.
[ServiceContract]
public interface IDocumentParameterModel
{
[OperationContract]
[FaultContract(typeof(XmlInvalidException))]
XmlDocument Process(XmlDocument doc);
}
This contract defines a simple method, called Process, which processes the input document. The idea is to define the document schema and validate inbound XML documents, while throwing exceptions on validation errors. The processing semantics is arbitrary and can support any kind of action, depending on the defined invoke document schema.
A simple instance document which validates against a version 1.0 processing schema could look like this:
<?xml version="1.0"?>
<Process xmlns="http://www.gama-system.com/process10.xsd" version="1.0">
<Instruction>Add</Instruction>
<Parameter1>10</Parameter1>
<Parameter2>21</Parameter2>
</Process>
Another processing instruction, supported in version 1.1 of the processing schema, with different semantics could be:
<?xml version="1.0"?>
<Process xmlns="http://www.gama-system.com/process11.xsd" version="1.1">
<Instruction>Store</Instruction>
<Content>77u/PEFwcGxpY2F0aW9uIHhtbG5zPSJod...mdVcCI</Content>
</Process>
Note that the default XML namespace changed, but that is not required. It just allows you to automate schema retrieval using a schema repository (think System.Xml.Schema.XmlSchemaSet), load all supported schemas and validate automatically.
public class ProcessService : IDocumentParameterModel
{
   public XmlDocument Process(XmlDocument doc)
   {
      XmlReaderSettings sett = new XmlReaderSettings();
      sett.Schemas.Add(<document namespace 1>, <schema uri 1>);
      ...
      sett.Schemas.Add(<document namespace n>, <schema uri n>);
      sett.ValidationType = ValidationType.Schema;
      sett.ValidationEventHandler += new ValidationEventHandler(XmlInvalidHandler);

      // note: wrap the document in a StringReader - XmlReader.Create(string, ...)
      // would treat the string as a URI, not as XML content
      XmlReader reader = XmlReader.Create(new StringReader(doc.OuterXml), sett);
      while (reader.Read())
      { }

      // processing goes here
      ...
   }

   static void XmlInvalidHandler(object sender, ValidationEventArgs e)
   {
      if (e.Severity == XmlSeverityType.Error)
         throw new XmlInvalidException(e.Message);
   }
}
The main benefit of this approach is decoupling the parameter model and method processing version from the communication contract. A service maintainer has an option to change the terms of processing over time, while supporting older version-aware document instances.
This notion is of course most beneficial in situations where your processing syntax changes frequently and has complex validation schemas. A simple case presented here is informational only.
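To make the versioning story concrete, here is a minimal, purely hypothetical sketch of how the Process implementation could dispatch on the inbound document's namespace (ProcessVersion10 and ProcessVersion11 are made-up helper methods; the namespaces are the ones from the samples above):
public XmlDocument Process(XmlDocument doc)
{
   // schema validation as shown above ...
   switch (doc.DocumentElement.NamespaceURI)
   {
      case "http://www.gama-system.com/process10.xsd":
         return ProcessVersion10(doc);   // Add instruction, two parameters
      case "http://www.gama-system.com/process11.xsd":
         return ProcessVersion11(doc);   // Store instruction, base64 content
      default:
         throw new XmlInvalidException("Unsupported document version.");
   }
}
This way a new schema version is just another case; older clients keep sending documents in the namespace they already know.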
So, how do we validate?
The service side is quite straightforward.
Let's look at the client and what are the options for painless generation of service calls using this mechanism.
Generally, one can always produce an instance invoke document by hand on the client. By hand meaning using System.Xml classes and DOM concepts. Since this is highly error prone and gets tedious with increasing complexity, there is the notion of a schema compiler, which automatically translates your XML Schema into the CLR type system. Xsd.exe and XmlSerializer are your friends.
If your schema requires parts of the instance document to be digitally signed or encrypted, you will need to adorn the serializer output with some manual DOM work. This might also be a reason to use the third option.
The third, and easiest option for the general developer, is to provide a local object model, which serializes the requests on the client. This is an example:
ProcessInstruction pi = new ProcessInstruction();
pi.Instruction = "Add";
pi.Parameter1 = 10;
pi.Parameter2 = 21;
pi.Sign(cert); // pi.Encrypt(cert);
pi.Serialize();
proxy.Process(pi.SerializedForm);
The main benefit of this approach comes down to having an option on the server and the client. Client developers have three different levels of complexity for generating service calls. The model allows them to be as close to the wire as they see fit. Or they can be abstracted completely from the wire representation if you provide a local object model to access your services.
If you use XmlSerializer type to perform serialization of documents which are digitally signed later on, you should be careful.
XML namespaces which are included in the serialized form could cause trouble for anyone signing the document after serialization, especially in the case of normalized signature checks.
Let's go step by step.
Suppose we have this simple schema, let's call it problem.xsd:
<?xml version="1.0" encoding="utf-8"?>
<xs:schema targetNamespace="http://www.gama-system.com/problems.xsd"
           elementFormDefault="qualified"
           xmlns="http://www.gama-system.com/problems.xsd"
           xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="Problem" type="ProblemType"/>
  <xs:complexType name="ProblemType">
    <xs:sequence>
      <xs:element name="Name" type="xs:string" />
      <xs:element name="Severity" type="xs:int" />
      <xs:element name="Definition" type="DefinitionType"/>
      <xs:element name="Description" type="xs:string" />
    </xs:sequence>
  </xs:complexType>
  <xs:complexType name="DefinitionType">
    <xs:simpleContent>
      <xs:extension base="xs:base64Binary">
        <xs:attribute name="Id" type="GUIDType" use="required"/>
      </xs:extension>
    </xs:simpleContent>
  </xs:complexType>
  <xs:simpleType name="GUIDType">
    <xs:restriction base="xs:string">
      <xs:pattern value="Id-[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"/>
    </xs:restriction>
  </xs:simpleType>
</xs:schema>
This schema describes a problem, which is defined by a name (typed as string), severity (typed as integer), definition (typed as byte array) and description (typed as string). The schema also says that the definition of a problem has an Id attribute, which we will use when digitally signing a specific problem definition. This Id attribute is defined as GUID, as the simple type GUIDType defines.
Instance documents validating against this schema would look like this:
<?xml version="1.0"?>
<Problem xmlns="http://www.gama-system.com/problems.xsd">
<Name>Specific problem</Name>
<Severity>4</Severity>
<Definition Id="c31dd112-dd42-41da-c11d-33ff7d2112s2">MD1sDQ8=</Definition>
<Description>This is a specific problem.</Description>
</Problem>
Or this:
<?xml version="1.0"?>
<Problem xmlns="http://www.gama-system.com/problems.xsd">
<Name>XML DigSig Problem</Name>
<Severity>5</Severity>
<Definition Id="b01cb152-cf93-48df-b07e-97ea7f2ec2e9">CgsMDQ8=</Definition>
<Description>Ambient namespaces break digsigs.</Description>
</Problem>
Mark this one as exhibit A.
Only a few of you out there are still generating XML documents by hand, since there exists a notion of schema compilers. In the .NET Framework world, there is xsd.exe, which bridges the gap between the XML type system and the CLR type system.
xsd.exe /c problem.xsd
The tool compiles problem.xsd schema into the CLR type system. This allows you to use in-schema defined classes and serialize them later on with the XmlSerializer class. The second instance document (exhibit A) serialization program would look like this:
// generate problem
ProblemType problem = new ProblemType();
problem.Name = "XML DigSig Problem";
problem.Severity = 5;
DefinitionType dt = new DefinitionType();
dt.Id = Guid.NewGuid().ToString();
dt.Value = new byte[] { 0xa, 0xb, 0xc, 0xd, 0xf };
problem.Definition = dt;
problem.Description = "Ambient namespaces break digsigs.";
// serialize problem
XmlSerializer ser = new XmlSerializer(typeof(ProblemType));
FileStream stream = new FileStream("Problem.xml", FileMode.Create, FileAccess.Write);
ser.Serialize(stream, problem);
stream.Close();
Here lie the dragons.
The XmlSerializer class's default serialization mechanism outputs this:
<?xml version="1.0"?>
<Problem xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns="http://www.gama-system.com/problems.xsd">
<Name>XML DigSig Problem</Name>
<Severity>5</Severity>
<Definition Id="b01cb152-cf93-48df-b07e-97ea7f2ec2e9">CgsMDQ8=</Definition>
<Description>Ambient namespaces break digsigs.</Description>
</Problem>
Mark this one as exhibit B.
If you look closely, you will notice two additional namespace declarations in exhibit B, bound to the xsi and xsd prefixes, compared to exhibit A.
The fact is that both documents (exhibit A and exhibit B) are valid against the problem.xsd schema.
<theory>
Prefixed namespaces are part of the XML Infoset. All XML processing is done on the XML Infoset level. Since exhibit B only adds namespace declarations (look at the xsi and xsd prefixes) without actually using them, the document itself is not semantically different from exhibit A. That said, the instance documents are equivalent and should validate against the same schema.
</theory>
What happens if we sign the Definition element of exhibit B (XmlSerializer generated, prefixed namespaces present)?
We get this:
<?xml version="1.0"?>
<Problem xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns="http://www.gama-system.com/problems.xsd">
<Name>XML DigSig Problem</Name>
<Severity>5</Severity>
<Definition Id="b01cb152-cf93-48df-b07e-97ea7f2ec2e9">CgsMDQ8=</Definition>
<Description>Ambient namespaces break digsigs.</Description>
<Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
<SignedInfo>
<CanonicalizationMethod Algorithm="http://www.w3.org/TR/...20010315"
/>
<SignatureMethod Algorithm="http://www.w3.org/...rsa-sha1"
/>
<Reference URI="#Id-b01cb152-cf93-48df-b07e-97ea7f2ec2e9">
<DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"
/>
<DigestValue>k3gbdFVJEpv4LWJAvvHUZZo/VUQ=</DigestValue>
</Reference>
</SignedInfo>
<SignatureValue>K8f...p14=</SignatureValue>
<KeyInfo>
<KeyValue>
<RSAKeyValue>
<Modulus>eVs...rL4=</Modulus>
<Exponent>AQAB</Exponent>
</RSAKeyValue>
</KeyValue>
<X509Data>
<X509Certificate>MIIF...Bw==</X509Certificate>
</X509Data>
</KeyInfo>
</Signature>
</Problem>
Let's call this document exhibit D.
This document is the same as exhibit B, but has the Definition element digitally signed. Note the /Problem/Signature/SignedInfo/Reference[@URI] value. The digital signature covers only the Definition element, not the complete document.
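For reference, signing a single element by its Id attribute with the .NET API looks roughly like this - a minimal sketch, assuming the exhibit B document sits in Problem.xml and an RSA certificate with a private key is at hand (file names, password and the Id value are placeholders; certificate handling is simplified):
// requires a reference to System.Security.dll (System.Security.Cryptography.Xml)
XmlDocument doc = new XmlDocument();
doc.PreserveWhitespace = true;
doc.Load("Problem.xml");
X509Certificate2 cert = new X509Certificate2("signing.pfx", "password");
// value of the Definition element's Id attribute
string definitionId = "b01cb152-cf93-48df-b07e-97ea7f2ec2e9";
SignedXml signedXml = new SignedXml(doc);
signedXml.SigningKey = cert.PrivateKey;
// sign only the Definition element, referenced through its Id attribute
signedXml.AddReference(new Reference("#" + definitionId));
// mirror the KeyInfo section from exhibit D (RSA key value + X.509 certificate)
KeyInfo keyInfo = new KeyInfo();
keyInfo.AddClause(new RSAKeyValue((RSA)cert.PrivateKey));
keyInfo.AddClause(new KeyInfoX509Data(cert));
signedXml.KeyInfo = keyInfo;
signedXml.ComputeSignature();
// append <Signature> as the last child of <Problem>
doc.DocumentElement.AppendChild(doc.ImportNode(signedXml.GetXml(), true));
doc.Save("Problem.xml");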
Now, if one were to verify the same signature on a document without the prefixed namespace declarations, as in:
<?xml version="1.0"?>
<Problem xmlns="http://www.gama-system.com/problems.xsd">
<Name>XML DigSig Problem</Name>
<Severity>5</Severity>
<Definition Id="b01cb152-cf93-48df-b07e-97ea7f2ec2e9">CgsMDQ8=</Definition>
<Description>Ambient namespaces break digsigs.</Description>
<Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
...
</Signature>
</Problem>
... the signature verification would fail. Let's call this document exhibit C.
<theory>
As said earlier, all XML processing is done on the XML Infoset level. Since ambient prefixed namespace declarations are visible in all child elements of the declaring element, exhibits C and D are different. Specifically, the element context of the Definition element differs: exhibit C does not have the ambient declarations present, while exhibit D does, so canonicalization produces different octets. The signature verification fails.
</theory>
Solution?
Much simpler than what's written above. Force the XmlSerializer class to serialize what should have been serialized in the first place. We need to declare the namespace definition of the serialized document ourselves and prevent XmlSerializer from being too smart. The .NET Framework serialization mechanism includes an XmlSerializerNamespaces class which can be supplied during the serialization process.
Since we know the only (and by the way, default) namespace of the serialized document, this makes things work out OK:
// generate problem
ProblemType problem = new ProblemType();
problem.Name = "XML DigSig Problem";
problem.Severity = 5;
DefinitionType dt = new DefinitionType();
dt.Id = Guid.NewGuid().ToString();
dt.Value = new byte[] { 0xa, 0xb, 0xc, 0xd, 0xf };
problem.Definition = dt;
problem.Description = "Ambient namespaces break digsigs.";
// serialize problem
XmlSerializerNamespaces xsn = new XmlSerializerNamespaces();
xsn.Add(String.Empty, "http://www.gama-system.com/problems.xsd");
XmlSerializer ser = new XmlSerializer(typeof(ProblemType));
FileStream stream = new FileStream("Problem.xml", FileMode.Create, FileAccess.Write);
ser.Serialize(stream, problem, xsn);
stream.Close();
This will force XmlSerializer to produce a valid document - with valid XML element contexts, without any ambient namespaces.
The question is, why does XmlSerializer emit these namespaces by default? That should be a topic for another post.
Shame? Nokia?
Same sentence, as in Shame and Nokia?
There is just no pride in IT anymore. Backbones are long gone too.
This post takes shame to a new level.
There is no excuse for having Microsoft Access database serving any kind of content in an online banking solution.
The funny thing is that even the excuses in the comments seem fragile. They obviously just don't get it. The bank should not be defending its position, but focusing on changing it immediately.
So, they should fix this ASAP, then fire PR, then apologize.
Well-done David, for exposing what should never reach a production environment.
Never. Ever.
This is why one should buy the best of both worlds. Mac rules on the client. Dell is quite competitive in the (home) server market.
We don't care about cables around servers. Yet.
So? 'nuff said.
These have been brewing for a couple of months. They're out today.
They contain a number of patches that fix and improve Vista. You can get them here:
Windows Vista Performance Update:
Windows Vista Reliability Update:
Go get them. Now.
Since it's summer here in Europe, and thus roughly middle of the year, here comes your, Dear Reader, profile.
Mind you, only the last year's data (June 2006 - June 2007) is included, according to Google Analytics.
Browser Versions:
Operating Systems:
Browser Versions and Operating Systems:
Screen Resolutions:
Adobe Flash Support:
And finally, most strangely, Java Support:
The last one surprised me.
Building on my previous post, I managed to get a distributed transaction scenario working using WCF, MTOM and WS-AtomicTransaction.
This means that you have the option to transport arbitrary files, using transactional ACID semantics, from the client, over HTTP and MTOM.
The idea is to integrate a distributed transaction with TxF, or NTFS file system transaction. This only works on Windows Server 2008 (Longhorn Server) and Windows Vista.
Download: Sample code
If the client starts a transaction, then all files sent within it are stored on the server. If something fails or the client does not commit, no harm is done. The beauty of this is that it's all seamlessly integrated into the current communication/OS stack.
This is shipping technology; you just have to dive a little deeper to use it.
Here's the scenario:
There are a couple of issues that need to be addressed before we move to the implementation:
OK, here we go.
Here's the service contract:
[ServiceContract(SessionMode = SessionMode.Allowed)]
interface ITransportFiles
{
[OperationContract]
[TransactionFlow(TransactionFlowOption.Allowed)]
byte[] GetFile(string name);
[OperationContract]
[TransactionFlow(TransactionFlowOption.Allowed)]
void PutFile(byte[] data, string name);
}
We allow a sessionful binding (it's not required, though) and allow transactions to flow from the client side. Again, transactions are not mandatory, since the client may opt out of using them and just transport files without a transaction.
The provided transport mechanism uses MTOM, since the contract's parameter model is appropriate for it and because it is much more efficient at transferring binary data.
So here's the service config:
<system.serviceModel>
<bindings>
<wsHttpBinding>
<binding name="MTOMBinding"
transactionFlow="true"
messageEncoding="Mtom"
maxReceivedMessageSize="10485760">
<readerQuotas maxArrayLength="10485760"/>
</binding>
</wsHttpBinding>
</bindings>
<services>
<service name="WCFService.TransportService">
<host>
<baseAddresses>
<add baseAddress="http://localhost:555/transportservice" />
</baseAddresses>
</host>
<endpoint address=""
binding="wsHttpBinding"
bindingConfiguration="MTOMBinding"
contract="WCFService.ITransportFiles"/>
</service>
</services>
</system.serviceModel>
Here, the MTOMBinding is used to specify MTOM wire encoding. Also, the reader quotas and the maxReceivedMessageSize attribute are bumped to 10 MB, since we will probably be transferring larger binary files.
The service implementation is straightforward:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
class TransportService : ITransportFiles
{
   [OperationBehavior(TransactionScopeRequired = true)]
   public byte[] GetFile(string name)
   {
      Console.WriteLine("GetFile: {0}", name);
      Console.WriteLine("Distributed Tx ID: {0}",
         Transaction.Current.TransactionInformation.DistributedIdentifier);

      return ReadFully(TransactedFile.Open(@"C:\TxF\Service\" + name,
         FileMode.Open, FileAccess.Read, FileShare.Read), 0);
   }

   [OperationBehavior(TransactionScopeRequired = true)]
   public void PutFile(byte[] data, string filename)
   {
      Console.WriteLine("PutFile: {0}", filename);
      Console.WriteLine("Distributed Tx ID: {0}",
         Transaction.Current.TransactionInformation.DistributedIdentifier);

      using (BinaryWriter bw = new BinaryWriter(
         TransactedFile.Open(@"C:\TxF\Service\" + filename,
            FileMode.Create, FileAccess.Write, FileShare.Write)))
      {
         bw.Write(data, 0, data.Length);

         // clean up
         bw.Flush();
      }
   }
}
Client does four things:
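Roughly: open a transaction scope, put a file, get a file back, and complete (commit) the transaction. A minimal, hypothetical sketch - assuming a svcutil-generated proxy class named TransportFilesClient and the local paths from the sample - could look like this:
using (TransactionScope ts = new TransactionScope(TransactionScopeOption.RequiresNew))
{
   TransportFilesClient proxy = new TransportFilesClient();
   // 1. read a local file
   byte[] data = File.ReadAllBytes(@"C:\TxF\Client\Upload.bin");
   // 2. push it to the service - the ambient transaction flows with the call
   proxy.PutFile(data, "Upload.bin");
   // 3. pull a file back within the same distributed transaction
   byte[] echo = proxy.GetFile("Upload.bin");
   proxy.Close();
   // 4. commit - only now do the server-side TransactedFile writes become durable
   ts.Complete();
}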
Before you run:
Download: Sample code
This sample is provided without any warranty. It's a sample, so don't use it in production environments.
I dived into WS-Management support in Vista / Windows Server 2008 (Longhorn Server) this weekend. There are a couple of caveats if you want to enable remote WS-Management based access to these machines. Support for remote management is also built into Windows Server 2003 R2.
WS-Management specification allows remote access to any resource that implements the specification. Everything accessed in a WS-Management world is a resource, which is identifiable by a URI. The spec uses WS-Eventing, WS-Enumeration, WS-Transfer and SOAP 1.2 via HTTP.
Since remote management implementation in Windows acknowledges all the work done in the WMI space, you can simply issue commands in terms of URIs that incorporate WMI namespaces.
For example, a WMI class or action (method) is identified by a URI, just like any other WS-Management resource. You can construct access to any WMI class / action using the following semantics:
http://schemas.microsoft.com/wbem/wsman/1/wmi/<WMI namespace>/<WMI class>
Since the majority of WMI classes live in the root/cimv2 namespace, you should use the following URI to access those:
http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2
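For instance, enumerating a WMI class over the WS-Management stack goes along these lines (the class name is just an illustration; add the -r:, -u: and -p: switches to target a remote, already configured box):
winrm enumerate http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/Win32_Process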
OK, back to WS-Management and its implementation in Vista / Windows Server 2008.
First, Windows Server 2008 has the Windows Remote Management service started up by default. Vista doesn't. So start it up, if you're on a Vista box.
Second, depending on your network configuration, if you're in a workgroup environment (not joined to a domain) you should tell your client to trust the server side.
Trusting the server side involves executing a command on the client. The remote management tools included in Windows Server 2008 / Windows Vista are capable of configuring the local machine and issuing commands to a remote machine. There are basically two tools which allow you to set up the infrastructure and issue remote commands to the destination: winrm.cmd and winrs.exe.
As said, WS-Management support is enabled by default in Windows Server 2008. This means that the appropriate service is running, but one should still define basic configuration on it. Nothing is enabled by default; you have to opt-in.
Since Microsoft is progressing to a more admin friendly environment, this is done by issuing the following command (server command):
winrm quickconfig (or winrm qc)
This enables the obvious:
You should get the following output:
[c:\windows\system32]winrm quickconfig
WinRM is not set up to allow remote access to this machine for management.
The following changes must be made:
Create a WinRM listener on HTTP://* to accept WS-Man requests
to any IP on this machine.
Enable the WinRM firewall exception.
Make these changes [y/n]? y
WinRM has been updated for remote management.
Created a WinRM listener on HTTP://* to accept WS-Man
requests to any IP on this machine.
WinRM firewall exception enabled.
There are options in winrm.cmd to fine-tune just about anything, including the listening ports and / or SSL (HTTPS) support. In a trusted environment you probably don't mind issuing commands over the HTTP-based mechanism, since you are located behind the trust boundary and have complete control over the available (allowed) TCP ports.
You can now issue remote management commands against the configured server, but only if the communication is trusted. So in case you are in a workgroup environment (client and server in a workgroup), this should get you started (client command):
winrm set winrm/config/client @{TrustedHosts="<server ip or hostname>"}
You can specify multiple trusted servers using a comma:
winrm set winrm/config/client @{TrustedHosts="10.10.10.108, 10.10.10.109"}
This trusts the server(s) no matter what. Even over HTTP only.
Enumerating the configured listeners - remember, the listener is located on the destination side - is done via:
winrm enumerate winrm/config/listener
OK, now we're able to issue commands to the remote side using WS-Management infrastructure. You can, for example, try this (client command):
winrs -r:http://<server ip> -u:<username> -p:<password> <shell command>, i.e.
winrs -r:http://10.10.10.108 -u:administrator -p:p$38E0jjW! dir -s
or
winrs -r:http://10.10.10.108 -u:administrator -p:p$38E0jjW! hostname
You can even expose the HTTP-based approach through your firewall if you're crazy enough. But using HTTPS would be the smart way out. What you need is a valid certificate with server authentication capability and a matching CN. Self-signed certs won't work.
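Once such a certificate is installed in the machine store, creating the HTTPS listener should look roughly like this (hostname and thumbprint are placeholders):
winrm create winrm/config/Listener?Address=*+Transport=HTTPS @{Hostname="server.example.com";CertificateThumbprint="<thumbprint>"}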
Simple and effective.
If you happen to be on a Windows Vista or Windows Server 2008 box, there is some goodness going your way.
There is a basic managed TxF (Transactional NTFS) wrapper available (unveiled by Jason Olson).
What this thing gives you is this:
try
{
   using (TransactionScope tsFiles = new TransactionScope(
      TransactionScopeOption.RequiresNew))
   {
      WriteFile("TxFile1.txt");
      throw new FileNotFoundException();
      WriteFile("TxFile2.txt");
      tsFiles.Complete();
   }
}
catch (Exception ex)
{
   Console.WriteLine(ex.Message);
}
The WriteFile method, which does, well, the file writing, is here:
using (TransactionScope tsFile = new TransactionScope(
   TransactionScopeOption.Required))
{
   Console.WriteLine("Creating transacted file '{0}'.", filename);
   using (StreamWriter tFile = new StreamWriter(TransactedFile.Open(filename,
      FileMode.Create, FileAccess.Write, FileShare.None)))
   {
      tFile.Write(String.Format("Random data. My filename is '{0}'.", filename));
   }
   tsFile.Complete();
   Console.WriteLine("File '{0}' written.", filename);
}
So we have a nested TransactionScope with a curious type - TransactedFile. Mind you, there is support for TransactedDirectory built in.
What's happening underneath is awesome. The wrapper talks to unmanaged implementation of TxF, which is built in on every Vista / Longhorn Server box.
What you get is transactional file system support with System.Transactions. And it's going to go far beyond that.
I wrote some sample code, go get it. Oh, BTW, remove the exception line to see the real benefit.
Download: Sample code
This sample is provided without any warranty. It's a sample, so don't use it in production environments.
There is some serious pain going on when you need to add a simple custom document property into multiple Word 2007 text areas.
Say you have a version property that you would need to update using the document property mechanics. And say you use it in four different locations inside your document.
So:
Yes. Ease of use.
Please, give me an option to get my menus and keyboard shortcuts back.
Pretty please.
There's some serious iPhone hacking going on during the last week or so.
Here's the wiki page, which has lots of information on the project. The main devs working on the unlock problem are from the US and Hungary, it seems.
The forum is here. Deliverables here.
The progress is steady. There is now a way to issue commands to the system, including moving files, activation (which was achieved in three days) and directory listing inside the sandboxed filesystem.
I'm following the progress (with Hana - she's cheering along), because it's fun to know the internals of a locked down Apple device.
WS-RM 1.1 is finished. GoodTimes™.
OASIS published two specs:
WCF, as it turns out, will have support for WS-RM 1.1 in Orcas. On this note, there is a new CTP out this week.