
You’ve probably run into this problem before: you receive a virtual machine image that isn’t in the right format. For example, you received a demo image from Microsoft but don’t have a Hyper-V instance to run it, and you just want to use the VMware Player already installed on your machine.

Let me take you through a few simple steps to convert your system image to the format you need.

If you have access to a Hyper-V instance:

  1. Download and install VMware vCenter Converter Standalone.
  2. Open VMware vCenter Converter.
  3. Choose the option “Convert Machine” in the upper left corner.
  4. A dialog box appears; fill in the Hyper-V server address and credentials and choose “Next”. A dialog box with a certificate warning will probably appear; you can safely ignore it.
  5. You are now presented with a list of all the virtual machines configured on the Hyper-V server.
  6. Select the virtual machine you want to convert and choose “Next”.
  7. You are now asked for the destination type. Choose “VMware Workstation or other VMware virtual machine” and then, depending on the VMware product you have installed, select the appropriate VMware version (for example VMware Player 3.0).
    Note: the path you specify to export the image to must be a valid UNC path!
  8. In the next screen you can adjust some parameters of the VMware machine the tool is going to create. Click “Next”.
  9. Review the parameters and press “Finish”.
  10. A conversion task is now added to the task list of vCenter Converter. The conversion may take a while, depending on the size of the source.

You can also use the vCenter Converter tool to directly ‘grab’ a running system and create a VMware image of it.

If you don’t have access to a Hyper-V instance:

Here you have two options:

Convert using Virtual PC

  1. Install Virtual PC on your system.
  2. Using Virtual PC, start the VHD image you want to convert.
  3. Shut the virtual machine down; Virtual PC now creates a .vmc configuration file.
  4. Use VMware vCenter Converter to convert the image by pointing it to the .vmc file.


Convert using WinImage

WinImage is a shareware tool that lets you convert a VHD file into a VMDK image. WinImage produces an older version of the VMDK format, but VMware will upgrade it during import. You can find a how-to for this method here:
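Whichever conversion route you take, it can be worth sanity-checking the source file first. Every VHD ends with a 512-byte footer whose first 8 bytes are the ASCII cookie “conectix”. The sketch below creates a dummy footer-only file as a stand-in (the file name `demo.vhd` is just for illustration; point the check at your real image instead):

```shell
# Sanity-check that a file really is a VHD before converting it:
# every VHD ends with a 512-byte footer starting with the cookie "conectix".
# demo.vhd is a fake footer-only stand-in, created here for illustration.
printf 'conectix%504s' '' | tr ' ' '\0' > demo.vhd
cookie=$(tail -c 512 demo.vhd | head -c 8)
[ "$cookie" = "conectix" ] && echo "VHD footer cookie found"
```

If the cookie is missing, the file is probably not a valid VHD and none of the conversion tools above will accept it.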


I’m working on a project where we use SAP and SharePoint in a best-of-both-worlds manner. We chose Duet Enterprise as the enabling technology.

After a few hiccups during installation and some gotchas while trying to consume a few custom-developed web services, we thought we had tamed the beast. Ready for demo!

But then we met mister Murphy. During the weekend, the guys from SAP BC installed support package 7 for NetWeaver 7.02.

We got a nice email from them:

The support package update SAPKB70207 now finished successfully!
The new package level is:

SAP_ABA        702          0007 (+1)
SAP_BASIS      702          0007 (+1)
PI_BASIS       702          0007 (+1)
ST-PI          2008_1_700   0004 (+1)
SAP_BS_FND     702          0005 (+1)
SAP_BW         702          0007 (+1)
WEBCUIF        701          0003
IW_CNT         100          0002
IW_FND         100          0002
IW_TNG         100          0002
CPRXRPM        500_702      0005 (+1)
ST-A/PI        01M_BCO700   0001

Unexpectedly, this update to SAP broke all of our Duet Enterprise service calls.

We got the following error in SharePoint:

Error while executing Wcf Method: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. —> System.ArgumentException: An item with the same key has already been added.

At the SAP side, everything was OK.

Here is the full error stack, retrieved with ULS viewer:
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.ArgumentException: An item with the same key has already been added. Server stack trace:
at System.ThrowHelper.ThrowArgumentException(ExceptionResource resource)
at System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add)
at System.Collections.Generic.Dictionary`2.Add(TKey key, TValue value)
at System.Xml.ContentTypeHeader.ParseValue()
at System.Xml.ContentTypeHeader.get_MediaType()
at System.Xml.XmlMtomReader.ReadRootContentTypeHeader(ContentTypeHeader header, Encoding[] expectedEncodings, String expectedType)
at System.Xml.XmlMtomReader.Initialize(Stream stream, String contentType, XmlDictionaryReaderQuotas quotas, Int32 maxBufferSize)
at System.Xml.XmlMtomReader.SetInput(Stream stream, Encoding[] encodings, String contentType, XmlDictionaryReaderQuotas quotas, Int32 maxBufferSize, OnXmlDictionaryReaderClose onClose)
at System.Xml.XmlDictionaryReader.CreateMtomReader(Byte[] buffer, Int32 offset, Int32 count, Encoding[] encodings, String contentType, XmlDictionaryReaderQuotas quotas, Int32 maxBufferSize, OnXmlDictionaryReaderClose onClose)
at System.ServiceModel.Channels.MtomMessageEncoder.MtomBufferedMessageData.TakeXmlReader()
at System.ServiceModel.Channels.BufferedMessageData.GetMessageReader()
at System.ServiceModel.Channels.BufferedMessage..ctor(IBufferedMessageData messageData, RecycledMessageState recycledMessageState, Boolean[] understoodHeaders)
at System.ServiceModel.Channels.MtomMessageEncoder.ReadMessage(ArraySegment`1 buffer, BufferManager bufferManager, String contentType)
at System.ServiceModel.Channels.HttpInput.DecodeBufferedMessage(ArraySegment`1 buffer, Stream inputStream)
at System.ServiceModel.Channels.HttpInput.ReadBufferedMessage(Stream inputStream)
at System.ServiceModel.Channels.HttpInput.ParseIncomingMessage(Exception& requestException)
at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
at System.ServiceModel.Channels.SecurityChannelFactory`1.SecurityRequestChannel.Request(Message message, TimeSpan timeout)
at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)
at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)
Exception rethrown at [0]:
at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
at BCSServiceProxy.ManageProject_in.FindProjectPhaseByElements(FindProjectPhaseByElementsRequest request)
at BCSServiceProxy.ManageProject_inClient.BCSServiceProxy.ManageProject_in.FindProjectPhaseByElements(FindProjectPhaseByElementsRequest request)
at BCSServiceProxy.ManageProject_inClient.FindProjectPhaseByElements(ProjectPhaseSimpleByElementsQuery ProjectPhaseSimpleByElementsQuery_sync)
--- End of inner exception stack trace ---
at System.RuntimeMethodHandle._InvokeMethodFast(Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner)
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks)
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
at System.Reflection.MethodBase.Invoke(Object obj, Object[] parameters)
at Microsoft.SharePoint.BusinessData.SystemSpecific.Wcf.WcfSystemUtility.Execute(Object[] args)

After investigation we saw that it had something to do with MTOM-encoded attachments, which we were not using in these calls: WCF tried to parse them where it should not. Obviously something strange was going on, even though we had not touched the SharePoint solution.

We found an interesting post which explains a possible syntax error in the SOAP header. Since the error occurs in System.Xml.XmlMtomReader.ReadRootContentTypeHeader(), this seemed to be a very similar problem.

After some time (I don’t dare to tell you how long) we decided that there must be something wrong with SAP’s SOAP runtime, possibly with the MIME type of the response. Just before issuing a call for help to SAP support, our ABAP colleague did a thorough search in the SAP knowledge base.

Look what he found (you will need an SAP S-user to view this URL):

SAP Note 0001527413 (PDF)

Apparently SAP had a problem in the NetWeaver SOAP runtime. They fixed part of the problem in support package 7 and part of it in support package 8, which is why the update had an influence on Duet.

After installing the SAP Note (an emergency fix, so to speak), everything worked as before. Hurray!


I often read on forums that people are asking whether it is possible to migrate their data and customizations from CRM2011 Online to CRM2011 on premises. Customizations are not really a problem, because you can just export and import the solution. Migrating data, however, is not so straightforward.

Many companies first start with CRM2011 Online and only later migrate to CRM2011 on premises. Why do they do this? Here are some possible reasons:
- CRM2011 Online might have technical limitations for their specific needs (not that there are so many differences).
- They didn’t want to pay for server licenses and CALs in the beginning because they were just "trying out" CRM2011. After some time they see what a great product Dynamics CRM is and want to invest in implementing it in their own environment.
- They don’t like the idea of having their company data in the cloud instead of in their own environment. (I think we can trust Microsoft with our data, but you would be surprised how many companies think like this.)
- Legal issues: in the past I worked with a customer that wanted to change certain clauses in the contract because they didn’t agree with certain points. From experience I’ve learned that Microsoft rarely or never changes its contract.

I’m not saying these are all valid reasons to switch. There are also many benefits to choosing CRM2011 Online; think of scalability, for instance.

But to come back to the question: is it possible to migrate data and customizations from CRM2011 Online to CRM2011 on premises?
Yes it is, no problem! To support this procedure, Microsoft has released a handy whitepaper that guides you step by step through the process of migrating your online CRM2011 organization to CRM2011 on premises.

You can download the whitepaper via the following link: Microsoft CRM Online Data Migration to Microsoft Dynamics CRM 2011 on-premises

Another solution is to use a third-party data migration application like Scribe. Using such an application you can connect to your online environment and your on-premises environment to copy over the data.




Microsoft xRM

At Capgemini we are encouraged to organize ourselves around communities to share knowledge and experience and to keep in contact with the company. One of those initiatives is the xRM Special Interest Group (SIG) around Microsoft Dynamics CRM 2011.

Microsoft Dynamics CRM 2011 has a somewhat ambiguous personality. It’s a configurable and extensible “classic” CRM solution with its normal focus on customer relationship management. But the ‘C’ can easily be replaced by an ‘x’, where x can be any relationship-oriented thing. This makes MS Dynamics CRM a ready-to-use, out-of-the-box product (more or less) as well as a (business) application development platform, where the business is relationship management of x. Hence the term xRM.
It is not an all-round, general-purpose development framework but a higher-level environment where the low-level plumbing is delivered out of the box, and the environment gives you a chance to “tweak” it in some way to produce various relationship-oriented business applications without the need to write everything from scratch. This gives a quicker time to market, more reliable products and maybe a more useful end product.
Maybe the name Dynamics CRM 2011 doesn’t do the product justice? Maybe this made a lot of developers bypass the product altogether? Until some enlightened spirits saw opportunities to sell this xRM concept to decision-makers at various levels.

Christophe Permentier, one of the early believers in xRM in the Mscop, facilitates the xRM SIG. Christophe laid out his roadmap for future SIG sessions: he wanted to start by explaining xRM with a “CRM-out” approach. He first wanted to give us insight into how the core CRM functionality of Sales and Customer Service works and how you can extend it. This should give us a good base for understanding xRM, that is, using MS Dynamics CRM as a platform to build your own relationship-oriented business application.

It’s important to remember where MS CRM comes from when evaluating it as a development platform. It’s not Visual Studio on top of .NET or some other general-purpose framework.
The foundation of the product was, and still is, geared, I think, toward enabling end users with limited or no development experience (or interest) to add workflows or make other modifications.

Of course, at a certain point IT affinity and development skills become necessary (understanding 1–n relationships, creating workflows, form modifications, adding JavaScript, writing plug-ins, …). But nevertheless the core principle is to minimize the amount of custom code to write and maximize the plumbing inside. It’s more about configuration to enable creating new business applications for x-relationship management.

Microsoft Dynamics CRM can be modified and extended in different ways all of which require different levels of skill and different skill sets.

  • Create workflows visually
  • Change the look and feel of the forms
  • Add new entities
  • Add new attributes
  • Write plug-ins (.NET code with business logic reacting on events generated by the platform)

Some things are not easy to change (or remain unsupported if you do so), most notably the user interface. Apparently MS wants to keep the UX consistent among its products that directly interact with MS CRM, like the Office products.

For tool support we can distinguish between:

  • Standard tooling
    1. Outlook (mail) integration
    2. Microsoft Windows Workflow Foundation (not sure if you use it “raw” or via a wizard; I reckon the latter)
    3. Extending forms with scripts via “code-behind”
    4. Adding IFrames to forms
    5. Changing or adding to the user interface menus
    6. Someone also mentioned something about the SiteMap (?) to change the activities pane (the pane on the left)
    7. Using Silverlight to give a rich UX
    8. Integrating with SharePoint
    9. ERP integration (I think via SharePoint; I’m not sure about this one). There was also a question about integrating Duet (an SAP solution), but I believe that was not possible.
    10. Dashboards (CRM 2011)
    11. MS Reporting Services & MS Analysis Services (cubes to extract reports; there was a question about performance but I lost track of the conversation)
  • Custom tooling
    1. All CRM services are exposed via SOAP, so you can build your own apps that consume these web services and unlock the CRM server side.
    2. .NET code (Visual Studio) to write logic that hooks into events generated by the CRM platform
    3. Christophe mentioned something about “accelerators”: ready-to-use customizations that you can import into your CRM instance.
  • Productivity tool support
    1. Focus on visual configuration
    2. No modeling tool for entity modeling

Being a “specialized” development platform means you cannot build every application with it. It is geared toward a particular kind of application, one with some of the following characteristics:

  • tracking any type of relationship you as an organization have with your “environment”
  • document handling, paperwork sent from here to there
  • e-mail integration
  • role-based security
  • long-running “transactions”/workflows


You will not build a real-time tracking system for gas exploitation with MS Dynamics CRM. But for many applications (or parts of applications), MS CRM could be a valid alternative to custom development.

An interesting insight from Yves Goeleven was that licensing a CRM user is quite costly, which might be a reason to leave certain parts of the xRM solution you’re building “outside” MS CRM: for example, building a portal application where thousands of users can interact with the CRM back-end, administered by a couple of real CRM users. A “server” license costs more but is still more cost-efficient.

Another important remark was made regarding the changing role of a developer in an xRM story. It will be more of a configuration role, but on the other hand it leaves out the “boring” stuff. Still, there is plenty of “custom development” to be done with external tools like agileXrm, to create workflows and make custom UIs in the form of portals, for example.

Capgemini is already very active in this rather new development platform. How about you? What are your thoughts and/or experiences?

Thanks for reading,

Best regards,


We recently encountered an issue in which our complete Team Foundation Server installation got trashed. Before going any further, I will give a quick overview of the system configuration. The virtual server was configured with the following elements:

  • Windows server 2008 R2
  • Active Directory Domain Services
  • SQL Server 2008 R2
  • SharePoint Server 2007
  • Team Foundation Server 2010

As is best practice, the machine itself was configured with two drives, in such a way that all the databases were on a separate drive. This drive was not lost and can thus be used to restore the data that was already in our TFS environment. The restoration of the data itself will be discussed later, but first there were some issues that had to be dealt with.

Snapshot restoration of a server that contains AD DS

The issue is that you cannot restore a server that has AD DS on it: the machine SID changes, and therefore you cannot log on to the new machine anymore. The machine is operational, but it won’t recognize your credentials at all. When you are working in a multi-server environment, it might be a good idea to run a backup AD on another server that can be used in case of a disaster on the main server. In our case, however, with a single-server environment, it is not possible to restore the machine from a snapshot. It is therefore smarter to set up a single-server environment without the AD DS feature installed. The components we need all work perfectly without Active Directory, so you can rely on the standard user management that Windows provides.

Installing the components

The order in which you install is rather crucial, and this is something I had to learn the hard way. I installed the components in the following order:

  • SQL Server 2008 R2
  • SharePoint Server 2007
  • Team Foundation Server 2010

After that I reattached the databases we recovered in SQL Server and used the attach-collection feature in TFS. The databases are recognized as valid collections and can be attached, but no team projects will be shown. This is probably because we are in a different “machine domain” now and the TFS configuration database does not contain any information on the collection we just imported. Some other things might be in play here, but that is out of scope for this article.

To solve this issue in a more efficient manner, it is important to understand that it is better to tear down some of the functionality we already restored. I therefore detached all the TFS databases that were available in SQL Server (including the warehouse and configuration databases) and re-attached all the databases we recovered from the old environment. There should be a database for each TFS collection; make sure to attach everything.

When this is done, an important step is to point TFS to the correct configuration database. Replacing the default is not enough!

This operation can be performed by running the following TFS commands:

TFSConfig RemapDBs /DatabaseName:ServerName;TFS_Configuration /SQLInstances:ServerName

After that we need to change the database ownership to the current user, so that TFS effectively has access to it. This is done with the following command:

TFSConfig Accounts /ResetOwner /SQLInstance:ServerName /DatabaseName:DatabaseName

Opening the TFS administration console will show that the collections from the previous server environment are now available and that the same is true for the team projects.

Some settings are, however, still not correct and will have to be rectified in the TFS administration console. The most important ones are the references to the reporting server and the user permissions; and if you had configured a build server, you will have to set it up again according to your needs. Basically you will have to recreate a build controller and build agent, and the build definitions will have to be reconfigured (parameters such as the build agent and the output directory will have to change).

That is basically all it takes to get TFS working again. The users will have to reconfigure the connection to the server in their client, but that is a minor detail and can be done rather quickly. When you take a backup of this new server, you can be sure that it will be restorable when another disaster strikes…





At Capgemini, employees are regularly invited to attend sessions about new Microsoft technologies. Recently, Tom Haepers and his colleagues gave a presentation on ERP+, explaining the what, why and how to us.

ERP+ is Capgemini’s global service offering to help customers envision the integration of their SAP installation with Microsoft products. The complementary nature of both “technology stacks” safeguards existing investments but also gives the opportunity to create new applications that bring out the strengths of each platform. The entanglement of SAP in the heart of business processing and the usage of Microsoft products in day-to-day work make this a natural symbiosis.

While organizations have invested large amounts in packaged and custom applications, information workers still like to work in their familiar Office tools. As Mark Claessens (VP) put it: having all the facilities to work with data in SAP didn’t stop users from pumping SAP data into Excel and using it from there (with some security consequences…), because the user experience people get from Office tools is unbeatable.

One of the challenges we as a system integration company face is to let these information workers stay in their “comfort zone” while integrating with other systems in their organization, without them actually knowing the data comes from other systems.

ERP+ is Capgemini’s sales framework offering our customers a strategy to make this possible with best-of-breed tools. It was launched back in 2009 and is a joint partnership between Capgemini and Microsoft. Tom Haepers, ERP+ champion within Capgemini Belgium, immediately emphasized that ERP+ is NOT there to replace SAP with Microsoft counterpart products, for example replacing SAP CRM with Microsoft Dynamics CRM.

ERP+ is geared at maximizing the investments already made in both products and offering possibilities to create new end-to-end solutions by integrating both technology stacks seamlessly, unleashing new productivity gains and/or cost reductions. Depending on the installed SAP landscape, ERP+ offers a strategy framework for showing and aiding customers in how they can maximize the integration possibilities. Each technology stack has its own strengths and challenges; bringing the strengths of both stacks together brings the best of both worlds.

The ability to bring Line-of-business (LoB) data into a central portal location can improve usability and provide a better value for the user of the application and potentially other stakeholders as well.

Tom Van Gaever, Microsoft SharePoint specialist, gave an example of a call center that has several programs to handle requests from customers. Depending on the specific problem, another program needs to be opened. An application that acts as a portal to all those disparate programs would help the call-center agent a lot in quickly reacting to the specific question of the customer. As Tom stated, they could create a totally new custom program from scratch, or they could opt for a solution of the ERP+ type.

Another example showed the ability to develop custom front-end UIs to simplify complex SAP screens. This improves the user experience, especially for the casual SAP user, by bringing their data and business processes into a familiar tool like SharePoint or Outlook, for example for leave requests. Other examples were mentioned, like self-service capabilities, collaboration, creating new end-to-end business workflows integrating different systems, coupling unstructured data with structured data, federated search over different sources, mobile application integration and, of course, bringing it all to the “cloud”.

Being able to propose and, more importantly, deliver this kind of integration solution is also vital for Capgemini. While 10 to 12 years ago the classic ERP implementations were one of the big cash cows for integrators around the world, those days are a bit over. We were very successful in riding this first wave and we liked it a lot: big projects, many resources and high … Opening up out-of-the-box SAP systems and helping implement sector-specific SAP solutions were also a source of activity. But these greenfield implementations are not as widespread as in the beginning.

SOA-enabling SAP systems represented a second wave of opportunities and was an important stepping stone for the current wave: the integration of structured data as found in SAP systems with unstructured data like Word documents or e-mails, building portal solutions, process orchestration, etc.

As the integration of the Microsoft stack with the SAP stack is important for our customers (even if they don’t know it yet or don’t realize the potential), we know it is important for us. ERP+ is our answer to ride this third wave, helping our customers and meanwhile generating business for us. And we are in a good position to pull this off, due to the large SAP customer base and our Microsoft and SAP expertise.

We are already actively working on ERP+ solutions, making proposals and preparing proofs of concept. Joeri Stoutjesdijk, SAP specialist, stressed the importance of the POC they are preparing for the Flemish water service company.

The SAP & Microsoft interoperability partnership goes back to the early 1990s, with various products and tools to help the integration and/or cooperation of SAP systems and Microsoft products, at different levels: Windows Server compatibility, SQL Server layer compatibility, business-services-layer integration (SAP .NET Connector, WCF LOB Adapter SDK) and presentation-services integration (Duet).

But it is still a technological challenge. For example, the SAP WSDLs have their own definition, slightly away from the WS-* standards (dates, for example), and are fairly complicated, with nested and complex structures. Also, enabling a user to log in once and gain access to the resources of multiple software systems without being prompted to log in again proves not to be for the faint of heart. That is why this POC is important: we have to know whether things like single sign-on work for this particular customer, we need to know the complexities of enabling SAP enterprise services within SharePoint, and so on.

Even with the advent of the latest Microsoft-SAP integration offering, named Duet Enterprise, collaboration between SAP developers and Microsoft developers is important to make things happen. Duet Enterprise is an add-on on both sides: the SAP NetWeaver stack and the SharePoint stack. While SharePoint, for example, can discover SAP information as External Content Types in Business Connectivity Services, SAP developers need to adapt SAP data and expose it in an interoperability-friendly way, and SharePoint developers need to create these ECTs. So it is important that both speak the same “language”, which helps them work together to build a solid solution. And this is not only about data exchange but also about the concepts and ways of working in both stacks, for example pessimistic locking (SAP) versus optimistic locking (Microsoft). There are still a lot of challenges.

The key message, I think, is that the main advantage of the integration between Microsoft products and SAP is that users do not have to leave their familiar end-user interface and do not need to learn a completely different way of working. They also do not need (intimate) knowledge of SAP or the SAP UI to do their job. We don’t need to create a new application from scratch; we integrate what is available with the help of tools, offer new opportunities, and make sure the investments our clients made in SAP (or other LoB systems) can be reused in new and productive ways. ERP+ is our guidance for making this successful for our customers and for us.

Thanks for reading.





Yves Goeleven – Azure MVP

Capgemini Belgium has an Azure MVP among its ranks: Yves Goeleven. MVP, or Microsoft Most Valuable Professional, is an initiative by Microsoft to reward people who significantly contribute to the Microsoft community by actively helping other people on forums, writing articles or leading local communities.

Azure is Microsoft’s platform for its cloud service offering. Microsoft uses the term for the tool set and product extensions that enable enterprises to take advantage of cloud computing: practically unlimited storage, instant scalability, low start-up costs, pay-only-for-what-you-use, and many more.

Yves has been working with and promoting Azure for the last three years; he immediately saw the benefits of cloud computing. While the Microsoft Azure products were still very experimental and subject to change, Yves was experimenting with them and becoming knowledgeable on the subject. Andy Mulholland, our Global Chief Technology Officer, has already had words of praise for Yves’s contributions. This MVP award is a continuation of the recognition of Yves’s leadership.

Besides being our local Microsoft community leader, Yves is one of the founding members of the Belgian Azure User Group, a proficient speaker, an active blogger and an excellent writer. At Capgemini we can now easily tap into Yves’s knowledge and move along in the world of cloud computing, something that will definitely shape our IT world in the coming years.

Thanks for reading.

Best regards,




Log4Net: Logging on the Azure platform

Log4Net on Azure

I have always been a big fan of the log4net framework for all of my logging needs in applications, and I would like to use this framework in my Azure projects as well. To be honest, it isn’t even that difficult: just configure it to use the TraceAppender, as everything written to the trace log will be transferred by the diagnostics manager to Azure table storage.
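As a sketch, that basic TraceAppender route could look like this in a classic log4net configuration section (the appender name, level and conversion pattern are just example values):

```xml
<log4net>
  <!-- Route log4net output to System.Diagnostics.Trace; the Azure
       diagnostics manager then ships the trace log to table storage. -->
  <appender name="TraceAppender" type="log4net.Appender.TraceAppender">
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="INFO" />
    <appender-ref ref="TraceAppender" />
  </root>
</log4net>
```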

But there are two problems, or inconveniences, with this approach. First of all, you have to configure both log4net and the diagnostics monitor. This is a pretty repetitive task that you don’t want to perform over and over again for every application you create. Secondly, log4net is configured using the application’s configuration file, which is rather inconvenient on Azure as it isn’t editable.


To solve both of these problems we can instead create a custom appender that sets up the diagnostics monitor and reads the log4net configuration from the service configuration file instead of the application configuration file.

In order to do so, you derive a class from AppenderSkeleton:

public sealed class AzureAppender : AppenderSkeleton

The first thing that needs to be done is to ensure that the configuration values are read from the service configuration file, if they are present. This is a bit clumsy on Azure, as you cannot check for the presence of a configuration key; all you can do is act on the exception thrown when the key is not present. Make sure to set all values before the ActivateOptions method is called.

The following example shows you how to read the error level from config and apply it to the log4net environment.

private static string GetLevel()
{
    try { return RoleEnvironment.GetConfigurationSettingValue(LevelKey); }
    catch (Exception) { return "Error"; }
}

private void ConfigureThreshold()
{
    var rootRepository = (Hierarchy) log4net.LogManager.GetRepository();
    Threshold = rootRepository.LevelMap[GetLevel()];
}

The appender for this article supports the following configuration settings:

  • Diagnostics.ConnectionString Sets the connection string to be used when transferring the log entries to table storage.
  • Diagnostics.Level Sets the threshold that log4net will use to filter the logs to output.
  • Diagnostics.Layout Defines the layout and content that log4net will use to create the log entries.
  • Diagnostics.ScheduledTransferPeriod Specifies the interval, in minutes, at which the diagnostics monitor transfers logs to Azure table storage.
  • Diagnostics.EventLogs Configures which of the Windows event log sections will be transferred from the Azure instance to Azure table storage.
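In the service configuration file these settings then appear as ordinary role settings. A sketch of the relevant ServiceConfiguration.cscfg section, with illustrative values (not from the original post), might look like this:

```xml
<ConfigurationSettings>
  <Setting name="Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
  <Setting name="Diagnostics.Level" value="Error" />
  <Setting name="Diagnostics.Layout" value="%timestamp [%thread] %level %logger - %message" />
  <Setting name="Diagnostics.ScheduledTransferPeriod" value="5" />
  <Setting name="Diagnostics.EventLogs" value="Application!*;System!*" />
</ConfigurationSettings>
```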

When the options have been set and activated, the log4net environment is completely configured to make proper use of our custom appender and we can start the Azure diagnostics monitor. Note that the diagnostics monitor also has a threshold that allows you to filter the logs written to storage, but as log4net is already filtering, we don’t need to filter here anymore and can set it to Verbose.

private void ConfigureAzureDiagnostics()
{
    var traceListener = new DiagnosticMonitorTraceListener();
    System.Diagnostics.Trace.Listeners.Add(traceListener);

    var dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();

    // set the threshold to Verbose; what gets logged is controlled by the log4net level
    dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

    ScheduleTransfer(dmc);
    ConfigureWindowsEventLogsToBeTransferred(dmc);

    DiagnosticMonitor.Start(ConnectionStringKey, dmc);
}

private void ScheduleTransfer(DiagnosticMonitorConfiguration dmc)
{
    var transferPeriod = TimeSpan.FromMinutes(ScheduledTransferPeriod);
    dmc.Logs.ScheduledTransferPeriod = transferPeriod;
    dmc.WindowsEventLog.ScheduledTransferPeriod = transferPeriod;
}

private static void ConfigureWindowsEventLogsToBeTransferred(DiagnosticMonitorConfiguration dmc)
{
    var eventLogs = GetEventLogs().Split(';');
    foreach (var log in eventLogs)
        dmc.WindowsEventLog.DataSources.Add(log);
}

That’s basically all there is to it; the only thing left to do is apply the appender to the environment. This is done by creating an instance of the appender, configuring it either in code or through the settings in the service configuration file, and finally configuring the log4net environment.

var appender = new AzureAppender();
log4net.Config.BasicConfigurator.Configure(appender);

You can find the source for this appender in the NServiceBus project; feel free to use it in your own projects. I also want to give special credit to Andreas Ohlund for creating the first version of this appender.




Generics part III: Constraints

There are three kinds of type parameter constraints: primary constraints, secondary constraints and constructor constraints. The number of constraints you can specify depends on the kind of constraint, as shown in this table:

Constraint type          Number allowed
Primary constraint       0 or 1
Secondary constraint     0 or more
Constructor constraint   0 or 1


When applying a primary constraint you actually make a deal with your compiler: the type argument you pass will be of a certain type or derived from that type. For example, in a system that manages vehicles, your model might have a base class Vehicle and some derived types: Car, Truck, Boat and Airplane. The syntax looks like this:

internal sealed class ExampleOfConstraintOfVehicle<T> where T : Vehicle
{
    public void DoSomething(T vehicle) { /* do something with the vehicle */ }
}

Now, when instantiating this class with a type that isn’t a vehicle, you will get an error at compile time:

The type ‘[type]‘ must be convertible to ‘SystemObject.Vehicle’ in order to use it as parameter ‘T’ in the generic type or method ‘SystemObject.Program.ExampleOfConstraintOfVehicle<T>’


You can also tell your compiler that the type you will pass along as a type parameter is a reference type or a value type. This is done by using the “class” or “struct” keyword. So changing the example above will result in the following:

Reference type:

internal sealed class ExampleOfConstraintOfRefType<T> where T : class

Value type:

internal sealed class ExampleOfConstraintOfValType<T> where T : struct

Secondary constraints represent interface types. Here your deal with the compiler says that any type you pass along as a type argument will implement that interface. The table at the beginning of this post mentions that 0 or more secondary constraints are allowed, which is logical since a class can implement several interfaces; you can therefore demand that the type argument implements a number of interfaces.

internal sealed class ExampleOfConstraintOfIConstraint<T> where T : ICloneable

Again, here the argument is checked at compile time. If you pass a type argument that doesn’t comply with the constraint, you will see the following error:

The type ‘[type]‘ must be convertible to ‘[IConstraint]‘ in order to use it as parameter ‘T’ in the generic type or method ‘SystemObject.Program.ExampleOfConstraintOfIConstraint<T>’


Another kind of constraint is the constructor constraint. Here the deal with the compiler is that the type argument you pass along provides a default constructor (public, with no parameters). It is not possible to constrain on a constructor that takes parameters.
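Following the style of the earlier examples, a constructor constraint is expressed with the new() keyword (the class and method names here are illustrative, not from the original post):

```csharp
internal sealed class ExampleOfConstructorConstraint<T> where T : new()
{
    public T CreateInstance()
    {
        // legal only because the new() constraint guarantees
        // a public parameterless constructor
        return new T();
    }
}
```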




Log4net: RollingFileAppender with XML


Unfortunately, there is no easy way to make a rolling file appender produce a well-formed XML file. The problem is that a rolling file appender writes its entries sequentially, while a well-formed XML file has to start by opening a root element and end by closing that root element.

Of course, we can always provide some kind of workaround.
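One possible workaround, sketched here under the assumption that each log entry is written as a self-contained XML element without namespace prefixes (not necessarily the approach the original post had in mind), is to read the file as a stream of XML fragments rather than as a single document:

```csharp
using System.Xml;

// ConformanceLevel.Fragment lets the reader accept multiple
// top-level elements, so no surrounding root element is required
var settings = new XmlReaderSettings { ConformanceLevel = ConformanceLevel.Fragment };
using (var reader = XmlReader.Create("rolling-log.xml", settings))
{
    while (reader.Read())
    {
        if (reader.NodeType == XmlNodeType.Element)
        {
            // process a single log entry element
        }
    }
}
```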


