Michael Stephenson

keeping your feet on-premise while your head's in the cloud

Tuesday, July 8, 2014

Hybrid Connections Webcast

We have lined up a webcast for UKCSUG covering BizTalk Services Hybrid Connections on 4th August, with Santosh from the Azure product team.

Posted On Tuesday, July 8, 2014 7:24 PM | Comments (0) | Filed Under [ BizTalk ]

Monday, July 7, 2014

Why use Service Bus Relay when I can use Hybrid Connections?

I'm slowly in the process of moving over to a new website, so I'll be cross-posting on both blogs for a while.

Here is my new article discussing Service Bus Relay and Hybrid Connections.

Posted On Monday, July 7, 2014 7:47 AM | Comments (0) | Filed Under [ BizTalk ]

Thursday, June 5, 2014

8 ways any BizTalk customer can use Azure right now

I've just posted this on the TechNet wiki:

http://social.technet.microsoft.com/wiki/contents/articles/24755.8-ways-any-biztalk-customer-can-use-azure-right-now.aspx


Posted On Thursday, June 5, 2014 11:02 AM | Comments (0) |

Monday, May 19, 2014

BizTalk Server Best Practices

There are a few articles out there offering various opinions on best practices for BizTalk. I thought I'd create a place on the TechNet wiki to collate them all.


Please add any I've missed.

Posted On Monday, May 19, 2014 1:15 PM | Comments (0) |

Monday, April 21, 2014

BizTalk HL7 Testing - Tool to get config from BTS Management DB

As a follow-up to the HL7 testing framework I published recently, the video below covers a tool which is part of the framework. It lets you generate the configuration you might need for your tests, either to stub an application or to send messages to BizTalk, by pointing the tool at your BizTalk Management database and letting it inspect the ports you have already set up.

Posted On Monday, April 21, 2014 10:33 AM | Comments (0) |

Sunday, April 20, 2014

Automated Testing of BizTalk HL7 solutions with Specflow

Recently I've been working on a small widget to help with automated testing of BizTalk HL7 implementations. The link below contains a video which walks through how all of this works.


I've open-sourced an assembly and the sample from the video if you would like to play around with it.




I'd love to hear any feedback on how people get on with it.


Posted On Sunday, April 20, 2014 11:41 PM | Comments (0) |

Monday, March 17, 2014

BizTalk Services – Can I create a mapping service in the cloud?

I've recently been playing with some of the use cases you might be able to implement using Windows Azure BizTalk Services. In this case I wanted to look at the options for exposing the transformation capability of BizTalk as a service which applications could use. This is something that you might occasionally do in BizTalk Server, where you simply take a message, transform it and return a response. It allows you to abstract this transformation logic outside of the application and perhaps centralize things like reference data.

 

This sample would look a little like the below diagram.

 

Rather than writing a big article about this I have decided to make a short video walking through how you can do this. The video is available at the following location:

https://www.youtube.com/watch?v=cn93YN7HHoI
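If you just want a feel for what the client side of such a transformation service could look like, here is a minimal sketch that posts a source message over HTTPS and treats the response body as the mapped result. It is only a sketch: the bridge URL, the authorization token and the message shape are hypothetical placeholders rather than the actual endpoints from the video.

import requests

# Minimal client-side sketch of calling a cloud-hosted transformation (mapping)
# service. The bridge URL, token and message shape are hypothetical placeholders.
BRIDGE_URL = "https://mynamespace.biztalk.windows.net/default/OrderTransformBridge"
AUTH_TOKEN = "<token issued for the bridge>"

source_message = """<Order xmlns="http://acme/orders/v1">
  <OrderId>1234</OrderId>
  <Quantity>5</Quantity>
</Order>"""

response = requests.post(
    BRIDGE_URL,
    data=source_message.encode("utf-8"),
    headers={"Content-Type": "application/xml", "Authorization": AUTH_TOKEN},
    timeout=30,
)
response.raise_for_status()

# The response body is the transformed (mapped) message.
print(response.text)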

Also, just a quick thanks for the WCF Loopback Binding, which was created by Synthesis Consulting.

http://synthesisconsulting.net/blog/2012/5/17/biztalk-wcf-loopback-binding.html

Posted On Monday, March 17, 2014 1:30 AM | Comments (0) |

Monday, March 10, 2014

Just some thoughts on lightweight integration

I was having a discussion with a friend the other day about how the integration world is changing and how integration products are changing to reflect this. "Back in the day" we had the large enterprise integration products which were a big deal in the 1990s and through the 2000s, but things are taking a different direction these days.

Today integration is very much about lightweight integration, APIs, making it simpler and making it cheaper. While these are all completely valid reasons for this change in direction, one that I think isn't talked about as much is the rate of change in the industry and the impact that has on the integration solutions we need to build.

To give a view of the rate of change I have drawn the graph below. It's not based on anything scientific or any actual numbers; it's simply what I would draw if you said to me, "Mike, draw a trend line on a bit of paper to represent the rate of change happening in the industry across the last 15 years". It's based on my gut feel, and I'm sure people will have their own opinions that the line needs to be moved a bit one way or another.

 

The key point here is that the rate of change is increasing significantly, and more steeply in recent years than ever before. Over the years we have had to consider some of these major changes in general practices in the industry around integration:

  • Mainframe/Host Integration
  • XML
  • Web Services
  • EAI
  • ESB
  • SOA
  • REST
  • JSON
  • Mobile
  • API
  • SaaS applications
  • Cloud Hosting
  • PaaS Service Integration

I'm sure you can all think of many more, but in the future cloud and mobile are bound to grow even more, and there will also be more device and Internet of Things style paradigms to think about.

The rate of change in the industry these days is so fast that most organizations can no longer keep up. By the time they get up and running with the latest cool application which promises the world, a newer, cooler, cheaper application is already out and has a following telling you this is the only way you should be doing your apps. The good news for integration people is that this means everything needs to integrate with everything. Great, that should mean more work! But at the same time a lot of companies are beginning to view integration as a commodity. Rather than a nasty, evil black art which they didn't understand, they sometimes now view it as something where they should be able to grab someone off the street and get them to seamlessly integrate application A and application B. At this point I must admit I'm thinking back to the recent BizTalk Summit and a tongue-in-cheek comment by Tord that integration developers are just drag-and-drop guys. This is a big shift and in lots of ways it's a good thing. Integration can be hard, but it should be hard because you have hard requirements, not because you have a tool which is a nightmare to use.

Coming back to the lightweight integration idea, I think this is where the vendors are really starting to make some good progress. Making tools that are simpler to use and that scale is definitely a good thing. "Back in the day" everyone thought lightweight integration meant we did it in custom code or batch scripts, but nowadays we have enterprise-ready tools which let us develop integration solutions that can be delivered in days and scale to enormous capacity. Awesome!

To me one of the key things about these new tools evolving over the last couple of years is the lightweight delivery option. Being able to deliver in days or weeks is one of the key things we need in order to keep up with the industry rate of change I mentioned earlier. More than ever I'm also seeing temporary integration solutions being created. Companies are saying, "Can we create this solution? We only need it for about 9 months because we have this other project which is going to completely change this anyway." In these cases a lightweight integration platform is a great way to do this. I have sometimes referred to this kind of delivery platform as an Agile Integration Platform. I'm not saying you can't be agile with the more traditional integration products. Let's take BizTalk as an example: I have delivered many successful BizTalk projects in days and under agile processes, however these are usually delivered on a pre-existing BizTalk capability which we have set up, with the team working to specific development processes. I would argue it's almost impossible to get a from-scratch BizTalk project live with a brand new customer in under a few months. With Windows Azure BizTalk Services or Windows Azure Service Bus this is completely doable if you have the right requirements.

 

Hmm is the grass really greener?

So far I've painted a rosy picture of how the new breed of lightweight integration products will change integration, but there are a few concerns I have which I've discussed below.

Integration as a Commodity

The first is in a comment I made earlier about companies starting to treat integration like a commodity. I think this will be the root cause of a lot of pain for some companies in the future. There is a big shift towards the idea of the "full stack developer", and while this isn't necessarily a bad thing, you will still need someone who really understands integration. I am a big believer that integration developers think differently from application developers. Maybe that's a generalization, but I would argue that an experienced and good integration developer is always thinking about dependencies and things like "if I tweak this part of the solution here, what does that mean to this bit here or that external application over there". The vast majority of application developers just don't think like that. Application developers often think inside the context of their application's container and don't consider things outside it. I think integration solutions developed with this kind of mindset could end up with some big problems down the line. That said, perhaps with lighter-weight integration tools they will be easier to fix?

 

What about Integration Patterns?

As integration developers we have spent many years understanding the Enterprise Integration Patterns book by Hohpe and Woolf and how to implement these patterns. We also understand the many good and bad practices from an architectural and development angle. When I see some of the many demos and webcasts from product vendors these days, I can't help but wonder whether the lightweight integration toolset isn't just going to encourage a cloud-hosted container holding a complete spaghetti mess of point-to-point integration, which we will call an ESB simply because it's inside a broker. I may be wrong, but I have a feeling that sometime in the next couple of years this is going to be a theme of discussion in the industry.

I think in the Microsoft space this is where, over the coming years, the combination of Service Bus (on-premise or cloud), Windows Azure BizTalk Services and/or BizTalk Server working as a team will help you to take this lightweight approach yet still have the power to deliver enterprise integration patterns in a manageable way.

 

What about the Complex Stuff?

Even though we will have lighter-weight tools, the reality is that in many cases customers' requirements are still going to be complex. We are still going to need heavy-duty integration capabilities such as queued messaging, complex transformation, complex content-based routing, dynamic discovery, rules-based integration, human workflow interaction and all of these great things. This means there is still a place for our enterprise-level traditional integration platforms, but hopefully we will find that we don't need to use them for everything. Some of the simpler requirements can be implemented elsewhere in the integration stack, and the complex stuff can be implemented by the heavier-duty tools. I have heard people refer many times to integration as taking a sledgehammer to crack a nut. One of the dangers is that we go the opposite way and have a claw hammer which is pretty good at cracking nuts but useless at knocking down walls. Nowadays we have a whole bag of different hammers; we just have to work out which one to use.

 

It's not all in one box

One of the things I've seen people struggle with in more recent lightweight integration projects is the more fragmented architecture which you tend to have. In traditional enterprise integration products you tend to have one big box with everything inside it, and it's all managed in one place. If you take a solution built in the lighter-weight fashion you may be using capabilities within a platform which aren't logically grouped together around your solution. You might have an API hosted in a Windows Azure Web Site which talks to a Windows Azure BizTalk Services bridge, which then talks via the BizTalk Adapter Service to your SAP back end, and you might be considering the new Azure Caching Service and a few other bits. From a management perspective, or even just an understanding-the-solution perspective, these things are isolated capabilities and you would have a number of places to check if there were problems. I think down the line there will be tools which bring a holistic, solution-based view on top of the platform capabilities, which will help here, but at present I see this fragmented architecture being a challenge for many companies. I guess this could be one of the trade-offs you need to make if you want a platform which can evolve rapidly, where you pick and choose which capabilities are the right ones for you and then just pay for those and not the whole platform.

 

Conclusion

This article was a bit of a brain dump of a few things I've had in my head for a while now; I hope they make some sense. I'm sure everyone will have their own opinion, but the only way I can summarize my thoughts on this topic is that although integration is changing like crazy along with the rest of the industry, at its heart integration is still really kind of the same. The core things we know to be true about how to do integration properly are still relevant. It's just that the tools we are using are changing a bit and we need to be better at adapting to change in the solutions we build. Perhaps the scale of solutions we build will also increase over time as a general trend.

 

 

 

Posted On Monday, March 10, 2014 12:17 PM | Comments (0) |

Sunday, March 9, 2014

Real-world WABS - Part 1

I've done a video with some thoughts on using Windows Azure BizTalk Services in the real world.

Check it out here:

I'd love to hear people's thoughts.

Posted On Sunday, March 9, 2014 2:04 PM | Comments (0) |

Friday, March 7, 2014

The Future of BizTalk?

This week BizTalk 360 held a really exceptional conference in London which had many great speakers. I was disappointed to have to withdraw as a speaker a few weeks ago and I was unable to attend the full conference, but there was some really good content in this event. One of the things I got thinking a lot about was based on the talks by Guru and Jon about the current BizTalk Services offering and what's coming. In his session Jon held up the old BizTalk diagram from a few years ago which is often used to explain how the inner workings of BizTalk work. Let's take a moment to reflect on that.

 

 

At the event, one of the common things to do was to consider how the new world of Azure-related offerings overlays onto the original features of BizTalk. This is my attempt to do that.

 

 

It's interesting to see that the more modular feature set offered by the various Azure offerings which exist now, or which are proposed, can overlay onto the original features to some degree. It is important to note, however, that this is not a 1:1 mapping. For example, while bridges are conceptually similar to the pipeline/port model of the BizTalk Server product, they can work in isolation without the rest of the product being there. This conceptually offers some great opportunities, and I think in the future this integration offering is going to be about an integration solution which can scale not only in terms of the number of messages it can process, but which can also offer a multi-tenanted capability that helps customers scale in terms of complexity too. Perhaps you start small with just some bridges offering the functionality you need, then you add Service Bus and are in an EAI and messaging world. Then you grow to add workflow and service mediation capabilities and then all of the added-value features. Perhaps you go the other way and use the rules engine on its own because that's all you need.

Conceptually these are exciting times ahead, and I'm hoping to see what opportunities these products could give us integration folks.

Posted On Friday, March 7, 2014 11:00 AM | Comments (0) |

Why aren’t there more BizTalk accelerators?

I've recently been talking to a few friends about the applications which various integration products claim to support integration with. It's quite an interesting thing to consider these days, and for some vendors it's a great way of looking really cool by having loads of application icons showing how many apps you can integrate with. In the BizTalk world a few years ago we used to be in a really good place, but I think nowadays BizTalk looks weak in this space when compared against some competitors. The thing I always wonder is, "Is this an actual weakness or just a perceived one?"

If you look at most modern applications today, they tend to support either a SOAP or a REST API, and more and more are going that way. If that's the case, then surely if you have a SOAP and a REST adapter you can connect to all of these applications? Well, in the real world you tend to find that yes, that is actually the case. You might not have dragged a pretty branded icon onto your designer, but by using the REST or SOAP adapters you can integrate with the vast majority of these applications.

I always used to think that BizTalk adapters fell into the following categories (with a few examples):

  • Protocol Adapter
    • SOAP
    • REST
    • FTP
    • File
    • MLLP
    • MSMQ
    • MQ-Series
  • Application Adapters
    • SharePoint
    • SAP
    • Oracle E-Business

 

What you have tended to find over the years is that the application adapters have become less common, as vendors tend to move their interfaces over to an API model with support for a protocol adapter. This is a good thing, as it means buying an adapter which doesn't come out of the box should be less common, and it's then just a case of configuring the adapter correctly and sending the right data.

At this point you're probably thinking, "Wasn't Mike supposed to be talking about accelerators?" Well, yes, that was the aim of this article. So if we are in a world now where most integration is done via APIs, and we have a few protocol adapters which already speak the languages of these APIs, then surely all we need is to create the appropriate message types and configure the adapters correctly. Actually, that's pretty much the case. So, getting to the point of this article, this means the key gap we have in terms of application connectivity isn't so much adapters as guidance and making things easier. What I'd like to see is the creation of more accelerators for BizTalk which speed up the development of integration with these key applications. For some reason people have never really built many community-driven accelerators, and I think there has been a bit of a perception that they should be reserved for hard problems like HL7. Why can't accelerators help me with simpler integration problems? If I want to integrate with Twitter and I can grab an accelerator which gives me all of the schemas for the right version of Twitter and tells me how to configure the adapter, that should be pretty awesome.

To get a bit more detailed, I envision a scenario where I decide to integrate with Dropbox, so in my Visual Studio solution I can just go to NuGet and download the BizTalk accelerator for Dropbox. This would automatically give me the schemas required to do the main actions with Dropbox, the binding samples with the configuration required, and a central place to go for guidance on the accelerator. If we followed this model we could easily create accelerators for:

  • Dropbox
  • Box
  • Amazon SNS
  • Amazon AWS
  • Windows Azure
  • Facebook
  • Twitter
  • Get Satisfaction
  • Google Apps
  • Linked In
  • Dynamics CRM
  • SharePoint

 

For most of the functionality these accelerators would just be versioned schemas, configuration and guidance. This would make them a great candidate to be developed outside of full BizTalk Server or BizTalk Services releases, and potentially developed and released by the community. I think this could offer a number of opportunities for vendor- or community-led initiatives.

There would of course be some gaps, but that is where we should try to get Microsoft to focus their efforts. Some of the gaps would be around protocol adapters such as AMQP and MQTT and these are the type of things we want Microsoft solving properly with full product support for important protocols. We would also want them to provide support for something like a polling REST adapter or similar for some usage scenarios.

OK, so here is the challenge for the next year. Let's, as a community, see if we can get the BizTalk accelerator ecosystem into an awesome place. Stick a comment on this post with ideas for any accelerators you would like to see, then let's get some people teamed up and starting some GitHub or CodePlex projects to create community accelerators. After this, let's get them on NuGet with a little guidance on how to use them and perhaps a demo video on YouTube or something.

If you do anything cool in this space please let me know as I'd gladly buy you a pint!

Posted On Friday, March 7, 2014 10:13 AM | Comments (1) |

Saturday, February 8, 2014

Introducing ROBODOC

We are currently on Ward 23, the children's cardiac ward at the Freeman Hospital in Newcastle, and have been playing with the Code Club kids' programming books. AJ has created his first computer programme.

The doctors work so hard here, so we decided to help by creating ROBODOC, the new NHS Super Computer Doctor.

To use ROBODOC you simply ask it for a diagnosis and it will give you advice to help you. Note: this is not real doctor's advice, so please don't use it to genuinely diagnose your health issues :-)

Anyway, if anyone wants to play with ROBODOC, you need Python and the code is below:


# ROBODOC - The NHS Super Computer Doctor
# AJ Stephenson
# Aged 8
# Written on: 2014-01-08


import random

# save the answers to use later
ans1 = "I think lots of medicine is required!"
ans2 = "I recommend a long session in the play room"
ans3 = "I blame the hospital food!"
ans4 = "I think it could be man flu?"
ans5 = "I prescribe TLC!"
ans6 = "Oh no, it's highly contagious!"
ans7 = "Hmmmm, it doesn't look good!"
ans8 = "Man up, there's nothing wrong with you"


loopVal = 1

while loopVal == 1:
    # open the game
    print("welcome to ROBODOC \n\tThe new NHS super computer doctor \n")

    # ask the user what they want a diagnosis for
    question = input("Ask me for a diagnosis.  \n Then press ENTER and I'll work my magic \n")

    print("shaking ...\n" * 4)

    # choose a random answer
    choice = random.randint(1, 8)

    if choice == 1:
        answer = ans1
    elif choice == 2:
        answer = ans2
    elif choice == 3:
        answer = ans3
    elif choice == 4:
        answer = ans4
    elif choice == 5:
        answer = ans5
    elif choice == 6:
        answer = ans6
    elif choice == 7:
        answer = ans7
    else:
        answer = ans8

    # print the answer
    print(answer)
    print("\n\n\n")

    # keep going until the user asks to stop
    again = input("Press ENTER for another diagnosis, or type q to finish.\n")
    if again.lower().startswith("q"):
        loopVal = 0


Wonder how many people we can get to like ROBODOC?

Posted On Saturday, February 8, 2014 6:53 AM | Comments (0) |

Saturday, February 1, 2014

BizTalk Administrator Application Quality Checklist

Over the years, one of the tools we have used to help guide improvement in the development of BizTalk applications and their deployment through test environments and into production is the BizTalk Administrator Application Quality Checklist. This is based on some work my friend Mo Uppal did around managing applications being migrated into the automated application releases he was implementing. The idea was that it could help you understand where a particular application delivery sits in terms of the things that make it easy to implement a good automated process.

On the back of this work with Mo, I thought there were some more general BizTalk opportunities here. The idea is to create a dashboard-style view of your applications to show which ones meet certain standards of behavior that will make the life of the BizTalk administrator easy. Often in an organization I find that where there are multiple BizTalk applications, they get delivered to different standards and give the deployment and administration teams a completely different set of challenges for each one. In the deployment area, as an example, you should aim to have your BizTalk applications fundamentally the same from a deployment perspective. They will always have some differences with application-specific functionality, but if they are 70%+ the same then this makes the admin team's life easy and supports an easy transition to an automated approach.

To help achieve this we came up with the BizTalk Administrator Application Quality Checklist (hmm, maybe that's a bit of a mouthful). There is lots of material in the community to help BizTalk administrators set up environments and infrastructure in a good way, but there isn't a lot of information giving BizTalk administrators a view or some guidance on what a good BizTalk application should look like, and I've come across scenarios where, even within the same development team or consultancy, the applications they build look completely different from a deployment and management perspective.

The fallout of this is that it gives the BizTalk administrator a bad experience, and the administrator gets hassle from various areas as a result of the problems which often follow poor deployment processes. The administrator often feels that the development team don't do enough to help, but perhaps can't articulate this to the development team, and it all results in friction.

To help with this, the BizTalk Administrator Application Quality Checklist aims to let the BizTalk administrator look at all of the BizTalk and integration deliverables they are given and then indicate whether they meet certain behaviors which will make things easy for the administrator. The result is that when there is conflict, the BizTalk administrator can pull out the checklist and show that the applications causing problems are not in a good place in terms of the checklist, so everyone knows how to fix them. In an ideal world the BizTalk administrator would go through the checklist with the development team right at the start of the project, and then they would implement things in the right way from day one.

In the checklist I break things down into a few areas:

  • Development – Development Machine
  • Development – Build Server
  • Handover from Development to Deployment Team
  • Deployment Team

I am a strong believer that you need to start in the development team by doing a few things in a certain way to set you off on the right path. As an example, if you don't do the absolute basics like having source control, a build server and a reasonably good configuration management solution, then your deployment/administration team will never have a positive process for deploying and managing BizTalk applications.

 

Real world Example

In the BizTalk Maturity Assessment I talked through a case study where we did some work to fix the problems a customer was having. Although we guided the high-level improvement initiative with the maturity assessment, the BizTalk Administrator Application Quality Checklist was one of the lower-level tools we used to help deal with some of the issues, and also to leave the customer's BizTalk administration team with a tool they could use in the future to maintain this level of improvement.

When we first engaged with the customer we reviewed their BizTalk application status against the checklist and found the results below.

 

 

Even though you can only see part of the checklist, you can see that the only thing the customer was doing well was keeping code in source control. When you look at all of the red, it's pretty obvious why they were having problems in this area.

As part of the improvement initiatives we worked with the checklist and got this back in order, and as new applications were developed we ensured they met the same standards so the level of quality was maintained. The picture below shows part of this improvement. You may also notice we used the checklist to manage some non-BizTalk integration components we also looked after.

 

 

As a side note, also notice that the column headers have comments in them describing what each header means, in case you are not sure.

 

Conclusion

There are a few links at the bottom of the article to let you have a look at the checklist. Hopefully, as a BizTalk administrator, this will give you a good tool to evaluate your existing estate and to get support for any improvements you may need to make, as well as the start of some guidance on what good BizTalk applications should look like.

I'd also love to hear any feedback on ideas for things to add to the checklist.

Download BizTalk Administrator Application Quality Checklist

Posted On Saturday, February 1, 2014 1:39 AM | Comments (0) |

Wednesday, January 29, 2014

Deployment Automation, DevOps and a bit of BizTalk

I should have done a shout-out about this ages ago, but if anyone is interested in deployment automation, and at a high level how BizTalk played into this for a large enterprise, check out the video below.

http://channel9.msdn.com/Events/ALM-Summit/ALM-Summit-3/Implementing-Successful-Continuous-Deployment-Practices-for-DevOps

Posted On Wednesday, January 29, 2014 4:09 AM | Comments (0) |

Friday, January 24, 2014

Flowing a Windows Identity through Azure Service Bus Queues

I've recently written a whitepaper about how you can flow the details of a Windows identity through Windows Azure Service Bus queues and then use it on premise to act as that user when accessing downstream resources.

The paper walks through setting up a complex scenario involving protocol translation and Kerberos multi-hop delegation: getting the message from a queue with the identity associated, flowing the identity through two WCF hops, and then impersonating the user when accessing a SQL database.

I'm hoping this complex scenario is explained nicely and clearly so that it helps people really understand which settings need to be configured where to implement this.

The paper is available at:

Also, special thanks to Brian Milburn for reviewing this for me.

I'd love to hear what people think.


Posted On Friday, January 24, 2014 12:05 PM | Comments (0) |

Keeping up with the Joneses in the Cloud

I've been meaning to blog about this topic for a while, and it's an area I've been wondering about as a future governance challenge for organizations using the cloud. Let's consider the problem from the pre-cloud days in an enterprise scenario.

The Challenge in the past

Imagine that I am writing a WCF service which runs on premise to integrate with a line-of-business application. In the code I am using .NET 4.0, developed with Visual Studio 2010, and I choose to reference a third-party SDK, which for argument's sake we will say is log4net. As the developer who wrote this WCF service in December 2011, using version 1.2.10 of log4net, which was the current version of the software at the time, I completed my development and deployment at the end of December and everything was successful; since then the service has been happily deployed on a Windows 2008 R2 server. The organization has had no functional requirement to make any changes to that component since its first release, and other than the usual server patches, which may be required and are fairly low risk, we should be pretty comfortable without having to release a new version of that component (unless there is a functional change required) until we start to get close to the support end dates for the key dependencies the component has. These are listed below:

  • Visual Studio 2010 support ends 14th July 2015
  • Windows Server 2008 R2 mainstream support ends 13th January 2015
  • .NET 4.0 mainstream support is in line with the operating system it's running on

Between then and now there have been some updates to log4net, with the following releases:

  • V1.2.11 = February 16th 2012
  • V1.2.12 = September 19th 2013
  • V1.2.13 = November 23rd 2013

Although these updates to log4net have been released, from a risk management perspective the component is running on an unchanged platform, so there is currently no reason for me to consider changing the component just yet, and I can be pretty confident about it just working for a while longer.

 

What about the Cloud though?

Well, this brings me on to the thing I have been wondering about. At the same time as the WCF service I have just mentioned was developed, we were also developing some components which touch the cloud. I believe the things I am about to say are relevant to any component that interacts with or depends on stuff in the cloud, but in this article I will talk through a specific example from a past project.

In the example here, in addition to the WCF service we also have another service which uses the Windows Azure Service Bus. This is an on-premise component which listens to the Windows Azure Service Bus Relay, receives messages and forwards them to other on-premise WCF services using the WCF Routing Service capability. This component was also developed and released back at the end of December 2011, and used v1.6.0 of the Windows Azure Service Bus SDK, which was the current one at that time.

In this particular project there has been no business functional reason to change this component either, and it is also sitting on an on-premise Windows 2008 R2 server listening for messages and getting its usual server patches applied following the enterprise standard.

The big difference between these projects, though, is that the on-premise WCF service has had completely static dependencies since the end of December 2011 and we can be highly confident that it will just work, whereas the Azure Service Bus listening component has dependencies on the Windows Azure Service Bus SDK and the Windows Azure Service Bus platform itself, which have changed quite a lot since the end of December 2011! At this point please consider that I am only using Service Bus as the example; I think this applies to all cloud dependencies which can change outside of your control.

If we take a look at the change list for the Windows Azure Service Bus SDK during this time we have a list like the below:

  • v1.6.0 = Dec 06th 2011
  • v1.7.0 = Jun 07th 2012
  • v1.8.0 = Oct 26th 2012
  • v2.0.0 = April 30th 2013
  • v2.1.0 = May 22nd 2013
  • v2.0.1 = April 30th 2013
  • v2.1.1 = July 31st 2013
  • v2.1.2 = July 31st 2013
  • v2.1.3 = Sept 11th 2013
  • v2.1.4 = Oct 19th 2013
  • v2.2.0 = Oct 22nd 2013
  • v2.2.1 = Oct 23rd 2013
  • v2.2.1.1 = Nov 6th 2013

 

In the 2 years since we released this component there have been 13 releases of the Windows Azure Service Bus SDK, and this SDK has a direct dependency on the Azure Service Bus platform, whose rate of change I have no control over. From a risk management perspective this puts me in a difficult position where I need to think about how I might protect myself from any changes which may cause me problems. Fortunately for us, and hats off to the Service Bus team, our component is still running v1.6.0 in production at present and it still works absolutely fine. However, I do know that if we were to upgrade to the newer versions we would need to make some changes to the WCF configuration, as some of the token provider configuration has changed a little. While that is specific to this example, you get the point that change does happen, and change means the introduction of some risk, even if it's small.

If you consider other projects where your usage of the cloud is much higher perhaps with more components touching or hosted in the cloud then your risk profile is going to be higher and more spread out.

Ok so what does this mean?

In the real world today most companies are already using the cloud or are seriously considering it. In my opinion, based on the conversations I have had with people in the industry, I don't think this particular challenge is something that people are really thinking about yet. Typically, enterprise-level organizations are slow to move and only change what needs to be changed (often because the business wants it changed, not because IT does), so that leaves an obvious conflict where you have one thing changing quite fast and something else that trails behind.

My gut feeling is that with cloud governance being immature at present things like this will result in cases where solutions could become broken because organizations aren't keeping their applications up to date with the platform, particularly when those applications are in maintenance mode.

This also leaves the challenge of wanting to deploy non-functional changes to components. In many organizations I have seen, the common response to a desire to make a non-functional change to a component that is not broken, has no business change required and has no key performance benefit is to request that a business case is put forward for the change. The organization often doesn't understand the dependencies between its systems and thinks that if one small part changes then the entire system needs retesting, so a small change can become a huge retesting effort and a large cost. That is often the reason why small technical changes aren't made in the enterprise as soon as they should be, or are rolled up into a release driven by functional change requirements.

What can we do about this?

The first and most important thing is that if you are going to adopt the cloud then you need to accept this is a challenge you will face, and you may need to up your game to be able to deal with it. When it comes down to it, it's really a case of being able to manage the risk and having a good application lifecycle management process to help you deploy new versions of components in a lightweight and inexpensive way. Some practical tips that I think make a good guide are discussed below.

Architecture

In the architecture area it's important to have a good understanding of your solution dependencies and to actively monitor your cloud platform provider to understand what things in their pipeline may have an effect on you. Arranging for R&D to be done with beta versions of SDKs and new features can help you mitigate this risk, and also find new opportunities which you can benefit from.

Having a standard for how far behind the latest version of components you will allow your applications to be is a good thing, and if you don't have one you should, but you should also consider that this may need to change for the cloud and its more frequent release cadence. Often I hear architects define their standard as N+1 or N+2 (whether they actually adhere to it or not), but in the above example, after 2 years with log4net we were 3 revisions behind the latest build, which is probably quite safe, whereas with the Azure Service Bus SDK we were 1 major and 2 minor releases behind the latest. Without knowing more detail on what's changed it's a bit more difficult to guess how risky that might actually be. I would assume that being 1 major release behind is a small amount of risk and a few minor releases behind is probably OK, but it really comes down to how well you understand the likelihood of breaking changes from the vendor and what their major and minor changes actually mean. Also consider that this is just the SDK that has changed; for a platform-as-a-service (PaaS) offering there may have been an unknown level of change behind the scenes on the PaaS platform that we are not aware of. Certainly at a glance, being 13 releases behind the latest seems more risky than 3. The key thing is that you need to understand the technical detail of what's changing, and that's the job of the technical architects combined with, probably, some of your support team. A small sketch of how such a standard could be checked follows.
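To make the idea of a "how far behind" standard a little more concrete, here is a minimal sketch that compares an installed dependency against its release history and flags anything that has drifted past an agreed threshold. The release lists are simply the illustrative ones from this article and the threshold is an assumption; adjust both to your own packages and policy.

# Sketch: flag dependencies that have drifted further behind the latest release
# than the agreed policy allows. The release histories below are illustrative,
# taken from the lists earlier in this article; the threshold is an assumption.
RELEASES = {
    "log4net": ["1.2.10", "1.2.11", "1.2.12", "1.2.13"],
    "WindowsAzure.ServiceBus": ["1.6.0", "1.7.0", "1.8.0", "2.0.0", "2.0.1", "2.1.0",
                                "2.1.1", "2.1.2", "2.1.3", "2.1.4", "2.2.0", "2.2.1", "2.2.1.1"],
}
INSTALLED = {"log4net": "1.2.10", "WindowsAzure.ServiceBus": "1.6.0"}
MAX_RELEASES_BEHIND = 2  # e.g. an "N+2" style policy

for package, history in RELEASES.items():
    behind = len(history) - 1 - history.index(INSTALLED[package])
    status = "OK" if behind <= MAX_RELEASES_BEHIND else "REVIEW"
    print(f"{package}: {behind} release(s) behind latest ({history[-1]}) -> {status}")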

Development

In development this is where some organizations really need to up their game. If we develop solutions in a componentized and stable way, where we can replace or update one component in a solution without making the whole solution fragile, then we can handle this quite well. From a development perspective, one of the secrets to this is how you do your testing. I'm a huge fan of behavior-driven development (BDD) and test-driven development (TDD) approaches, and if you are developing a component that is well unit tested and has good BDD tests, you should be able to make even quite large changes to it and still be very confident that when you take it outside of the development environment, as long as it's deployed and configured correctly, it will just work. Although this is the place where I like my teams to be, it still surprises me how many organizations I come across where a deployment to test is made and they have no idea whether the solution is going to work or not. The key point here is that good development testing is the biggest way to mitigate risk.

I also find that integration components built using a BDD style often come with a better understanding of the things that depend on them. This happens when the tests are written from the perspective of the things that will use the component and validate the behavior they expect. This helps developers understand these dependencies as well as test them in the development arena through testing against stubs.

In addition to testing, development teams need to ensure they are using continuous integration approaches. Building your codebase every day and executing some tests is a good way to help ensure it's still working and supportable. The last thing you want to do is get the latest code, having not used it for a while, and find out it just doesn't build, and when you fix that, half of your tests don't work. In the example above we mitigated a lot of risk by having the continuous integration server execute some tests which flexed the component by making calls via Windows Azure Service Bus. If a change was introduced which broke something we hadn't seen, this would be one of the first places to detect it.
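As an illustration of the kind of test a build server could run, the sketch below sends a message through a Service Bus queue and asserts it can be received again. It is only a sketch: it assumes the current azure-servicebus Python package rather than the 2014-era SDKs discussed above, and the connection string and queue name are placeholders supplied by the build environment.

import os
import uuid

from azure.servicebus import ServiceBusClient, ServiceBusMessage

# Placeholders: the connection string and queue are supplied by the CI server.
CONNECTION_STRING = os.environ["SERVICEBUS_CONNECTION_STRING"]
QUEUE_NAME = "ci-smoke-test"


def test_service_bus_round_trip():
    # Send a uniquely-correlated message and check it comes back off the queue.
    correlation_id = str(uuid.uuid4())
    matches = []
    with ServiceBusClient.from_connection_string(CONNECTION_STRING) as client:
        with client.get_queue_sender(QUEUE_NAME) as sender:
            sender.send_messages(ServiceBusMessage("ping", correlation_id=correlation_id))

        with client.get_queue_receiver(QUEUE_NAME, max_wait_time=30) as receiver:
            for message in receiver:
                if message.correlation_id == correlation_id:
                    matches.append(message)
                receiver.complete_message(message)
                if matches:
                    break
    assert matches, "message sent via Service Bus was not received back"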

Testing

In the testing area we need to take a risk-based approach to testing. In large enterprises it's often the case that a small change results in a test team believing they need to retest huge portions of the system. Taking the time to understand the change and the associated dependencies can help you identify the areas which do need testing and those which need only a small amount of testing, or none at all. In a solid, componentized solution, with a change like this I would hope we only need basic regression testing to be confident about changes related to issues like those I'm talking about in this article.

In addition to a risk-based approach, automation of testing is also a good thing to have. If you can click a button and have a whole bank of automated tests executed, flexing parts of your solutions, then this will really help you deal with these things. The overall key to testing for these types of change is that we want to be in a position where we can do a small amount of relatively inexpensive testing and get the change shipped.

Release

In the release area, having a release and deployment process which is simple yet effective and offers a strong rollback approach is the best way to be effective. If you are able to reliably deploy your components without huge effort then you have mitigated a lot of risk and saved potentially a lot of cost, which are two of the key things related to deployment that make enterprise organizations reluctant to do these types of changes.

One of the best examples of what good looks like in this area involved the use of the CA LISA Release Automation product (formerly Nolio), which was implemented with an organization I have worked with, where we ended up with a superb deployment story. In addition to the physical deployment process we also had a governance model to help us know what version is deployed where, when and by whom, and to be able to schedule deployments or rollbacks. This was implemented by a good friend of mine, Mo Uppal, who I think is a great thought leader in this space (he unfortunately doesn't blog about his experiences, but has presented at the ALM Summit in the past, which went down very well), and by Sharma Kirloskar, who was a key member of our automation team.

 

Business

If we accept that we are more likely to need to patch components outside of functional releases, we now need to change the process for interacting with the business so that it can accept and process the fact that IT needs to make these changes. To some degree you may consider this a trade-off for the benefits you get from using the cloud, but we just need to help make the enterprise capable of making changes, and doing them in a way that doesn't make the business really worried because IT wants to change something.

If you take the Azure Service Bus SDK example from above, from a technical perspective I have a very high level of confidence in updating the component with the latest version of the Service Bus SDK and following our development process: build it locally and let the build run the tests, then check it in and let the build server do the same to produce our release package. The output would be a package which we could deploy with confidence in 5 minutes, and if there was a problem we could roll it back in 5 minutes. Given the nature of the change, if it works on the build server with our development tests it's very unlikely the system test team would find any issues with it, but they could run their automated tests anyway to keep everyone happy. In theory we could do this from start to finish in very little time (a few hours). The challenge, though, is that in enterprise IT senior managers and business stakeholders are typically used to large, complex, fragile systems and general pain, so they have a very pessimistic attitude to risk, even when the organization may have small pockets of development which really do have a strong and reliable process. This makes getting the business and management to approve these trivial changes a non-trivial task.

Infrastructure

So far I have talked about this challenge in terms of a component with a software dependency in the PaaS space, but a similar challenge is probably true in the IaaS space too. With applications changing less frequently in the enterprise than the rate at which cloud providers change, IaaS is one of those areas which your organization might consider safer, where the rate of change will be slower. This may be the case, but I would bet there is still a need for the enterprise to be capable of speeding up. Let's take the average enterprise: I would bet most of them still have some Windows 2003 servers somewhere in their data center. In the real world, enterprise data centers run servers, operating systems and applications that are sometimes outside of mainstream support and sometimes even outside of extended support. The attitude is often "if it's not broken, why change it". In the IaaS space you need to think about the fact that you could be forced to change it, or at least discouraged from not changing it. Let's take the example where an average organization moves a large portion of its testing infrastructure to Windows Azure (or Amazon, it doesn't really matter). Going up there today, the gallery offers you the chance to build servers with Windows 2008 R2 SP1 onwards. So you might already have a problem that your servers are Windows 2008 and you need to upgrade. This difference between your production and test kit introduces some risk that you need to evaluate and manage. But say you're on Windows 2008 R2 SP1 and you set up your whole environment: what happens in a year's time when Windows 2008 R2 SP1 ends mainstream support? Will you still be able to get this image from the gallery to create a new server to add to your test environment? Will there be a new service pack? If there is, well, that introduces the risk factor again.

Maybe the answer to that is to manage your own images and upload them yourself, giving you more control of the virtual machine and what's on it. This might work, but again it could introduce some risk. If it were on premise you would be in control of the version of VMware or Hyper-V you are running, and you would know that your guest operating system was compatible, but in the cloud your provider will be continually patching and updating the underlying virtual machine host platform, and maybe it will not be compatible or supported with your guest.

The key thing is that infrastructure is in the same boat, and while infrastructure departments are typically better (in my opinion) at understanding their system dependencies, and are also used to managing the rollout of patches to servers, the cloud takes this idea of keeping up to date to a whole new level.

Conclusion

In conclusion, I hope this article doesn't come across as a doom and gloom story, as it's not intended to be. What I want to articulate is that, for the enterprise, the cloud brings in a new way of thinking. While the organization may have significant short-term success with the cloud, you need to do some thinking about how you will manage and leverage this investment in the long term. This new way of thinking means there are also new challenges you will need to handle, otherwise you could feel some pain in the future.

If I could give one piece of advice on the most important thing organizations need to do to embrace the cloud for the long term, it is to accept that the cloud is constantly changing, and you will need to invest in your people, giving them ongoing support and training to help them keep up to speed with these changes and get the benefits in the long term. Let's face it, the enterprise is typically not great at investing in developer and IT pro training, but this needs to change from a once-a-year, one-week course to an ongoing thing, because the cloud is changing so regularly. You should probably also consider mentoring from experts with strong cloud experience.

As a shameless plug at the end of my article, I would say that to address the training gap the best tip is to buy subscriptions to Pluralsight for your architecture, development, integration and support teams. This will give them access to training on many of the technologies associated with the cloud, and will let your staff train on a continuous basis and keep pace with the rapidly changing cloud space, rather than being sent away for a week's training at a significantly higher cost. To declare my bias: I have authored courses for Pluralsight, but I still use their training a lot.

Posted On Friday, January 24, 2014 9:56 AM | Comments (1) |

Thursday, January 23, 2014

RabbitMQ for .net Developers Part 2

My recent course, RabbitMQ for .NET Developers Part 2, has just gone live on Pluralsight:

http://pluralsight.com/training/Courses/TableOfContents/rabbitmq-dotnet-developers-pt2

Yay!

Posted On Thursday, January 23, 2014 4:59 AM | Comments (0) |

Architectural Thoughts on JSON from a BizTalk Perspective

I wrote this article a while back and Saravana has been kind enough to publish it as a whitepaper through BizTalk 360's whitepaper gallery.

It's a discussion around JSON and BizTalk, and some of the things BizTalk people need to think about as the use of JSON with BizTalk increases.


Also a big thanks to my friends Richard Seroter, Steef-Jan Wiggers and Kent Weare for reviewing it.

It would be great to hear what people think.

Posted On Thursday, January 23, 2014 3:26 AM | Comments (0) |

Considerations for Logging in Hybrid Integration Solutions

As many of my readers will know, I've been doing a lot of work around hybrid integration solutions over the last few years involving Windows Azure Service Bus and various other technologies. One of the challenges which comes up in any architecture is how you manage and implement logging. If you consider that we are now often building globally distributed applications across data centers which we own and data centers which we rent from cloud providers, this logging challenge is now even harder.

With this architecture in mind, and all of the possibilities the cloud gives us, let's consider and play around with the idea of a globally capable logging solution. My first thought is that we have a few requirements:

  1. I would like to be able to publish audit events, which need to be reliably delivered
  2. I would like to publish logging events, where I can live with the occasional loss of a message
  3. I would like to be able to configure the logging in my applications so that I can control how much is logged centrally (a minimal client-side sketch of this follows the list)
  4. I would like to be able to keep some logging just in the client application
  5. I want to offer an interoperable approach so that non-.NET applications would be able to log messages.
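To make requirements 3 and 4 a bit more tangible, here is a minimal sketch of a client-side logging handler: everything is logged locally, and only records at or above a configurable threshold are forwarded to the central system. The publish_to_central_log function is a hypothetical placeholder for whatever actually sends the event (for example the Service Bus topic discussed later in this article).

import logging


def publish_to_central_log(payload: dict) -> None:
    # Hypothetical placeholder - replace with a real sender (REST, AMQP, SDK, ...).
    print("would publish:", payload)


class CentralLogHandler(logging.Handler):
    """Forwards records at or above 'level' to the central logging system (requirement 3)."""

    def __init__(self, app_name: str, level=logging.WARNING):
        super().__init__(level)
        self.app_name = app_name

    def emit(self, record: logging.LogRecord) -> None:
        publish_to_central_log({
            "application": self.app_name,
            "level": record.levelname,
            "message": record.getMessage(),
        })


logger = logging.getLogger("acme.orders")
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler())                            # everything stays local (requirement 4)
logger.addHandler(CentralLogHandler("OrderService", logging.ERROR))   # only errors go central (requirement 3)

logger.debug("local detail only")
logger.error("this one is forwarded to the central logging system")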

 

Let's get started

In this imaginary scenario, let's consider a solution where Acme is the main business, and their business partner calls an API Acme hosts in the cloud which uses Windows Azure Service Bus to bridge to their on-premise BizTalk instance. BizTalk gets the message, transforms it, and calls into the on-premise line-of-business application, which processes an order and confirms to BizTalk that it's complete.

The solution will look something like this.

When we consider the logging requirements for this solution it will now look something like this.

You can see that the abstracted logging system means that all applications could have the ability to push logging information up to it. This means that we want a logging system which exposes an interoperable way for applications to send it messages, and which is hosted somewhere it can be reached by Acme and their partners.

 

How could we build the Logging/Auditing System?

First we need a store for the logging and auditing events which is capable of holding a lot of data. A NoSQL type of database could be quite a good choice here: the data doesn't really need to be relational and it has a pretty simple structure. Since we want to accept messages from inside and outside of the organization we can host this in the cloud. Let's say for argument's sake we would like to use a PaaS offering, so let's choose an Azure Table Storage account. It's pretty cheap to store the data here, which is great.

Next we need to think about how to get messages into this data store. We could just use a key and give applications access to the table directly via the REST API. That's easily doable, but it would make the rest of this article a bit boring, and we would lose some control over the information we would like the client to send. Instead we will sit Windows Azure Service Bus in front of the table store. The clients will send messages to Windows Azure Service Bus and we will have something which can then process the messages from there into the Azure table.
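From a client's point of view, sending a log event to the front-end Service Bus topic might look something like the sketch below. It assumes the azure-servicebus Python package (rather than the .NET SDKs this article is really about), and the connection string and topic name are placeholders; the application properties stamped on the message are what the subscription rules described below could filter on.

import json
import os
from datetime import datetime, timezone

from azure.servicebus import ServiceBusClient, ServiceBusMessage

# Placeholders: connection string and topic name for the logging namespace.
CONNECTION_STRING = os.environ["LOGGING_SB_CONNECTION_STRING"]
TOPIC_NAME = "logging"


def publish_log_event(application: str, level: str, message: str) -> None:
    event = {
        "application": application,
        "level": level,
        "message": message,
        "timestampUtc": datetime.now(timezone.utc).isoformat(),
    }
    sb_message = ServiceBusMessage(
        json.dumps(event),
        content_type="application/json",
        # Properties that topic subscriptions can filter and route on.
        application_properties={"level": level, "application": application},
    )
    with ServiceBusClient.from_connection_string(CONNECTION_STRING) as client:
        with client.get_topic_sender(TOPIC_NAME) as sender:
            sender.send_messages(sb_message)


publish_log_event("PartnerApi", "Error", "Order 1234 failed schema validation")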

The benefits of putting Windows Azure Service Bus in front of the table include:

  1. We will be able to offer a more interoperable interface for clients, supporting REST, AMQP, NetTcp and WCF
  2. We will be able to use a topic to provide a level of filtering of messages. This could reduce the amount of data in the central store and could allow us to turn up and down what we accept centrally
  3. We can filter and route logging information to different data stores. For example, we could send audit messages to one table and debug messages to another
  4. We can provide different security access for different applications. For example, each application could submit to a different queue
  5. Azure Service Bus will allow us to filter the different types of log messages to subscriptions which we could process at different speeds. For example, we could process error and audit events from a priority queue with lots of processors to get the events into the database as quickly as possible, while debug events could be processed much more slowly

Now that we have the log messages in Windows Azure Service Bus, the next question is how to get them to a permanent data store. A Windows Azure Worker Role would be a good choice of host for a background queue processing component. This worker role could poll a number of queues or subscriptions and then save messages into the Azure table store described above; a rough sketch of such a component follows below. We may consider whether we store all messages in one table or store messages in different tables, such as an audit messages table and a logging messages table. Either way the Azure Table Storage account can be geo-replicated with just a tick box, giving us the benefit of the data being backed up to another data center.
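To make this a little more concrete, below is a minimal sketch (not a production implementation) of what the queue-processing code inside the worker role might look like, using the Service Bus and Azure Storage SDKs. The topic, subscription, table and connection string names are assumptions for illustration, and the message body is assumed to be a string as published by the appender described later.

using System;
using Microsoft.ServiceBus.Messaging;          // Azure Service Bus SDK
using Microsoft.WindowsAzure.Storage;          // Azure Storage client library
using Microsoft.WindowsAzure.Storage.Table;

public class LogEventProcessor
{
    public void Run()
    {
        // Table which will hold the processed log events
        var table = CloudStorageAccount.Parse("<storage-connection-string>")
            .CreateCloudTableClient()
            .GetTableReference("LogEvents");
        table.CreateIfNotExists();

        // Subscription the worker role is draining
        var client = SubscriptionClient.CreateFromConnectionString(
            "<service-bus-connection-string>", "LoggingTopic", "AllEvents");

        while (true)
        {
            BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(30));
            if (message == null) continue;   // nothing available right now

            try
            {
                // Partition by application name, one row per event
                var entity = new DynamicTableEntity(
                    (string)message.Properties["ApplicationName"], Guid.NewGuid().ToString());
                entity.Properties["Level"] = new EntityProperty((string)message.Properties["Level"]);
                entity.Properties["Body"] = new EntityProperty(message.GetBody<string>());

                table.Execute(TableOperation.Insert(entity));
                message.Complete();
            }
            catch (Exception)
            {
                message.Abandon();   // leave the message to be retried or dead-lettered
            }
        }
    }
}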

Once the core logging capability was in place we could then consider how we would manage and use the information we capture. There are really two sides to the information:

  • Operations Data
  • Analysis Data

In the space of operations we would be considering what kind of information could be used for troubleshooting and reactively responding to support queries. We could also be looking for information which could proactively identify operational issues, and Azure Notification Hubs would be a great way to get alerts out to the people who need to be aware. Building a custom dashboard hosted in an Azure Website or Web Role would be a good way to give your operators access to this data; operators would be able to correlate log messages across applications and troubleshoot the flow of a specific transaction across systems. A rough example of the kind of query such a dashboard might run is shown below.
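The sketch below shows how a dashboard might query the central table, assuming the processing component stored the application name as the PartitionKey and the Level as a column (as in the earlier sketch). The table name, application name and property names are placeholders.

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class OperationsQueries
{
    public void ShowErrorsForApplication()
    {
        var table = CloudStorageAccount.Parse("<storage-connection-string>")
            .CreateCloudTableClient()
            .GetTableReference("LogEvents");

        // All ERROR events logged by a specific application
        string filter = TableQuery.CombineFilters(
            TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "Acme.OrderApi"),
            TableOperators.And,
            TableQuery.GenerateFilterCondition("Level", QueryComparisons.Equal, "ERROR"));

        var query = new TableQuery<DynamicTableEntity>().Where(filter);

        foreach (DynamicTableEntity entity in table.ExecuteQuery(query))
        {
            Console.WriteLine(entity.Properties["Body"].StringValue);
        }
    }
}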

The next obvious capability would be around analysis, and in particular how I could gain useful insights from this logging information. There are many evolving cloud-based business intelligence tools, and in a logging system like this you could potentially build up a lot of data over time. One of the big benefits of the cloud is that you have the expandable compute power to burst the analysis of large amounts of data, so you would be able to ask deep, probing questions of your logging and auditing data across your applications, potentially across your global enterprise and also your partners.

When we consider all of these capabilities, the solution for the logging system might look something like the below:

 

Another benefit of this model of hosting in the cloud is that we could have multiple Service Bus namespaces in different Azure data centers and let our applications or partners log to the one that is most convenient for them, and by using a partitioned queue they would have a resilient queue to send to (a sketch of creating one is below).
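If you wanted to go down the partitioned entity route, the following is a hedged sketch of creating a partitioned queue with the Service Bus SDK; the connection string and queue name are placeholders.

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

public class CreatePartitionedQueue
{
    public void Create()
    {
        var manager = NamespaceManager.CreateFromConnectionString("<service-bus-connection-string>");

        var description = new QueueDescription("loggingqueue")
        {
            EnablePartitioning = true   // spread the queue across multiple partitions/message stores
        };

        if (!manager.QueueExists(description.Path))
            manager.CreateQueue(description);
    }
}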

 

Integrating the Applications into the Logging System

Once we have the conceptual logging system in place, with its capabilities to give us great insight into our hybrid solutions on a global scale, we now need to consider how we might integrate into it. As I mentioned earlier in the article, one of the benefits of using Azure Service Bus is the ability to expose a number of different standards-based interfaces in addition to some optimized ones for .net. With Azure Service Bus we will have AMQP and REST interfaces which should make it easy to interoperate with most applications; they would just need to send a correctly formatted message along with any appropriate headers.

For .net applications we can then integrate using the Service Bus SDK.

Cloud API & .net Applications

As I've just mentioned, for a .net application you could integrate using the Service Bus SDK, but many organizations use logging components like log4net in their custom-developed solutions, so I wrote another article where I experimented with the idea of writing a log4net appender capable of publishing log events to Windows Azure Service Bus. If you're interested in the detail of that then please check out the article on my blog called Log4net Service Bus Appender.

Using this appender gives you an easy way to configure which log events you would like your .net application to publish to the centralized logging system. Perhaps you want to publish all events, or perhaps just events of a specific logging level. It also encapsulates the details of the logging behind the normal log4net interface and has the benefit of being able to optionally publish in an asynchronous fashion.

If you weren't using the log4net interface you could simply publish your own message to Azure Service Bus, and as long as it meets the serializable contract and contains the expected properties it would be processed fine.
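As a hedged sketch of what that might look like for a .net application that isn't using log4net, the code below builds a message by hand and sets the same kind of context properties the appender would set. The JSON body, entity name and property names are assumptions based on the contract described in the appender article.

using Microsoft.ServiceBus.Messaging;

public class ManualLogPublisher
{
    public void PublishInfoEvent()
    {
        var factory = MessagingFactory.CreateFromConnectionString("<service-bus-connection-string>");
        var sender = factory.CreateMessageSender("LoggingTopic");

        // Body is a JSON representation of the log event
        string json = "{\"RenderedMessage\":\"Order 1234 accepted\",\"Level\":\"INFO\"}";

        using (var message = new BrokeredMessage(json))
        {
            // Context properties which the topic subscriptions can filter on
            message.Properties["ApplicationName"] = "Acme.OrderApi";
            message.Properties["Level"] = "INFO";
            message.Properties["EventType"] = "LoggingEvent";

            sender.Send(message);
        }

        factory.Close();
    }
}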

BizTalk

In the BizTalk part of this demo solution you have a couple of choices when you want to publish logging events to the central system. Many organizations who use BizTalk also use log4net in their solutions, so it would be possible to implement this in the same way you would for a .net solution. Another option would be to publish messages to the message box with information that could be mapped to the audit event data type and then publish them to Azure Service Bus using the SB-Messaging adapter which comes with BizTalk.

On-Premise Application

The on-premise line of business (LOB) application is likely to be the most difficult one to integrate into this solution. It really depends upon the capability to extend the application. Some applications have extension points in their workflow processes where you could perhaps make a REST call out to Azure Service Bus to add information, as in the sketch below. Alternatively, if you had less extensibility, you could just use BizTalk to log before and after the interaction with the LOB application. This would mean losing some insight into what is happening inside the LOB application, but you at least have options depending upon its capabilities.
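For example, if the LOB extension point can run a little .net code and make an outbound HTTPS call, something along the lines of the sketch below would push a log event to the Service Bus REST endpoint. The namespace, entity name and SAS token are placeholders, generating the SharedAccessSignature is omitted, and the quoting of string property values is an assumption about the REST conventions rather than something taken from the original article.

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public class LobLoggingExtension
{
    public async Task SendLogEventAsync()
    {
        using (var client = new HttpClient())
        {
            // Service Bus REST endpoint for sending to a queue or topic: POST {entity}/messages
            var uri = "https://acme-logging.servicebus.windows.net/loggingtopic/messages";

            var request = new HttpRequestMessage(HttpMethod.Post, uri);
            request.Headers.TryAddWithoutValidation(
                "Authorization", "SharedAccessSignature sr=...&sig=...&se=...&skn=...");

            // Custom properties used for routing are sent as plain HTTP headers
            request.Headers.TryAddWithoutValidation("ApplicationName", "\"Acme.Erp\"");
            request.Headers.TryAddWithoutValidation("Level", "\"INFO\"");

            request.Content = new StringContent(
                "{\"RenderedMessage\":\"Order exported from LOB system\",\"Level\":\"INFO\"}",
                Encoding.UTF8, "application/json");

            var response = await client.SendAsync(request);
            response.EnsureSuccessStatusCode();
        }
    }
}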

What about devices?

If your solution includes devices there is no reason you couldn't send background REST calls to something like Azure Mobile Services, which could then forward the information to Azure Service Bus for you. Alternatively you might already have used Azure Mobile Services as the application platform for your mobile development; in that case things get easier again and you could send messages from your Mobile Services API to the logging system. The below picture shows what this may look like:

 

What about costs?

One of the cool things about this kind of solution is the pay-per-use cost model. Using the configuration knobs in the log4net and BizTalk configuration, your applications could be quite specific about what data you want to send to the centralized logging system based on the Level property. Even if you were streaming quite a lot of data you would still be able to keep control of the costs through the filter rules; you might use the subscription rules to only accept messages from certain applications or certain levels.

If you were to build the full solution, including the dashboard and the various reporting options, I can imagine you would need to think a lot more about the cost aspect. But one of the big benefits of this approach is that with the log4net appender I mentioned earlier, a table storage account and a worker role component (which wouldn't be difficult to implement), I bet you could very easily get a prototype of this solution working. I would also expect that from this quick prototype you would get a lot of interest within your organization in the ability to get this holistic view of cross-system and cross-business-unit instrumentation. And if you decided not to take it any further, you would just remove the log4net configuration from your applications and it's gone.

 

What about Application Insights?

If you follow Visual Studio Online you will have seen the developing Application Insights capability which is currently in preview, and you may be wondering if this is the same thing. Based on what seems to be available in Application Insights at present, I don't view these as the same thing, although there is some overlap. In Application Insights you use agents to push instrumentation-based information to your Visual Studio Online instance. This is really useful information about the performance of your application, but it comes from the lower-level end: devices, servers and some data about your application. In this type of solution I'm thinking about a slightly different angle based on the following:

  1. I am thinking specifically about processes and transactions that span across applications. I want to bring this information about the process execution together to gain insights into it
  2. I am more interested in the human/business readable type of information such as "this key bit of logic did this" rather than "this is the level of the CPU usage"

I think there is a small degree of overlap between how support operators could use a centralized logging capability and what Application Insights will eventually offer, but it should be a complementary overlap which allows them to better support cross-business hybrid solutions, which, let's face it, is a difficult thing to do.

 

Taking this approach further

In my fictitious example, if I have many applications and my integration platform all capable of logging audit and diagnostic messages into my central cloud logging store, then I can begin to get a good operational overview of how my applications are working. Taking that to the next level, I would be able to take advantage of the data processing capabilities in the cloud to get some interesting insights into my application and business process data. Earlier I alluded to the options for using SQL Reporting and HDInsight to analyze some of the data, and if you went further and built a nice interface and some good reporting you wouldn't be far from a Business Activity Monitoring solution. You could also build a visualization of how the process and logging has flowed across applications, something like the BizTalk 360 Graphical Message Viewer but at a level higher than just BizTalk.

Using the topics in Windows Azure Service Bus as described earlier for some of the inbound logging events you could even create some rules to push out business process notifications via Windows Azure Notification Hubs and start to think about complex event processing opportunities.

 

Conclusion

In conclusion, as we have developed more hybrid integration solutions the challenge of how we support them has become greater. It now involves support teams from our organization and other business units around the world, and in many cases also support teams from our partners. This complex architecture makes it difficult for people to understand what is going on, and a centralized logging capability at a global scale becomes an obvious requirement. If we think through what we need, as I have tried to do above, then a logging capability also becomes a great candidate for a Business Activity Monitoring (BAM) solution. At the recent BizTalk Summit Microsoft announced their plans for a BAM offering on Azure at some future point, and that is something which excites me quite a lot. When the BAM offering does come out from Microsoft, the key things we need it to have are the following:

  1. It needs to be simple for all applications to plug into, not just BizTalk or BizTalk Services
  2. We need to think about how we can have flexible processes where information can be brought together but which also support the process changing. We don't want the business activity model tightly coupled to the implementation; this is where I hope the Hadoop/business intelligence capabilities allow us to be much more flexible
  3. We need to be able to store huge amounts of data and get data from all over the world

I have high hopes for the BAM module when it comes out but in the meantime hopefully this article provides some food for thought as to what you could do if you wanted to create a centralized logging system with the capabilities available on Azure today.

 

Posted On Thursday, January 23, 2014 3:20 AM | Comments (1) |

Thursday, January 2, 2014

Log4net Azure Service Bus Appender

In this post I wanted to explore the options around building a Windows Azure Service Bus appender for log4net (sample download). This would allow me to publish logging events outside of my application to an internet-scale messaging system, which then provides opportunities to centrally store or process messages from all of my applications. This article is one of a few I will write while playing with the architectural idea of centralized logging based on Azure Service Bus, but in this case specifically focusing on a log4net appender that can be used by custom applications.

At this point I am going to assume you have some familiarity with log4net and Windows Azure Service Bus but only a basic level should be required.

To begin with I want to write an appender which I will plug into log4net. This gives me a custom widget which I can use to process logging events from log4net and where I can decide what I want to do with them. This is the normal log4net approach: when you configure a logger through your .net configuration file you would normally specify a set of appenders which will process any log events your application creates. Log4net comes with a set of appenders which you can configure out of the box, but I can also write my own by deriving from the AppenderSkeleton class in log4net. This means I can declare some properties to receive configuration data for my appender, override a few methods and have quite an easy way to implement my appender.
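As a rough outline (a sketch of the shape rather than the published sample itself), an AppenderSkeleton-derived appender looks something like the below: public properties receive the values from the XML configuration and Append is called for every log event that makes it through the filters.

using log4net.Appender;
using log4net.Core;

public class ServiceBusAppenderSketch : AppenderSkeleton
{
    // Populated by log4net from the appender's XML configuration elements
    public string ConnectionStringKey { get; set; }
    public string MessagingEntity { get; set; }
    public string ApplicationName { get; set; }
    public string EventType { get; set; }
    public string CorrelationIdPropertyName { get; set; }
    public bool Synchronous { get; set; }

    protected override void Append(LoggingEvent loggingEvent)
    {
        // Build the event payload and publish it to Azure Service Bus here;
        // the real sample serializes the event to JSON and sends a BrokeredMessage.
    }
}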

Once I have written my appender I just need to make sure that the DLL containing it is in the application bin directory and that my configuration file references the appender, and then I should be good to go.

 

Appender Configuration

Below is an example of the appender configuration you would use for the ServiceBusAppender in your log4net configuration file.

<appender name="AzureServiceBus-Logging-Appender" type="MyLog4net.Azure.ServiceBus.ServiceBusAppender,MyLog4net.Azure.ServiceBus">
  <ConnectionStringKey value="Log4netServiceBusConnection"/>
  <MessagingEntity value="LoggingTopic"/>
  <ApplicationName value="MyLog4net.ConsoleApplication"/>
  <EventType value="LoggingEvent"/>
  <CorrelationIdPropertyName value="CorrelationId"/>
  <Synchronous value="false"/>
  <filter type="log4net.Filter.LevelRangeFilter">
    <levelMin value="INFO" />
    <levelMax value="FATAL" />
  </filter>
</appender>

 

There are a few key properties here:

  • ConnectionStringKey

This property is a string which points to a connection string you have declared in the config file. The connection string you have declared will be a connection string for a Windows Azure Service Bus Namespace

  • MessagingEntity

This is the name of the queue or topic in Windows Azure Service Bus which you would like to publish the message to.

  • ApplicationName

The application name property will be used to indicate on the log message which application published the message.

  • EventType

The event type is used to indicate what type of event it is. An example of where you might use this is if you have multiple loggers using different instances of the appender which are configured differently, then you might use this to indicate the types of events being published. An example might be publishing audit events and standard logging events.

  • CorrelationIdPropertyName

Normally as a process flows across components you would include some context which relates the execution in different places. An example of how you might do this would be to create a unique id when the user first clicks a button and then flow this through a web service call and into another component. From there you would usually use the log4net ThreadContext to make these variables available to each component as it begins executing. In this case the correlation id is a useful property which allows us to relate log messages from different components. In the configuration, the CorrelationIdPropertyName allows you to specify the name of the log4net property that holds the CorrelationId in this component. This correlation id is then used as a specific element on the published log event.

  • Synchronous

The synchronous property allows you to control whether an event is published on a background thread or not. There is a performance gain in publishing on a background thread, but the trade-off is that you can't as easily respond to errors. For normal logging events you're probably happy to swallow the error and continue, but perhaps for audit events you might not want to do this.

 

In addition to the specific configuration properties the rest of the normal log4net stuff pretty much applies. As an example you can use the Appender filtering mechanism to control which messages are processed by the appender.

 

What happens inside the appender?

Inside the appender the code starts by creating an instance of AzureLoggingEvent, a type which encapsulates the data I want to publish, and sets up the data as appropriate. The appender then decides whether to publish synchronously or asynchronously and calls the internal append method. In the internal append method we serialize the AzureLoggingEvent to a JSON string to be the body of the published message. We then set the following context properties on the Azure Service Bus BrokeredMessage:

  • ApplicationName
  • Username
  • MachineName
  • Level
  • EventType

These properties will allow you to do some routing if you have published the message to a topic. Examples of routing uses could include:

  • Creating a higher processing priority for certain messages (perhaps Errors need to be processed quicker)
  • Routing for specific messages to also go to an operator who is looking for activity for a specific user

I'm sure you can think of other examples.

The appender will then create a MessagingFactory, a MessageSender and the other Azure Service Bus objects as required and will send the message. A quick point to note: I'll probably look at including the transient error handling block at some point, but for now I have just included a (very) basic retry mechanism.
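A hedged sketch of that send path is below. The Service Bus types are the real SDK ones, but the serializer choice, the helper shape and the lack of any retry handling are simplifications rather than the sample's actual code.

using System.Configuration;
using Microsoft.ServiceBus.Messaging;
using Newtonsoft.Json;

internal static class ServiceBusSendSketch
{
    public static void Send(object azureLoggingEvent, string connectionStringKey,
                            string entityPath, string applicationName, string level, string eventType)
    {
        // Body of the message is the JSON-serialized logging event
        string json = JsonConvert.SerializeObject(azureLoggingEvent);

        var factory = MessagingFactory.CreateFromConnectionString(
            ConfigurationManager.ConnectionStrings[connectionStringKey].ConnectionString);
        var sender = factory.CreateMessageSender(entityPath);

        using (var message = new BrokeredMessage(json))
        {
            // Context properties which subscriptions can filter and route on
            message.Properties["ApplicationName"] = applicationName;
            message.Properties["Level"] = level;
            message.Properties["EventType"] = eventType;

            sender.Send(message);
        }

        factory.Close();
    }
}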

 

Using the Appender

To illustrate the use of the log4net appender I've included a sample showing how it can be used. In this sample I create three log4net loggers:

  1. ApplicationLogger

The application logger is a basic logger which will just use the console appender and event log appender to record information locally. This is just application diagnostic info.

  2. GlobalLogger

The global logger is intended to be used when the application needs to publish information to the Azure Service Bus for general log events. This logger will process log events asynchronously

  3. AuditLogger

The audit logger will send out audit events using the Azure Service Bus appender but will do it synchronously.

 

The configuration for setting this up looks like the below example:

Step 1 - Loggers

The configuration to setup the loggers is below

<root>
  <level value="ALL"/>
  <appender-ref ref="EventLogAppender"/>
  <appender-ref ref="ColoredConsoleAppender"/>
</root>
<logger name="GlobalLogger" additivity="false">
  <level value="ALL"/>
  <appender-ref ref="EventLogAppender"/>
  <appender-ref ref="AzureServiceBus-Logging-Appender"/>
</logger>
<logger name="AuditLogger" additivity="false">
  <level value="ALL" />
  <appender-ref ref="AzureServiceBus-Audit-Appender"/>
</logger>

There are 3 loggers as described above with the GlobalLogger and AuditLogger being the ones which will send messages to Azure Service Bus.

 

Step 2 - Appenders

The configuration for the appenders is below:

<appender name="AzureServiceBus-Logging-Appender" type="MyLog4net.Azure.ServiceBus.ServiceBusAppender,MyLog4net.Azure.ServiceBus">
  <ConnectionStringKey value="Log4netServiceBusConnection"/>
  <MessagingEntity value="LoggingTopic"/>
  <ApplicationName value="MyLog4net.ConsoleApplication"/>
  <EventType value="LoggingEvent"/>
  <CorrelationIdPropertyName value="CorrelationId"/>
  <Synchronous value="false"/>
  <filter type="log4net.Filter.LevelRangeFilter">
    <levelMin value="INFO" />
    <levelMax value="FATAL" />
  </filter>
</appender>

<appender name="AzureServiceBus-Audit-Appender" type="MyLog4net.Azure.ServiceBus.ServiceBusAppender,MyLog4net.Azure.ServiceBus">
  <ConnectionStringKey value="Log4netServiceBusConnection"/>
  <MessagingEntity value="AuditQueue"/>
  <ApplicationName value="MyLog4net.ConsoleApplication"/>
  <EventType value="AuditEvent"/>
  <Synchronous value="true"/>
  <CorrelationIdPropertyName value="CorrelationId"/>
  <filter type="log4net.Filter.LevelRangeFilter">
    <levelMin value="INFO" />
    <levelMax value="FATAL" />
  </filter>
</appender>

In the appenders you can see how the settings indicate each appender will send to different queues/topics in Azure Service Bus.

 

Step 3 - Initialize Log4net

In the sample code when the console application starts I begin by setting up my loggers:

 

static void Initialize()
{
    log4net.Config.XmlConfigurator.Configure();
    ApplicationLogger = log4net.LogManager.GetLogger(typeof(Program));
    AuditLogger = log4net.LogManager.GetLogger("AuditLogger");
    GlobalLogger = log4net.LogManager.GetLogger("GlobalLogger");
}

Step 4 - Setup my Correlation ID

I then set up my correlation id property using the below example. Note that in this sample it's a really basic implementation, but if you were flowing the correlation id across components then this would be a little more involved.

log4net.ThreadContext.Properties["CorrelationId"] = Guid.NewGuid();

 

Step 5 - Adding the Application Code

After this I will execute some code to log different messages to different loggers in my application.

 

ApplicationLogger.Debug("Program starting");

for (int i = 0; i <= 1000; i++)
{
    ApplicationLogger.Info("This is info");
    GlobalLogger.Info("The application has started");
}

//lots of code.....
ApplicationLogger.Debug("Ive done something");

try
{
    ApplicationLogger.Debug("About to audit");
    AuditLogger.Info("About to do something important");
    ApplicationLogger.Debug("Audit complete");
    //.... doing some work
    throw new ApplicationException("This is an error");
}
catch(Exception ex)
{
    //... log error locally
    ApplicationLogger.Error("There was an error", ex);
    //... send error to central error system
    GlobalLogger.Error("There was an error", ex);
}

Console.ReadLine();

 

This results in a bunch of messages going to log4net. Some of them will be filtered out because the log4net configuration says we aren't interested in them and some will be sent to Windows Azure Service Bus.

Step 6 - Configuring Azure Service Bus

The key thing here is that you can configure it however you like. As long as it matches the MessagingEntity and connection string settings in the log4net configuration it will work fine. For this demo I have created the following queue setup:

 

There is an audit message queue which all events from the audit logger will be sent to. There is also a logging topic for the general logging events. On the logging topic I have created some subscriptions to demonstrate how you could filter messages. The first subscription is an all events subscription using the 1=1 routing rule so that it will get all messages. The Just Errors subscription has a rule of Level='Error' so that it will only get errors. You can create filter rules based on any of the properties we earlier published as context properties on the service bus message.

You can see that at this point it's quite easy to configure flexible routing of events depending on what you want to do with them once they reach Service Bus.
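If you prefer to create this setup in code rather than through a tool, a rough sketch using the NamespaceManager from the Service Bus SDK is below; the connection string and subscription names are placeholders, and the Level value in the filter assumes the uppercase level names log4net publishes.

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

public class DemoEntitySetup
{
    public void CreateEntities()
    {
        var manager = NamespaceManager.CreateFromConnectionString("<service-bus-connection-string>");

        if (!manager.QueueExists("AuditQueue"))
            manager.CreateQueue("AuditQueue");

        if (!manager.TopicExists("LoggingTopic"))
            manager.CreateTopic("LoggingTopic");

        // "All Events" subscription: the 1=1 rule matches every message
        if (!manager.SubscriptionExists("LoggingTopic", "AllEvents"))
            manager.CreateSubscription("LoggingTopic", "AllEvents", new SqlFilter("1=1"));

        // "Just Errors" subscription: only messages whose Level property matches
        // the value published by the appender (the comparison is an exact string match)
        if (!manager.SubscriptionExists("LoggingTopic", "JustErrors"))
            manager.CreateSubscription("LoggingTopic", "JustErrors", new SqlFilter("Level='ERROR'"));
    }
}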

You are now ready to run the sample and publish logging messages.

 

What does the message look like?

After running the demo you will see that the queue or subscription will have a lot of messages on it. You can use Service Bus Explorer to take a look at the details. The below picture shows you an example of one message.

 

You can see here the context properties I described earlier in the Message Custom Properties box; these are what you can filter on. The message text is the JSON string I mentioned earlier and will look something like the below:

{
  "Properties":
  {
    "CorrelationId":"5f4c5936-674b-474a-9b2a-c3d9002cc070",
    "log4net:Identity":"",
    "log4net:UserName":"CSCMAIN\\Michael",
    "log4net:HostName":"CSCMain"
  },
  "CorrelationId":"5f4c5936-674b-474a-9b2a-c3d9002cc070",
  "EventType":"LoggingEvent",
  "MachineName":"CSCMAIN",
  "ExceptionMessage":"This is an error",
  "ThreadName":"10",
  "UTCTimeStamp":"2014-01-02T06:22:59.1615766Z",
  "TimeStamp":"2014-01-02T06:22:59.1615766+00:00",
  "UserName":"CSCMAIN\\Michael",
  "RenderedMessage":"There was an error",
  "Domain":"MyLog4net.vshost.exe",
  "ExceptionString":"System.ApplicationException: This is an error\r\n at MyLog4net.Program.Main(String[] args) in c:\\Users\\Michael\\Documents\\Visual Studio 2013\\Projects\\MyLog4net\\MyLog4net\\Program.cs:line 37",
  "Identity":"",
  "Level":"ERROR",
  "ClassName":"MyLog4net.Program",
  "FileName":"c:\\Users\\Michael\\Documents\\Visual Studio 2013\\Projects\\MyLog4net\\MyLog4net\\Program.cs",
  "FullInfo":"MyLog4net.Program.Main(c:\\Users\\Michael\\Documents\\Visual Studio 2013\\Projects\\MyLog4net\\MyLog4net\\Program.cs:44)",
  "LineNumber":"44",
  "MethodName":"Main",
  "LoggerName":"GlobalLogger",
  "ApplicationName":"MyLog4net.ConsoleApplication"
}

We now have some fairly useful logging information published centrally, along with the ability to process it.

 

What might I do with this now?

Now that we have messages on Azure Service Bus queues the most obvious thing to do with them might be to have a Worker Role or similar which polls the messages and then saves them to a database for operational analysis. I'll probably talk about that in a future post but for now the messages are somewhere safe and somewhere you can do some interesting stuff with them.

 

Download Sample

The sample can be downloaded from the following location:

http://cscblogsamples.blob.core.windows.net/publicblogsamples/MyLog4net.zip

 

Conclusion

As you can see it was really easy to implement this appender. Feel free to get the sample and give it a try or modify it if you choose. The one caveat is that this was only some R&D so I haven't really tried this in a production setup to see how it performs.

 

 

 

Posted On Thursday, January 2, 2014 5:15 AM | Comments (0) |

Wednesday, November 20, 2013

BizTalk Maturity Assessment vs BizTalk Health Check

Recently a few people have asked me about the differences between the BizTalk Health Check and the BizTalk Maturity Assessment so I thought I'd put a few thoughts out here.

 

Firstly, let's start with what the BizTalk Health Check is.

 

Health Check

Most consultancies who have a BizTalk capability offer some flavour of BizTalk Health Check.  In these cases they will bring in a BizTalk consultant who will look at your production environment, evaluate its current state, and identify any deficiencies, required updates or optimizations which may have been missed.  Often the customer is in a painful place when they call someone in for a health check.

 

A health check is often focused on the customer's production environment but sometimes they will also cover test environments.

 

The focus of a health check is usually to answer the following questions:

 

  1. Is my infrastructure set up according to good practices for my scenario?
  2. Is my BizTalk/SQL/Windows setup configured in line with good practices and appropriately for my scenario?
  3. Am I doing the right admin/operator tasks to make sure my environment stays healthy?

 

The main audience of the BizTalk Health Check is the BizTalk Administrator and perhaps the BizTalk Service Owner.

 

In addition to the BizTalk Health Checks which most consultancies offer, Tord Glad Nordhal, one of the most respected BizTalk admin/infrastructure consultants in the community, has also produced a BizTalk Health Check which has been made publicly available. This health check pulls together his years of experience in this space and makes it available to BizTalk people and companies who use BizTalk so that they can access this information themselves. The aim is to help customers set up BizTalk successfully and maintain a healthy BizTalk environment.

 

Tord has made this tool available at the following location:

http://biztalkadmin.com/biztalk/

 

I'm a big fan of Tord's work and love the health check!

 

BizTalk Maturity Assessment

The BizTalk Maturity Assessment takes a wider look at a customer's investment in BizTalk and focuses on the following questions:

 

  1. What are the things I should do to ensure I have a positive experience with BizTalk?
  2. How can I make sure I get a good ROI from BizTalk?
  3. I have invested in BizTalk and it's going wrong; how can I identify the cause of the problems?
  4. Are the different areas of my business (e.g. development, test, production support) displaying the right behaviours to ensure I am successful with BizTalk?

 

There is a small amount of overlap with the health check because the Maturity Assessment has a section which looks at your infrastructure and operations areas, but the health check is a deep dive into the infrastructure and configuration area, whereas the Maturity Assessment takes a higher-level view of these areas.

 

The Maturity Assessment takes a holistic view across your organisation and looks at the behaviours displayed in different areas to see whether they will complement or conflict with making a successful investment in BizTalk.

 

The BizTalk Maturity Assessment has drawn on the experience of a number of well-respected people within the BizTalk community (including Tord) and should help you build the capability to deliver great solutions using BizTalk.

 

Conclusion

In conclusion, the BizTalk Maturity Assessment and the BizTalk Health Check complement each other quite nicely and are both things which your organisation should be using on a regular basis (I'd recommend at least quarterly).

 

Even better is that they are both free!

 

Posted On Wednesday, November 20, 2013 2:06 PM | Comments (0) |

Friday, November 15, 2013

RabbitMQ for .net Developers Part 1

I've just completed a training course with Pluralsight on RabbitMQ for .net developers.

Part 1 of this course will be a gentle introduction to RabbitMQ from the perspective of a .net developer.  It will help you get up and running and talk about some of the message exchange patterns you're likely to use.

Check it out on the following link

Part 2 will be coming soon, where we will look at some of the real-world topics and considerations for using RabbitMQ.

Posted On Friday, November 15, 2013 3:40 PM | Comments (0) |

Thursday, November 14, 2013

Automated Build & Azure Service Bus

Recently, with a little advice from Paolo, who created the Windows Azure Service Bus Explorer tool, I've created a few tasks which help us do a little automation around the use of Windows Azure Service Bus.

Being a BizTalk guy at heart I have been thinking about the best ways to manage the configuration of my service bus namespace and in particular when my development team is working on components which interact with Service Bus.

In general we want each developer to be able to work in isolation so that when running tests they don't interfere with other developers, so each dev has their own Service Bus namespace.  From there I wanted an experience similar to a BizTalk binding file.

We use the Service Bus Explorer to manage the namespace, but once changes are made we export them to an XML file which we keep in TFS.  This file is then used like a binding file.  When the build runs on our component we will create a new namespace and import the XML to set the namespace up with the right queues, topics and so on for any test cases we wish to run.  We also use the Shared Access Secret API to set the security the same for each developer so we don't need to keep juggling config settings around in the development environment.

We have created a CodePlex project to share this with everyone.

In the project there is an assembly which encapsulates the use of the Azure Management API, the Service Bus Management API and some of the helper classes which come with Service Bus Explorer, and provides a simple interface to these features which can easily be called from PowerShell; alternatively you can use the MSBuild tasks which come with the project.

On the documentation page on CodePlex I've got examples of how this works, and I hope everyone finds this useful.

Posted On Thursday, November 14, 2013 7:33 AM | Comments (0) |

Friday, October 25, 2013

Good News

A while back I asked people to vote for Charlie in the Mountain Warehouse charity competition. He won it and will get £5000 towards helping him learn to walk.

http://votecharlie.co.uk/

Cheers everyone

Posted On Friday, October 25, 2013 1:30 PM | Comments (0) |

Tuesday, October 15, 2013

Bridging Subsidiaries With the Cloud to Create a Global API

I've just had an article published on InfoQ about bridging companies to create a consolidated API and some of the considerations you might have to make.

Thanks to Brian Milburn for reviewing this!

Posted On Tuesday, October 15, 2013 9:04 PM | Comments (0) |
