Charles Young


Friday, December 6, 2013 #

We are now SolidSoft Reply. This morning, the company was acquired by Reply S.p.A. This is great news for us. We will continue to build the business under the SolidSoft name, brand and culture, but as part of a much larger team. Further information at http://www.reply.it/en/investors/financialnews/readd/%2c15230

Monday, April 29, 2013 #

Microsoft does not currently offer RHEL on subscription on the Windows Azure platform, and people have reported problems when trying to create and run their RHEL VMs.  So, does RHEL run on Azure?  Read on here:

http://solidsoft.azurewebsites.net/articles/posts/2013/does-red-hat-enterprise-linux-run-on-azure.aspx


Wednesday, January 30, 2013 #

I can't say I follow things that closely in the Windows Phone world, but I am aware of the upgrade to Windows Phone 7.8.  I've been looking forward to this for a while.  The improvements in the UI look nice, and when I get it, I can try to kid myself that my company phone, a Nokia Lumia 800, is really an 820.


It appears that the roll-out of 7.8 started today in the US for Nokia Lumia 900 users.  It can take a while for upgrades to make it to all the eligible phones.  So, imagine my delight when, this evening, my phone informed me an update was waiting for me!  Yeah!  I eagerly started the upgrade process and excitedly informed my bemused family that I was about to get Windows Phone 7.8.

Er...no.  After a successful upgrade, the phone re-booted...into Windows Phone 7.5.

I did a little digging.  It appears that the last upgrade, code-named Tango, has just arrived on my phone.  Tango was released on 20th July last year.  That's just over six months before I got the upgrade.

Oh dear me.

I'll report back on Windows Phone 7.8 in late summer...if I'm fortunate enough to get it by then :-(

Update
 
Apologies to Nokia who I stupidly railed at in an earlier version of this post.   Of course, they simply manufacture the handsets.  In my case, the carrier is Vodafone and they are the company responsible for pushing updates to my phone.    It seems that back in September Vodafone decided to cancel the global roll-out of Tango updates to some users due to a WiFi concern.  Although the press only reported this as affecting a single HTC model, maybe this is connected with my experience.
 
Update 2 (Friday)
 
A colleague has been busy forcing upgrades on his Nokia Lumia 800 (there is a little trick you can use, apparently, that involves switching off your PC WiFi connection at just the right moment while using Zune, and then re-connecting).  He forced an upgrade to Tango.  Now, he reports that he has received three further updates, the third of which appears to be Windows Phone 7.8 (he is installing it as I write).  So, best guess is that Tango is being rolled out as a precursor to the 7.8 update.  I'll report back on this later.
 
Update 3

After many weeks of non-information and constant complaints on their forum, Vodafone did eventually roll out Windows Phone 7.8.  This was, in fact, a patched version of 7.8.  While I have no problems with Vodafone withdrawing the roll-out of 7.8 in order to fix a bug, I do have issues with the inordinate length of time it took them to issue the patched version and, more importantly, the total lack of information provided by the company to their customers.


Tuesday, January 22, 2013 #

The C# compiler is a pretty good thing, but it has limitations. One limitation that has given me a headache this evening is its inability to guard against cycles in structs.  As I learn to think and program in a more functional style, I find that I am beginning to rely more and more on structs to pass data.  This is natural when programming in the functional style, but structs can be damned awkward blighters.

Here is a classic gotcha.  The following code won't compile, and in this case, the compiler does its job and tells you why with a nice CS0523 error:

    struct Struct1
    {
        public Struct2 AStruct2;
    }

    struct Struct2
    {
        public Struct1 AStruct1;
    }

Structs are value types and are automatically instantiated and initialized as stack objects.  If this code were compiled and run, Struct1 would be initialised with a Struct2 which would be initialised with a Struct1 which would be initialised with a Struct2, etc., etc.  We would blow the stack.

Well, actually, if the compiler didn't capture this error, we wouldn't get a stack overflow because at runtime the type loader would spot the problem and refuse to load the types.  I know this because the compiler does a really rather poor job of spotting cycles.

Consider the following.  You can use auto-properties, in which case the compiler generates backing fields in the background.  This does nothing to eliminate the problem.  However, it does hide the cycle from the compiler.  The following code will therefore compile!

    struct Struct1
    {
        public Struct2 AStruct2 { get; set; }
    }

    struct Struct2
    {
        public Struct1 AStruct1 { get; set; }
    }

At run-time it will blow up in your face with a 'Could not load type <T> from assembly' (80131522) error.  Very unpleasant.
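
For illustration, here is a minimal sketch (my own, not from the original code) showing where the failure surfaces at run-time.  The type load is deferred until the JIT compiles the first method that touches the struct, so the usage is isolated in a non-inlined helper to make the exception catchable:

    using System;
    using System.Runtime.CompilerServices;

    static class CycleDemo
    {
        static void Main()
        {
            try
            {
                UseStruct1();
            }
            catch (TypeLoadException ex)
            {
                // Typically: "Could not load type 'Struct1' from assembly '...'"
                // with HRESULT 0x80131522 (COR_E_TYPELOAD).
                Console.WriteLine(ex.Message);
            }
        }

        // The cyclic struct is referenced only here, so it does not need to
        // load until the JIT compiles this method.  NoInlining ensures Main
        // itself compiles cleanly and the catch block is reachable.
        [MethodImpl(MethodImplOptions.NoInlining)]
        static void UseStruct1()
        {
            var s1 = default(Struct1);
            Console.WriteLine(s1);
        }
    }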

ReSharper helps a little.  It can spot the issue with the auto-property code and highlight it, but the code still compiles.  However, ReSharper quickly runs out of steam, as well.   Here is a daft attempt to avoid the cycle using a nullable type:

    struct Struct1
    {
        public Struct2? Struct2 { get; set; }
    }

    struct Struct2
    {
        public Struct1 Struct1 { get; set; }
    }

Of course, this won't work (duh - so why did I try?).  System.Nullable<T> is, itself, a struct, so it does not solve the problem at all.  We have simply wrapped one struct in another.  However, the C# compiler can't see the problem, and neither can ReSharper.  The code will compile just fine.  At run-time it will again fail.
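
The general rule is that only inline value-type fields create layout cycles.  Any reference type breaks the chain, because the struct then stores a fixed-size reference rather than a nested copy.  As a minimal sketch (the Struct2Holder class is purely illustrative), the following compiles and loads without complaint:

    // The class adds a level of reference indirection, so Struct1's layout
    // no longer contains Struct2 inline and the cycle is broken.
    class Struct2Holder
    {
        public Struct2 Value;
    }

    struct Struct1
    {
        public Struct2Holder AStruct2 { get; set; }
    }

    struct Struct2
    {
        public Struct1 AStruct1 { get; set; }
    }

This is also why the delegate trick in the update below works: Func<T> is a reference type.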

If you define generic members on your structs, things can easily go awry.  I have a complex example of this, but it would take a lot of explaining as to why I wrote the code the way I did (believe me, I had reason to), so I'll leave it there.

By and large, I get on well with the C# compiler.  However, this is one area where there is clear room for improvement.

Update

Here's one way to solve the problem, using a manually-implemented property backed by a delegate.  It works because the compiler-generated closure that captures struct2 is a class, so Struct1 ends up holding only a delegate reference rather than an inline copy of Struct2:

    struct Struct1
    {
        private readonly Func<Struct2> aStruct2Func;

        public Struct1(Struct2 struct2)
        {
            this.aStruct2Func = () => struct2;
        }

        // Let's make this struct immutable!  It's good practice to do so
        // with structs, especially when writing code in the functional style.
        // NB., the private backing field is declared readonly, and we need a
        // constructor to initialize the struct field.  There are more optimal
        // approaches we could use, but this will perform OK in most cases,
        // and is quite elegant.
        public Struct2 AStruct2
        {
            get
            {
                return this.aStruct2Func();
            }
        }
    }

    struct Struct2
    {
        public Struct1 AStruct1 { get; set; }
    }


Tuesday, November 13, 2012 #

Forget about Steven Sinofsky's unexpected departure from Microsoft.   The real news from Redmond is that, after approximately 72 years of utter stagnation, the latest version of Visio has been upgraded to support UML 2.x!   It gets better.  It looks like it actually supports the latest version of UML (2.4.1).

Unbelievable!


Sunday, July 8, 2012 #

At long last I’ve started using Windows 8.  I boot from a VHD on which I have installed Office, Visio, Visual Studio, SQL Server, etc.  For a week, now, I’ve been happily writing code and documents and using Visio and PowerPoint.  I am, very much, a ‘productivity’ user rather than a content consumer.   I spend my days flitting between countless windows and browser tabs displayed across dual monitors.  I need to access a lot of different functionality and information in as fluid a fashion as possible.

With that in mind, and like so many others, I was worried about Windows 8.  The Metro interface is primarily about content consumption on touch-enabled screens, and not really geared for people like me sitting in front of an 8-core non-touch laptop and an additional Samsung monitor.  I still use a mouse, not my finger.  And I create more than I consume.

Clearly, Windows 8 won’t be viable for people like me unless Metro keeps out of my hair when using productivity and development tools.  With this in mind, I had long expected Microsoft to provide some mechanism for switching Metro off.  There was a registry hack in last year’s Developer Preview, but this capability has been removed.   That’s brave.  So, how have things worked out so far?

Well, I am really quite surprised.  When I played with the Developer Preview last year, it was clear that Metro was unfinished and didn’t play well enough with the desktop.  Obviously I expected things to improve, but the context switching from desktop to full-screen seemed a heavy burden to place on users.  That sense of abrupt change hasn’t entirely gone away (how could it), but after a few days, I can’t say that I find it burdensome or irritating.   I’ve got used very quickly to ‘gesturing’ with my mouse at the bottom or top right corners of the screen to move between applications, using the Windows key to toggle the Start screen and generally finding my way around.   I am surprised at how effective the Start screen is, given the rather basic grouping features it provides.  Of course, I had to take control of it and sort things the way I want.  If anything, though, the Start screen provides a better navigation and application launcher tool than the old Start menu.

What I didn’t expect was the way that Metro enhances the productivity story.  As I write this, I’ve got my desktop open with a maximised Word window.  However, the desktop extends only across about 85% of the width of my screen.  On the left hand side, I have a column that displays the new Metro email client.  This is currently showing me a list of emails for my main work account.  I can flip easily between different accounts and read my email within that same column.  As I work on documents, I want to be able to monitor my inbox with a quick glance.

[Screenshot: Windows 8 for productivity]

The desktop, of course, has its own snap feature.  I could run the desktop full screen and bring up Outlook and Word side by side.  However, this doesn’t begin to approach the convenience of snapping the Metro email client.  Consider that when I snap a window on the desktop, it initially takes up 50% of the screen.  Outlook doesn’t really know anything about snap, and doesn’t adjust to make effective use of the limited screen real estate.  Even at 50% screen width, it is difficult to use, so forget about trying to use it in a Metro fashion. In any case, I am left with the prospect of having to manually adjust everything to view my email effectively alongside Word.  Worse, there is nothing stopping another window from overlapping and obscuring my email.  It becomes a struggle to keep sight of email as it arrives.  Of course, there is always ‘toast’ to notify me when things arrive, but if Outlook is obscured, this just feels intrusive.

The beauty of the Metro snap feature is that my email reader now exists outside of my desktop.   The Metro app has been crafted to work well in the fixed width column as well as in full-screen.  It cannot be obscured by overlapping windows.  I still get notifications if I wish.  More importantly, it is clear that careful attention has been given to how things work when moving between applications when ‘snapped’.  If I decide, say to flick over to the Metro newsreader to catch up with current affairs, my desktop, rather than my email client, obligingly makes way for the reader.  With a simple gesture and click, or alternatively by pressing Windows-Tab, my desktop reappears.

Another pleasant surprise is the way Windows 8 handles dual monitors.  It’s not just the fact that both screens now display the desktop task bar.  It’s that I can so easily move between Metro and the desktop on either screen.  I can only have Metro on one screen at a time, which makes perfect sense given the ‘full-screen’ nature of Metro apps.  Using dual monitors feels smoother and easier than in previous versions of Windows.

Overall then, I’m enjoying the Windows 8 improvements.  Strangely, for all the hype (“Windows reimagined”, etc.), my perception as a ‘productivity’ user is more one of evolution than revolution.  It all feels very familiar, but just better.


Saturday, June 23, 2012 #

The term ‘cloud’ can sometimes obscure the obvious.  Today’s Microsoft Cloud Day conference in London provided a good example.  Scott Guthrie was halfway through what was an excellent keynote when he lost network connectivity.  This proved very disruptive to his presentation which centred on a series of demonstrations of the Azure platform in action.  Great efforts were made to find a solution, but no quick fix presented itself.  The venue’s IT facilities were dreadful – no WiFi, poor 3G reception (forget 4G…this is the UK) and, unbelievably, no-one on hand from the venue staff to help with infrastructure issues.  Eventually, after an unscheduled break, a solution was found and Scott managed to complete his demonstrations.  Further connectivity issues occurred during the day.

I can say that the cause was prosaic.  A member of the venue staff had interfered with a patch board and inadvertently disconnected Scott Guthrie’s machine from the network by pulling out a cable.

I need to state the obvious here.  If your PC is disconnected from the network it can’t communicate with other systems.  This could include a machine under someone’s desk, a mail server located down the hall, a server in the local data centre, an Internet search engine or even, heaven forbid, a role running on Azure.

Inadvertently disconnecting a PC from the network does not imply a fundamental problem with the cloud or any specific cloud platform.  Some of the tweeted comments I’ve seen today are analogous to suggesting that, if you accidentally unplug your microwave from the mains, this suggests some fundamental flaw with the electricity supply to your house.   This is poor reasoning, to say the least.

As far as the conference was concerned, the connectivity issue in the keynote, coupled with some later problems in a couple of presentations, served to exaggerate the perception of poor organisation.   Software problems encountered before the conference prevented the correct set-up of a smartphone app intended to convey agenda information to attendees.  Although some information was available via this app, the organisers decided to print out an agenda at the last moment.  Unfortunately, the agenda sheet did not convey enough information, and attendees were forced to approach conference staff through the day to clarify locations of the various presentations.

Despite these problems, the overwhelming feedback from conference attendees was very positive.  There was a real sense of excitement in the morning keynote.  For many, this was their first sight of new Azure features delivered in the ‘spring’ release.  The most common reaction I heard was amazement and appreciation that Azure’s new IaaS features deliver built-in template support for several flavours of Linux from day one.  This coupled with open source SDKs and several presentations on Azure’s support for Java, node.js, PHP, MongoDB and Hadoop served to communicate that the Azure platform is maturing quickly.  The new virtual network capabilities also surprised many attendees, and the much improved portal experience went down very well.

So, despite some very irritating and disruptive problems, the event served its purpose well, communicating the breadth and depth of the newly upgraded Azure platform.  I enjoyed the day very much.

 


Wednesday, March 28, 2012 #

For the last decade, I have repeatedly, in my inimitable Microsoft fan boy style, offered an alternative view to commonly held beliefs about Microsoft's stance on open source licensing.  In earlier times, leading figures in Microsoft were very vocal in resisting the idea that commercial licensing is outmoded or morally reprehensible.  Many people interpreted this as all-out corporate opposition to open source licensing.  I never read it that way. It is true that I've met individual employees of Microsoft who are antagonistic towards FOSS (free and open source software), but I've met more who are supportive or at least neutral on the subject.  In any case, individual attitudes of employees don't necessarily reflect a corporate stance.  The strongest opposition I've encountered has actually come from outside the company.  It's not a charitable thought, but I sometimes wonder if there are people in the .NET community who are opposed to FOSS simply because they believe, erroneously, that Microsoft is opposed.

Here, for what it is worth, are the points I've repeated endlessly over the years and which have often been received with quizzical scepticism.

a)  A decade ago, Microsoft's big problem was not FOSS per se, or even with copyleft.  The thing which really kept them awake at night was the fear that one day, someone might find, deep in the heart of the Windows code base, some code that should not be there and which was published under GPL.  The likelihood of this ever happening has long since faded away, but there was a time when MS was running scared.  I suspect this is why they held out for a while from making Windows source code open to inspection.  Nowadays, as an MVP, I am positively encouraged to ask to see Windows source.

b)  Microsoft has never opposed the open source community.  They have had problems with specific people and organisations in the FOSS community.  Back in the 1990s, Richard Stallman gave time and energy to a successful campaign to launch antitrust proceedings against Microsoft.  In more recent times, the negative attitude of certain people to Microsoft's submission of two FOSS licences to the OSI (both of which have long since been accepted), and the mad scramble to try to find any argument, however tenuous, to block their submission was not, let us say, edifying.

c) Microsoft has never, to my knowledge, written off the FOSS model.  They certainly don't agree that more traditional forms of licensing are inappropriate or immoral, and they've always been prepared to say so. 

One reason why it was so hard to convince people that Microsoft is not rabidly antagonistic towards FOSS licensing is that so many people think they have no involvement in open source.  A decade ago, there was virtually no evidence of any such involvement.  However, that was a long time ago.  Quietly over the years, Microsoft has got on with the job of working out how to make use of FOSS licensing and how to support the FOSS community.  For example, as well as making increasingly extensive use of GitHub, they run an important FOSS forge (CodePlex) on which they, themselves, host many hundreds of distinct projects.  The total count may even be in the thousands now.  I suspect there is a limit of about 500 records on CodePlex searches because, for the past few years, whenever I search for Microsoft-specific projects on CodePlex, I always get approx. 500 hits.  Admittedly, a large volume of the stuff they publish under FOSS licences amounts to code samples, but many of those 'samples' have grown into useful and fully featured frameworks, libraries and tools.

All this is leading up to the observation that yesterday's announcement by Scott Guthrie marks a significant milestone and should not go unnoticed.  If you missed it, let me summarise.   From the first release of .NET, Microsoft has offered a web development framework called ASP.NET.  The core libraries are included in the .NET framework which is released free of charge, but which is not open source.   However, in recent years, the number of libraries that constitute ASP.NET has grown considerably.  Today, most professional ASP.NET web development exploits the ASP.NET MVC framework.  This, together with several other important parts of the ASP.NET technology stack, is released on CodePlex under the Apache 2.0 licence.   Hence, today, a huge swathe of web development on the .NET/Azure platform relies four-square on the use of FOSS frameworks and libraries.

Yesterday, Scott Guthrie announced the next stage of ASP.NET's journey towards FOSS nirvana.  This involves extending ASP.NET's FOSS stack to include Web API and the MVC Razor view engine which is rapidly becoming the de facto 'standard' for building web pages in ASP.NET.  However, perhaps the more important announcement is that the ASP.NET team will now accept and review contributions from the community.  Scott points out that this model is already in place elsewhere in Microsoft, and specifically draws attention to development of the Windows Azure SDKs.  These SDKs are central to Azure development.   The .NET and Java SDKs are published under Apache 2.0 on GitHub and Microsoft is open to community contributions.  Accepting contributions is a more profound move than simply releasing code under FOSS licensing.  It means that Microsoft is wholeheartedly moving towards a full-blooded open source approach for future evolution of some of their central and most widely used .NET and Azure frameworks and libraries.  In conjunction with Scott's announcement, Microsoft has also released Git support for CodePlex (at long last!) and, perhaps more importantly, announced significant new investment in their own FOSS forge.

Here at SolidSoft we have several reasons to be very interested in Scott's announcement. I'll draw attention to one of them.  Earlier this year we wrote the initial version of a new UK Government web application called CloudStore.  CloudStore provides a way for local and central government to discover and purchase applications and services. We wrote the web site using ASP.NET MVC which is FOSS.  However, this point has been lost on the ladies and gentlemen of the press and, I suspect, on some of the decision makers on the government side.  They announced a few weeks ago that future versions of CloudStore will move to a FOSS framework, clearly oblivious of the fact that it is already built on a FOSS framework.  We are, it is fair to say, mildly irked by the uninformed and badly out-of-date assumption that “if it is Microsoft, it can't be FOSS”.  Old prejudices live on.


Thursday, February 23, 2012 #

While coding a very simple orchestration in BizTalk Server 2010, I ran into the dreaded "cannot implicitly convert type 'System.Xml.XmlDocument' to '<message type>'" issue. I've seen this happen a few times over the years, and it has often mystified me.

My orchestration defines a message using a schema type. In a Message Assignment shape, I create the message as an XML Document and then assign the document to the message. I initially wrote the code to populate the XML Document with some dummy XML. At that stage, the orchestration compiled OK. Then I changed the code to populate the XML Document with the correct XML and...bang. I could no longer cast the XML Document to the message type.
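
To make that concrete, the assignment looked roughly like this (a sketch only; the namespace, element and variable names are hypothetical stand-ins, not those of the real project):

    // XLANG/s expression in the Message Assignment shape.
    // xmlDoc is an orchestration variable of type System.Xml.XmlDocument;
    // ResponseMessage is a message declared against the schema type.
    xmlDoc = new System.Xml.XmlDocument();
    xmlDoc.LoadXml(
        "<ns0:Response xmlns:ns0=\"http://example.org/response\">" +
        "<Result>OK</Result>" +
        "</ns0:Response>");
    ResponseMessage = xmlDoc;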

I spent some time checking this through. I reverted to the original code (with the dummy content), but the problem persisted. I restarted Visual Studio (several times), deleted the existing ‘bin’ and ‘obj’ folders and re-built, and tried anything else I could think of. No change.

It then occurred to me to think a little more carefully about exactly what I was doing at the point the code broke. My response message is very simple, and to create the XML content, I am therefore concatenating strings. To ensure I got the right XML, I used BizTalk to generate an example of the XML from the schema. The schema contains two root elements for the request and response messages. To generate the XML, I temporarily changed the 'Root Reference' property of the schema from 'default' to the element that represents the response message...

...and forgot to change the property back :-(

So, I changed the property back to 'default' and...

...success!

I experimented further and ascertained that if the 'Root Reference' property is set to anything other than 'default', the assignment code in my orchestration breaks. This is totally repeatable on the machine I am using. I spent some time looking at the code that BizTalk generates for schemas. When 'Root Reference' is set to 'default', BizTalk generates separate schema classes for each candidate root element, as well as a class for all root nodes. When set to a specific element, BizTalk outputs only a single class. Apart from that, I couldn't see anything suspicious.

I can't find anything on the Internet about this, so would be interested if anyone else sees identical behaviour. The lesson, here, of course, is to avoid using schemas with multiple root elements. I have now refactored my schema into two new schemas.


Friday, December 16, 2011 #

It's always exciting when a new application you've worked on goes live. The last couple of weeks have seen the 'soft' launch of a new service offered by the UK government called 'Tell Us Once' (TUO). You can probably guess from the name what the service does. Currently, the service allows UK citizens to inform the government (as opposed to Register Officers, who must still be notified separately) just once of two types of 'change-of-circumstance' event, namely births and deaths. You can go, say, to your local authority contact centre, where an officer will work through a set of screens with you, collecting the information you wish to provide. Then, once the Submit button is clicked, that's it! With your consent, the correct data sets are parcelled up and distributed to wherever they need to go - central and local government departments, public sector agencies such as the DVLA, Identity and Passport Service, etc. No need to write 50 letters!

With my colleagues at SolidSoft, I'm really proud to have been involved with the team that designed and developed this new service. For the past few years we worked on the prototypes and pilots (there was more than one!). Over the last eighteen months or so, we have been engaged in building the national system, and development work is on-going. It's been a journey! The idea is very simple, but as you can imagine, the realisation of that idea is rather more complex. Look for future enhancements to today's service, with the ability to report events on-line from the comfort of your own home and the possible extension of the system to cover additional event types in future.

Interaction with government has just got a whole lot better for UK citizens, and we helped make that happen. It's a pity that I don't intend to have any more children (four is enough!), and I really hope I don't have to report a death in the near future, but if I do, I'll be beating a path to the door of my local council's contact centre in order to 'tell them once'.

See http://www.guardian.co.uk/government-computing-network/2011/dec/16/tell-us-once-matt-briggs?utm_source=twitterfeed&utm_medium=twitter

http://www.guardian.co.uk/public-leaders-network/2011/nov/10/tell-us-once-birth-death


Friday, December 9, 2011 #

Yesterday, Microsoft announced the forthcoming release of BizTalk Server 2010 R2 on the BizTalk Server blog site.  This is advanced notice, given that this new version will ship six months after the release of Windows 8, expected in the second half of next year.  On this basis, we can expect the new version of BizTalk Server to arrive in 2013.  Given the BizTalk team’s previous record of name changes, I wonder if this will eventually be released as BizTalk Server 2013.

Microsoft has been refreshingly open in recent months about their future plans for BizTalk Server.  This strategy has not been without its dangers with some commentators refusing to accept Microsoft’s statements at face value.  However, yesterday’s announcement is entirely in line with everything Microsoft has been saying, both publicly and privately, for some time now.  Since the release of BizTalk Server 2004, Microsoft has made little change to the core technology with, of course, the exception of a much re-vamped application packaging approach in BizTalk Server 2006.  Instead, Microsoft chose to put investment into a number of important ‘satellite’ technologies such as EDIFACT/X12/AS2 support, RFID Server, etc.  Maintaining the stability of the core platform has allowed BizTalk Server to emerge as a mature and trusted workhorse in the enterprise integration space with widely available skills in the marketplace.

In terms of its major investments, Microsoft’s focus has long shifted to the cloud.  Microsoft has candidly communicated that, given this focus, they have no current plans to add major new technologies to the BizTalk platform.  In addition, they absolutely have no intention of re-engineering the core BizTalk platform.  In my direct experience in recent months, this last point plays very well to prospective and existing enterprise customers.  It takes us straight to the heart of what most organisations want from an integration server: a ‘known quantity’ with a good track record for dependability, scalability and stability and a significant pool of available technical resource.

The announcement of BizTalk Server 2010 R2 illustrates and illuminates Microsoft’s stated future strategy for the product.  An important part of Microsoft’s platform for enterprise computing, it will continue to be enhanced and extended.  It will match future developments in the Windows platform and new versions of Visual Studio.  However, we should not expect to see any dramatic new developments in the world of BizTalk Server.  Instead, the BizTalk platform will continue to steadily mature further as the world’s best-selling integration server.

One of the big messages of yesterday’s announcement is that BizTalk Server will increasingly support its emerging role in building hybrid solutions that encompass systems and services that reside both on-premises and in the cloud.  At SolidSoft, we are increasingly focused on the design and implementation of cloud-based and hybrid integration solutions.  Integration is challenging, and Azure is a young, fast evolving platform.  Microsoft has discussed at length their vision of Azure within a wider ‘hybrid’ context.  The availability of a tried and tested, mature, on-premises integration server is a vitally important enabler in building hybrid solutions.  Better than that, the announcement makes it clear that, as well as new support for the Azure service bus, BizTalk Server 2010 R2 licensing will be revised to open up new opportunities for hosting the server in the cloud.  This ties in with the push in Azure to embrace more fully the IaaS (infrastructure-as-a-service) model and, perhaps most importantly in the BizTalk space, to reduce or eliminate existing barriers between the on-premises and off-premises worlds.   BizTalk Server and Azure belong together.


Sunday, September 25, 2011 #

At last, I can announce that ‘BizTalk Server 2010 Unleashed’ has been published and is available through major booksellers in both printed and electronic form. The book is not a new edition of the old ‘BizTalk Server 2004 Unleashed’ book from several years ago, although Brian Loesgen, our fearless team leader, provided continuity with that title. Instead, this is entirely new content written by a team of six authors, including myself.
 
BizTalk Server is such a huge subject. It proved a challenge to decide on the content when we started our collaboration a couple of years back (yes, it really was that long ago!). We quickly decided that the book would principally target the BizTalk development community and that it would provide a solid and comprehensive introduction to the chief artefacts of BizTalk Server 2010 solutions – schemas, maps, orchestrations, pipelines and adapters. Much of this content was written by Jan Eliasen and forms part 1 (“The Basics”) of the book.
 
On the day my complimentary copies were delivered, I was working on the implementation of a pipeline component, and had an issue to do with exposing developer-friendly info in Visual Studio. I used this as a test-run of Jan’s content, and sure enough, discovered that he had clearly addressed the issue I had, including sample code. Jan’s contribution is succinct and to the point, but is also very comprehensive (he’s even documented things like creating custom pipeline templates!). I particularly appreciate the way he included plenty of guidance on testing individual artefacts.
 
My contribution to part 1 is a chapter on adapters (the ‘adapter chapter’, as we fondly called it). This explores each of the ‘native’ adapters and the family of WCF adapters. There is also some content on the new SQL adapter which is part of the BizTalk Adapter Pack. In that respect, and also in its coverage of the SharePoint adapter, it overlaps with ‘Microsoft BizTalk 2010 Line of Business Systems Integration’, which I reviewed recently. However, ‘Microsoft BizTalk 2010 Line of Business Systems Integration’ provides a whole lot more information on a range of LoB adapters. It is written in a different style to BizTalk Server 2010 Unleashed and is highly complementary.
 
Although the original plan was to include content on custom adapter creation, this didn’t, in the end, get covered in any depth. One reason for this is that, going forward, most custom adapter development for both BizTalk and Azure Integration Services (still some way off) is likely to be done using the WCF LoB Adapter SDK. That suggested that we would have had to document two distinct adapter frameworks in order to do the job properly, and this proved a little too much to tackle. Room there for another book, methinks.
 
Part 1 accounts for about half the content of the book. Beyond this, we wanted to add value by covering more advanced topics, including the use of BizTalk Server alongside WCF and the emerging Azure platform, new features in BizTalk Server 2010 and topics that have been only partially covered elsewhere. So, for example, Anush Kumar contributed an entire section (part 4) on RFID, including the new RFID Mobile Framework. Anush is well-known in the BizTalk community due to his involvement in the development of RFID Server. Between Jon Flanders and Brian Loesgen, the book includes content on exploiting WCF extensibility in BizTalk, integrating via the Azure service bus (please note that this content was written before the advent of topics/subscriptions or Integration Services), the BAM framework and the ESB toolkit.
 
There is also a whole section (part 3) written by Scott Colestock that introduces the Administration Console and describes deployment approaches for BizTalk solutions.
 
Rules
That leaves one more subject for which I was responsible. One of the main reasons I was asked to contribute to the book was to document rules processing. Although there is some great content out there on the use of the BRE, I have long felt there is a need for a more comprehensive introduction. Due to some early confusion, I originally intended a total of seven short chapters on rules, but this content was refactored into two longer chapters. The first chapter introduces the Business Rules Framework. My idea was to emphasise the entire framework up front, rather than simply explore the rules composer and other tools. I also tried to explain the typical ‘feel’ of rules processing in the context of a BizTalk application, and the relationship between executable rules and higher-level business rules.
 
The second chapter investigates rule-based programming. It attempts broadly to achieve two related goals. The first is to explain rules programming to developers, to demystify the model, explain the techniques and provide insight into how to handle a number of common issues and pitfalls that rules developers face. The second is to provide a solid theoretical introduction to rules processing, including concepts that are not generally familiar to the average developer. I resisted the temptation, though, to provide an in-depth explanation of how the Rete Algorithm works, which I’m sure will be a relief :-) You can read the Wikipedia article on that.
 
Conclusions
So there you have it. BizTalk Server 2010 is a mature enterprise-level product which, although it has a long future ahead of it, won’t change fundamentally over time. Microsoft has publicly stated that their future major investments in EAI/EDI will be made in the Azure space, although new versions of BizTalk Server will continue to benefit from general improvement and greater integration with the evolving Azure platform. So, hopefully, our content will serve for some time as a useful introduction to BizTalk Server, chiefly from a developer’s perspective.

Monday, September 19, 2011 #

One benefit of my recent experience on a BA flight was that I got plenty of time to read through “Microsoft BizTalk 2010 Line of Business Systems Integration”. I’d promised the publisher weeks ago that I would take a look and publish some comments, but August has been such a busy month for me, and they have had to be patient.   I should point out, for the sake of transparency, that with another BizTalk book about to be released next week, which I helped co-author, I have an urgent and obvious need to make good on this promise before I start to blog on other stuff.
 
BTS10LoBI is a really welcome addition to the corpus of BizTalk Server books and fills a conspicuous gap in the market.  BizTalk Server offers a wide-ranging library of adapters.  The ‘native’ (built-in) adapters understandably get a lot of attention, as do the WCF adapters, but other adapters, such as the LoB adapters and HIS adapters, are often overlooked.  I came to the book with the mistaken assumption that its chief focus was on the BizTalk Adapter Pack.  This is a pack of adapters built with the WCF-based LoB SDK.  In fact, the book follows a much broader path.  It is a book about LoB integration in a general sense, and not about one specific suite of adapters.  Indeed, it is not simply about adapters.  It focuses on integration with various LoB systems, and explains how adapters and other tools are used to achieve this.

This makes for a more interesting read.  For example, one, possibly unintended, consequence (given that it represents collaboration between five different authors) is that it illustrates very effectively the spectrum of approaches and techniques that end up being employed in real-world integration.  In some cases developers use adapters that offer focused support for metadata harvesting and other features, exploited through tools such as the ‘Consume Adapter Service’ UI.  In other cases, they use native adapters with hand-crafted schemas, or they create façade services.  The book covers additional scenarios where third-party LoB tools and cloud services (specifically SalesForce) are used in conjunction with BizTalk Server.  Coupled with lots of practical examples, the book serves to provide insight into the ‘feel’ of real-world integration which is so often a messy and multi-faceted experience.

The book does not cover the BizTalk Adapter Pack comprehensively.  There is no chapter on the Oracle adapters (not a significant issue because they are very similar to the SQL Server adapter) or the Siebel adapter.  On the other hand, it provides two chapters on the SAP adapter looking at both IDOC and RFC/BAPI approaches.  I particularly welcome the inclusion of chapters on integration with both Dynamics CRM 2011 and Dynamics AX 2009.  I learned a lot about Dynamics CRM which I haven’t had occasion personally to integrate with in its latest version.  The chapter on SalesForce mentions, but does not describe in any detail, the TwoConnect SalesForce adapter which we have used very effectively on previous projects.  Rather, it concentrates on direct HTTP/SOAP interaction with SalesForce.com and, very usefully, advocates the use of Azure AppFabric for secure exchange of data across the internet. 

The book provides two chapters on integration with SharePoint 2010.  The first explores the use of the native adapter to communicate with form and document libraries, and provides illustrated examples of working with InfoPath forms.  It would have been reasonable to stop there, but instead, the second chapter goes on to describe how to integrate more fully with SharePoint via its web service interface, and specifically how to interact with SharePoint lists.
 
Increasingly, the BizTalk community is waking up to the implications of Windows Azure and AppFabric.  This is an important step for developers to take.  Future versions of BizTalk Server will essentially join and extend the on-premise AppFabric world.  As Microsoft progressively melds their on/off premise worlds, BizTalk developers will increasingly have to grapple with integration of cloud based services, and integration of on-premise services via the cloud.  The book is careful to address this emerging field through the inclusion of a chapter on integration via the Azure AppFabric service bus.   As I mentioned above, this is applied specifically to SalesForce integration in a later chapter.  The AppFabric Service Bus is a rapidly-evolving part of the Azure platform, and is set to introduce a raft of new features in the coming months which will greatly extend the possibilities.  Eventually we will see cloud-based integration services appear in this space.  So, the inclusion of this chapter points out the direction of major future evolution of Microsoft’s capabilities and offerings in the integration space.

The book is not shy about providing guidance on practical problems and potential areas of confusion that developers may encounter.  The content is clearly based on real-world experience and benefits from ‘war stories’.  The value of such content cannot be overstated, and can save developers hours of pain and frustration when tackling new problems.  All in all, I thoroughly welcome this book.  My thanks to the authors, Kent Weare, Richard Seroter, Sergei Moukhnitski, Thiago Almeida and Carl Darski.


Sunday, September 18, 2011 #

I'm sitting in a nice new hotel in Redmond - the Hotel Sierra is well worth considering if you are staying in the area. I'm sleep-deprived and jet-lagged, and it's raining hard outside, but hey, I just got to play with one of the Samsung tablets they handed out at Build, and was not disappointed.  Microsoft is doing something truly remarkable with Win8 Metro.
 
On the other hand, I am deeply disappointed with the UK flag carrier, British Airways. Indeed, I've lost patience with them big-time. So forgive me for getting this off my chest. I am very much in the mood to do as much reputational damage to them as I can.
 
When I checked in on-line, they had booked me into one seat but I could see another with more legroom (a front row). Because of repeated experience over the last few years with defective headsets (I always carry my own earphones these days, after one flight where we went through three different headsets before finding one in which one of the earphones actually worked) and bad headset connections (having to constantly twiddle the jack to try to hear anything), I spent a little while consciously debating with myself the intangible risks of changing my seat – i.e., I could easily be swapping a ‘working’ seat for a broken one. Of course, there was no way to know, so I opted for the seat with more legroom.
 
MISTAKE! Forget about dodgy headsets. Nothing worked. Not even the reading light! Certainly not the inflight entertainment. They failed to show me the safety video (the steward did panic a little when he realised they had failed to comply with their legal obligations). So I sat for 9.5 hours in a grubby, worn-out cabin with nothing!
 
To be fair, they did offer to try to find me another seat (the plane was very full), but I opted for the legroom because I wanted to try to get some sleep. So I could probably have got in-flight entertainment. The point is, though, that this is now more than just an unfortunate couple of coincidences over the last two or three years. I am reasonably fair-minded and understand that sometimes, with the best will in the world, things just go wrong.  In any case, I was brought up to put up or shut up (as my mother would say - it's part of the culture).  However, I am forced to conclude that this is now a repeated trend that I experience regularly, to the point where I am consciously suspicious of the seats they give me, and clearly with good reason.  BA simply fails to maintain its cabins to anything like a reasonable or acceptable standard (I must trust they do a better job of maintaining the engines). I used to feel some patriotic pride in BA.  Not now.  It’s so sad to see the British flag carrier consistently deliver such an embarrassingly poor and second-rate service. I will be asking SolidSoft in future to book me onto a different carrier where possible, and will do what I can to convince the company to use other carriers by default.
 
Personally, I think the UK government should give flag carrier status to someone else (Virgin, I guess).
 

Thursday, September 15, 2011 #

I've just installed the Windows 8 Developer Preview.  These are some first impressions:

Installation of the preview was quite smooth and didn't take too long.  It took a few minutes to extract the files onto a virtual image, but feature installation then seemed to happen almost instantaneously (according to the feedback on the screen).  The installation routine then went into a preparation cycle that took two or three minutes.  Then the virtual machine rebooted and after a couple of minutes more preparation, up came the licence terms page. 

Having agreed to the licence, I was immediately taken into a racing-green slidy-slidy set of screens that asked me to personalize the installation, including entering my email address.  I entered my work alias.  I was then asked for another alias and password for access to Windows Live services and other stuff.  There was a useful link for signing up for a Windows Live ID.  I duly entered the information.  Only on the next screen did I spot an option to not sign in with a Live ID.  I didn't try this, but I felt a bit peeved that the use of a Live ID had appeared mandatory until that point.  I suspect the idea is to try to entice users to get a Live ID, even if they don't really want one.

A couple more minutes of waiting, et voilà.  The Metro Start screen appeared, covered in an array of tiles.  Simultaneously I got an email (on my work alias) saying that a trusted PC had been added to my Live account.  I clicked the confirmation link, signed into Windows Live and checked that my PC had indeed been confirmed. Then Alan started chatting, but that is a different matter.

Of course, Oracle's Virtual Box (and my Dell notebook) haven't quite mastered the art of touch yet.  For non-touch users a scroll bar appears at the bottom of the Metro UI. I had a moment's innocent fun pretending to swipe the screen with my finger while actually scrolling with the mouse.  Ah, happy days.  Then I discovered that the scroll wheel on my mouse does the equivalent of finger swiping on the Start page.

I opened up IE10.  Wow!  I thought IE9's minimal chrome story was amazing.  IE10 shows how far short IE9 falls.  There is no chrome.  Nothing.  Nada.  Oh sure, there is an address box and some buttons.  They appear when needed (a right mouse click without touch) and disappear again as quickly as possible.  It’s the same with tabs, which have morphed, in the Metro UI, into a strip of thumbnails that appear on demand and then get out of the way once you have made your selection.  Click on a new tab and you can navigate to a new page or select a page from a list of recents/favourites.  You can also pin sites to 'Start', which in this case means that they appear as additional tiles on the Start screen.  I played for a minute and then I suddenly experienced the same rush of endorphins that hit me the first time I opened Google Chrome a few years back.  Yes, sad to say, I fell in love with a browser!  A near invisible browser.  A browser that is IE, for goodness sake! A browser that does what so many wished IE would do years ago. It gets out of your way.

Do you like traditional tabs?  That's not a problem, because the good-ole desktop is just a click (or maybe a tap or a swipe) away.  There is even a useful widget on the now-you-see-me/now-you-don't address bar that takes you to desktop view.  It is a bit of a one-way trip, and results in a new IE frame opening on the desktop for the current page.  On the desktop, IE10 looks just like IE9.  It is, however, significantly more accomplished, and has closed much of the remaining gap between IE9 and the full HTML5 spec, together with some of the additional specifications that people incorrectly term 'HTML5'.  Microsoft has more than doubled its score on the (slightly idiosyncratic) HTML5 Test site (http://html5test.com/).  By that measure, IE10 now just pips Opera 11.51, Safari 5.1 and Firefox 6 to the post for HTML5 compliance (it beats Firefox by just 2 points, although it falls 1 point behind if you take bonus points into consideration), but it still trails Google Chrome 13.

Pinning caused me some issues which I suspect are simply bugs in the preview.  Having pinned a site, every time I went into the Metro version of IE10, I found that I couldn't click on links, hide the address bar, view tabs, etc.  I eventually had to kill my IE10 processes to get things working properly again.  I noticed that desktop and Metro IE10 processes appear with slightly different icons in the radically redesigned task manager.

One slight mystery here is that the beta of 64-bit Flash worked fine in Desktop view but not in Metro.  No doubt this will long since have become a matter of history by the time all this stuff ships.

For a few minutes, I was rather confused about the apparent lack of a proper Start menu in the desktop view.  If you click on Start, you go back to the Metro Start page.  And then the obvious dawned on me.  In effect, the new Metro Start screen is simply an elaboration of the old Start menu.  In previous versions, when you click Start, the menu pops up on top of the desktop.  It is quite rich, and allows you to start applications, perform searches for applications and files, or undertake various management and administrative tasks.  Windows 8 is really not very different.  However, the Start menu has now morphed into the new Metro Start page, which takes up the whole screen.  Instead of a list of pinned and recent applications, the Start screen displays tiles.  Move the mouse down to the bottom right corner (I don't know what the equivalent touch gesture is), and up pops a mini Start menu.  Clicking 'Start' takes you back to the desktop.  Click on 'Search' to search for applications, files or settings.  The settings feature is really powerful.  In fact, in Windows 7, searching for likely terms like 'Display' or 'Network' also returns results for settings, but you get far more hits in Windows 8.  The effect is rather like 'God Mode' in Windows 7.  [update: no, I'm wrong.  Windows 7 gives you a similar number of hits, BUT you need to click the relevant section in the search results to see them all.  I've clearly not been using Search effectively to date!]

The mini Start menu is available in the desktop as well.  In this case, if you click 'Search', the search panel opens up on the right of the screen and results then open up to take over the rest of the screen.  As I experimented, I found that while things were fairly intuitive, the preview does not always work in a totally predictable fashion.  I also suspect that the experience is currently better for touch screens than for traditional mice (I note Microsoft is busy re-inventing the mouse for a Windows 8 world - see http://www.microsoft.com/hardware/en-us/products/touch-mouse/microsite/).  This is hardly surprising given that Windows 8 is clearly unfinished at this early stage.  I suspect the emphasis to date has been on touch, rather than on mouse-driven equivalents.

Once I grasped the essential nature of the Metro Start page and its correspondence to the Start menu in earlier versions of Windows, I began to feel far more comfortable about the changes.  Sure, all the marketing hype is about the radical new UI design features.  However, this really is just the next stage of the evolution of the familiar Windows UI.  Metro is absolutely fabulous as a tablet UI (better than iOS/Android IMHO, which, after all, are really just the old 'icons on a desktop' approach with added gestures), and I think it will actually be quite good for desktops, once it is complete.  I note, though, that people have already discovered the registry hack to switch Metro off (see http://www.mstechpages.com/2011/09/14/disable-metro-in-windows-8-developer-preview/), and I think MS would be wise to offer this as a proper setting in the release version.  I anticipate, though, that I will not be switching Metro off, even on a non-touch desktop.

Shutting down presented a little difficulty.  I am used to using the Start menu to do this (the classic 'Start'-to-stop conundrum in Windows).  I couldn't find a 'Shut Down' command on the Start screen.  I eventually did Ctrl-Alt-Delete (or rather, Home-Del in Oracle VirtualBox) and then found a Shut Down option at the bottom left of the screen.

Booting the VirtualBox image takes 20 seconds on my machine.  20 seconds!  I'll say that again.  20 seconds!!!!  Yes, 20 seconds, just about exactly.  That's on a virtual machine on my notebook.  On the host, it would be significantly faster.  This is Windows like we have never known it before.  Frankly, it is the ability to boot fast and to run happily on ARM devices (I'll have to take that on trust, as I haven't yet seen it for real) that is the really important change - almost more important than the Metro UI.  The nay-sayers and trolls say it can't be done.  I think Microsoft has done it, though.

My last foray into Windows 8 this evening was to launch Visual Studio 2011 Express and have a quick peek at the templates for Win8 development.  I have a lot to explore.

They say first impressions are the most important.  When I saw the on-line video of Windows 8 a couple of months back, I almost fell off my chair in surprise.  Now that I have got my hands on an early version, I am really quite impressed.  Like everyone else, I couldn't see how Microsoft could possibly compete against Apple and Google in the tablet space.  Now...well...I look forward to seeing if and how Apple and Google will respond.  If it is true, as Steve Ballmer states, that Microsoft had 500 thousand downloads of the preview in less than 24 hours, then the tectonic plates have already shifted and Microsoft is firmly on track to become a major contender in the tablet space.  OK, that's only one in every 14,000 people on the face of planet Earth, and yes, the release version of Lion had double that number of downloads in the first 24 hours.  Nevertheless, it is a huge figure for an early technical preview of an operating system that won't ship for another year.  It means people are very, very keen to start developing for Metro (I know we are at SolidSoft).  And if Windows 8 succeeds on tablets, what will that mean for Windows Phone, which also uses the Metro concept?  Don't ever, ever underestimate Redmond.


Wednesday, September 14, 2011 #

Following the previous post, here is a second bit of wisdom.  In the Load method of a custom pipeline component, only assign values retrieved from the property bag to your custom properties if the retrieved value is not null.  Do not assign any value to a custom property if the retrieved value is null.

This is important because of the way in which pipeline property values are loaded at run time.  If you assign one or more property values via the Admin Console (e.g., on a pipeline in a Receive Location), BizTalk will call the Load method twice - once to load the values assigned in the pipeline editor at design time, and a second time to overlay these values with values captured via the admin console.  Let's say you assign a value to custom property A at design time, but not to custom property B.  After deploying your application, the admin console will display property A's value in the Configure Pipeline dialog box.  Note that it will be displayed in normal text.  If you enter a value for property B, it will be displayed in bold text.  Here is the important bit.  At runtime, during the second invocation of the Load method, BizTalk will only retrieve bold text values (values entered directly in the admin console).  Other values will not be retrieved.  Instead, the property bag returns null values.  Hence, if your Load method responds to a null by assigning some other value to the property (e.g., an empty string), you will overwrite the correct value and bad things will happen.

The following code is bad:

    object retrievedPropertyVal;
    propertyBag.Read("MyProperty", out retrievedPropertyVal, 0);

    if (retrievedPropertyVal != null)
    {
        myProperty = (string)retrievedPropertyVal;
    }
    else
    {
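        // BAD: this overwrites the value captured during the first Load call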
        myProperty = string.Empty;
    }

Remove the 'else' block to comply with the inner logic of BizTalk's approach.
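
For completeness, here is a minimal sketch of a Load method that follows this rule (a sketch only: IPropertyBag here is Microsoft.BizTalk.Component.Interop.IPropertyBag, and the property name and backing field are illustrative).  Many components also wrap the Read call in a try/catch, because Read throws if the named property is absent from the bag altogether:

    public void Load(IPropertyBag propertyBag, int errorLog)
    {
        object retrievedPropertyVal = null;

        try
        {
            propertyBag.Read("MyProperty", out retrievedPropertyVal, 0);
        }
        catch (ArgumentException)
        {
            // The property is not in the bag at all; treat this as null.
        }

        // Assign only when a value was actually retrieved.  A null means
        // 'no value supplied at this level', so the value loaded during the
        // earlier invocation of Load must be left intact.
        if (retrievedPropertyVal != null)
        {
            myProperty = (string)retrievedPropertyVal;
        }
    }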


Here is a small snippet of BizTalk Server wisdom which I will post for posterity.  Say you are creating a custom pipeline component with custom properties.  You create private fields and public properties, and write all the code to load and save the corresponding property bag values from and to your properties.  At some point, when you deploy the BizTalk application and test it, you get an exception from within your pipeline stating, unhelpfully, that "Value does not fall within the expected range."  Or maybe, while using the Visual Studio IDE, you notice that values you type into custom properties in the Property List are lost when you reload the pipeline editor.

What is going on?   Well, the issue is probably due to having failed to initialise your custom property fields.  If they are reference types and have a null value, the PipelineOM PropertyBag class will throw an exception when reading property values.  The Read method can distinguish between nulls and, say, empty strings, due to the way data is serialised to XML (e.g., in the BTP file).   Here is a property initialised to an empty string:

            <Property Name="MyProperty">
              <Value xsi:type="xsd:string" />
            </Property>

Here is the same property set to null:

            <Property Name="MyProperty" />

The first is OK.  The second causes an error and leads to the symptoms described above.

ALWAYS initialise property backing fields in custom pipeline components.  NEVER set properties to null programmatically.
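
As a minimal sketch of the pattern (the names are illustrative, not from any SDK sample):

    // Initialise the backing field at declaration, so the pipeline
    // designer never serialises a null into the BTP file.
    private string myProperty = string.Empty;

    public string MyProperty
    {
        get { return myProperty; }
        // Coalesce nulls away, so the property can never hold one.
        set { myProperty = value ?? string.Empty; }
    }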


Monday, August 22, 2011 #

In my previous post I mentioned the free AI course being run by Peter Norvig and Sebastian Thrun (122,314 sign-ups and rising) in conjunction with the Stanford University School of Engineering.  Professor Andrew Ng is running a related course on Machine Learning.  This is also a free on-line course run along the same lines as the AI course.  Over 30,000 people have signed up so far.
 
I mention this because Andrew has just confirmed that he will be speaking at this year’s Rules Fest. Rules Fest is all about the practical application by developers of reasoning technologies to real-world problems. It brings together people from across the whole spectrum of public and private sector organisations, including commercial and research organisations and academia, to inspire, inform and enlighten developers and architects. Machine learning is central to the rapidly evolving world of intelligent systems, and we are very excited that Andrew will be speaking at the event.

Saturday, August 20, 2011 #

Peter Norvig and Sebastian Thrun are offering a free on-line course on AI later this year in conjunction with Stanford University. The course is broadly based on Peter Norvig's book "Artificial Intelligence: A Modern Approach", written jointly with Stuart Russell. Along with my colleagues on the Rules Fest committee, we have been following this with interest. In a few days, well over 100,000 people have signed up (112,774 at the time of writing, and still increasing fast). The course broadly overlaps with our natural areas of interest at Rules Fest, which is all about the practical application of reasoning technologies in real-world computing. It is very encouraging to us to see the huge interest this course is generating. We will doubtless be contacting Peter, yet again, to see if he will speak at next year's conference (we keep plugging away at this).
 
In another development, we all woke up to the news a couple of days ago that HP, as part of its dramatic change in strategy, has bid almost $11Bn to acquire the enterprise search company, Autonomy. Autonomy offers proprietary technology that exploits Bayes' theorem, Shannon's information theory and specific forms of SVD to create an intelligent search platform with learning capabilities. Clearly, HP sees this type of technology as playing a major and lucrative role in their future.
 
Some time ago, at an event organised by the excellent BizTalk Users' Group in Sweden, I was asked to do a little crystal ball gazing. I trotted out the line that the next few years will see AI-related and reasoning technologies, formerly thought of as esoteric and impractical, find their place at the heart of enterprise computing alongside existing investments in traditional LoB/back office applications and integration services. With the advent of cloud computing and platforms such as Azure, we have the horsepower available to make this a practical and feasible possibility for mainstream enterprise computing. AI used to be a dirty word. No longer!

Tuesday, June 21, 2011 #

Microsoft has announced availability of the June CTP for Windows Azure AppFabric. See http://blogs.msdn.com/b/appfabric/archive/2011/06/20/announcing-the-windows-azure-appfabric-june-ctp.aspx. This is an exciting release and provides greater insight into where the AppFabric team is heading in terms of developer and management tooling. Microsoft is offering space in the cloud to experiment with the CTP, but this is limited, so register early to get a namespace!
You can download the SDK for the June CTP. However, we ran into a lot of trouble trying to do this today. Whenever we followed the link, we ended up on the page for the May CTP. We found what appeared to be a workaround which we were able to repeat on another box (and which I reported on Connect), but then a few minutes later I couldn't repeat it. Just now, the given link appears to be working every time in IE, but not in Firefox!   Frankly, the behaviour seems random!   It looks like the same URL points to two different pages, and I suspect that which page you end up on is hit and miss.
The link to the download page is http://www.microsoft.com/download/en/details.aspx?id=17691. If you end up on the wrong page, try again later and you may get to the right place. Or try googling "Windows Azure AppFabric SDK CTP – June Update" and following a link to this page. For some reason, that sometimes seems to work.
Good luck!

Thursday, June 2, 2011 #

I spent some time today summarising the new features in the Windows Azure AppFabric May CTP for SolidSoft consultants. Microsoft released the CTP a couple of weeks ago and has a second CTP coming out later this month.  I might as well publish this here, although it has been widely blogged on already.  There is nothing that you can’t glean from reading the release documents, but hopefully it will serve as a shorter summary.

The May CTP is all about the AppFabric Service Bus.  The bus has been extended to support ‘Messaging’ using ‘Queues’ and ‘Topics’.

‘Queues’ are really the Durable Message Buffers previewed in earlier CTPs.  MS has renamed them in this CTP.  They are not to be confused with Queues in Windows Azure storage!  Think of these as ‘service bus queues’.  They support arbitrary content types, rich message properties, correlation and message grouping.  They do not expire (unlike in-memory message buffers), and they allow user-defined TTLs.  Queues are backed by SQL Azure.  Messages can be up to 256KB, and each queue has a maximum size of 100MB (this will be increased to at least 1GB in the release version).  To handle messages larger than 256KB, you ‘chunk’ them within a session (rather like BTS large message handling for MSMQ).  The CTP currently limits you to 10 queues per service namespace.

Service Bus queues are quite similar to Azure Queues.  They support a RESTful API and a .NET API with a slightly different set of verbs – Send (rather than Put), Read and Delete (rather than Get), Peek-Lock (rather than ‘Peek’) and two verbs to act on locked messages – Unlock and Delete.  The locking feature is all about implementing reliable messaging patterns while avoiding the use of 2-phase-commit (no DTC!).  Queue management is very similar, but configuration is done slightly differently.  AppFabric provides dead letter queues and message deferral.  The deferral feature is a built-in temporary message store that allows you to resolve out-of-order message sequences.  Hey, this stuff is actually beginning to get my attention!
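
To make the Peek-Lock pattern concrete, here is a rough receive-loop sketch in C#.  I've used the type names from the Service Bus .NET client library (MessagingFactory, QueueClient, BrokeredMessage); treat these, along with the namespace, credentials and queue name, as indicative only - the CTP's exact API surface may differ:

    using Microsoft.ServiceBus;            // TokenProvider, ServiceBusEnvironment
    using Microsoft.ServiceBus.Messaging;  // MessagingFactory, QueueClient, BrokeredMessage

    // Sketch only: namespace, issuer credentials and queue name are illustrative.
    TokenProvider tokenProvider =
        TokenProvider.CreateSharedSecretTokenProvider("owner", "base64IssuerKey");
    Uri serviceUri = ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", string.Empty);
    MessagingFactory factory = MessagingFactory.Create(serviceUri, tokenProvider);

    QueueClient client = factory.CreateQueueClient("orders", ReceiveMode.PeekLock);

    while (true)
    {
        // Peek-Lock: the message is locked on receipt, not deleted.
        BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(30));
        if (message == null) continue;   // receive timed out; poll again

        try
        {
            ProcessOrder(message);       // your processing goes here
            message.Complete();          // 'Delete' the locked message
        }
        catch (Exception)
        {
            message.Abandon();           // 'Unlock' it for re-delivery
        }
    }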

Today’s in-memory message buffers will be retained for the time being.  MS is looking at how much advantage they provide as low-latency non-resilient queues before making a decision on their long-term future.  This is beginning to sound like the BizTalk Server low-latency debate all over again!  Currently, the documented recommendation is that we migrate to queues.

‘Topics’ provide new pub/sub capabilities.  A topic is…drum roll please…a queue!  The main difference is that it supports subscription.  I assume it has the same limitations and capabilities as a normal queue, although I haven’t seen this stated.  It is certainly built on the same foundation.  You can have up to 2,000 subscriptions to any one topic and use them to fan messages out.  Subscriptions are defined as simple rules that are evaluated against user- and system-defined properties of each message.  They have a separate identity to topics.  A single subscription can feed messages to a single consumer or can be shared between multiple consumers.  Unlike Send Port Groups in BizTalk, this multi-consumer model supports an ‘anycast’ model for competing consumers, where a single consumer gets each message on a first-come-first-served basis.  MS invites us to think of a subscription as a ‘virtual queue’ on top of the actual topic queue.  Potential uses for anycasting include basic forms of load balancing and improved resilience.
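
Again as a hedged sketch (same caveats about type names, and reusing the factory and credentials from the queue example above), defining a subscription with a simple rule might look something like this:

    // Sketch only: topic, subscription and property names are illustrative.
    NamespaceManager namespaceManager = new NamespaceManager(serviceUri, tokenProvider);

    if (!namespaceManager.TopicExists("orders"))
    {
        namespaceManager.CreateTopic("orders");
    }

    // The rule is evaluated against user- and system-defined message
    // properties; only matching messages reach this 'virtual queue'.
    namespaceManager.CreateSubscription("orders", "big-orders",
        new SqlFilter("Amount > 1000"));

    // Competing consumers share the same subscription (anycast): each
    // message is delivered to exactly one of them.
    SubscriptionClient subscription =
        factory.CreateSubscriptionClient("orders", "big-orders", ReceiveMode.PeekLock);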

The CTP supports AppFabric Access Control v2.0.  It is fully backward-compatible with the current service bus capabilities in AppFabric.

The CTP does not include load balancing and traffic optimization for relay.  These features were in earlier CTPs, but have been removed for the time being.  They may reappear in the future.

June CTP

The June CTP will introduce CAS (Composite Application Services).  CAS is a term used by other vendors (e.g., SAP) for similar features, and has been a long time coming in the Microsoft world.  The basic idea is that you build a model of a composite application - the services it contains, its configuration, etc. - and then drive a number of tasks from this model, such as build and deployment, runtime management and monitoring.  Some of us remember an ancient Channel 9 video on a BizTalk-specific, CAS-like modelling facility that MS was working on years ago.  It never saw the light of day.  However, one connection to make is that CAS will provide capabilities that are conceptually related to the notion of ‘applications’ in BizTalk Server.

We will get a graphical Visual Studio modelling tool to design and manage CAS models.  The CAS metamodel is implemented as a .NET library, allowing models to be constructed programmatically.  Models are consumed by the AppFabric Application Manager in order to automate deployment, configuration, management and monitoring of composite applications.

So, things are rapidly evolving.  However, we won’t see anything on Integration Services until, I suspect, next year.  It’s important to remember that the May CTP is all about broadening the existing Service Bus with messaging capabilities, rather than about delivering an integration capability.  So, even though we are seeing more BizTalk Server-like features, we are still a long way off having what Burley Kawasaki called a “true integration service” in the cloud.   Obviously, Azure Integration Services will exploit and build on the Service Bus, but a lot more needs to be done before we have integration-as-a-service as part of the Azure story.


Wednesday, April 13, 2011 #

Internet Explorer has made huge strides in the last couple of years. Microsoft has at last begun to lay the ghost of IE6 to rest with a solid, fast, standards-compliant, reasonably up-to-date (not quite the same thing), forward-looking browser with the cleanest UI in the business.
However, one issue clouds the horizon. Other browsers, most notably Google Chrome, rev much faster than IE. The importance of this is that, by bringing out new versions on shorter (much shorter, in some cases) timescales, other browsers gain a clear advantage. They get emerging specifications and technologies out to end-users faster and therefore shape the future of the web in a way that IE could not touch for many years. This, in turn, builds a sense of momentum which increases the loyalty of the user base. Indeed, it is a major factor in growing the user base, as we can see clearly with Chrome. With IE, by contrast, we have lived for years with a strong sense of Microsoft holding everyone else back.
Microsoft needs to speed up the cycle. I wondered if we would see any signs of this after the RTW of IE9 last month. Well, yes we can. Yesterday, Microsoft released Platform Preview 1 of IE10. Given Microsoft's record, this is absolutely astounding and provides further evidence that the bad old days for IE really are receding. Of course, after three weeks of development, things are not that different to IE9, but the HTML5 story is improved, thanks chiefly to several new CSS3 features.
IE9's HTML5 story is mixed. On the one hand, the browser lays a strong foundation for the future with comprehensive DirectX rendering and a much-repeated commitment to HTML5 as a future standard. On the other hand, the feature list count for HTML5 is noticeably lower than its nearest rivals. Put simply, IE9 supports less HTML5 than others, but supports it well. The IE team has repeatedly stated its position on this. Their main argument is that IE9 supports 'site-ready' HTML5 that will interoperate across all today's browsers and avoids features on which there is still disagreement or which may not make it to the final specification. A secondary theme is that it was more important to develop the sub-systems that underpin a great HTML5 experience than strive after having the longest HTML5 feature list.
Broadly, the argument plays quite well. Given IE's poor reputation amongst web developers, it is understandable that they want to stress their commitment to interoperability. Certainly, some of the other browsers are now carrying the burden of 'HTML5' features that won't actually make it into the W3C Recommendation (whenever it finally arrives) and features which are not broadly interoperable with other browsers. Of course, it's not really as clear cut as this. Some of the features that were once counted as part of HTML5 are still very much alive and kicking and will almost certainly be incorporated into future versions of HTML.
Try using each of the HTML5-enabled browsers in turn to browse the various HTML5 showcases provided by different vendors, and you will quickly get an idea of where HTML5 interoperability really stands today. And yes, the other HTML5 browsers tend to fare well on Microsoft's HTML5 preview site in terms of functionality, suggesting that Microsoft's take on HTML5 does indeed approximate to a 'lowest common denominator' interoperable approach. Of course, performance is a different story, and seems to be the dominant theme of Microsoft's showcase apps.
The myth that Microsoft spends its days subverting every standard going whilst all other players effortlessly deliver interoperable perfection has never been accurate since a brief period in the first half of the 1990's when the company first ‘won’ this reputation thanks to some fairly cynical actions (nothing, of course, to do with web standards). It is certainly the case that the complaint of poor standards compliance can no longer be honestly sustained with respect to IE9. However, IE9 illustrates a dilemma that all browser vendors face. It can take an eon for emerging web specifications to stabilise and for standards organisations to ratify and publish a given standard. It simply isn’t the role of organisations like the W3C to be in the vanguard of developing new specifications for everyone else to follow. There has been a huge misunderstanding about this for years. The W3C, and others, follow where the industry leads. They work to foster collaboration and agreement amongst those who are actually coming up with the ideas, trying out new innovations and pushing the boundaries. That means that the more innovative browsers will always be ahead of the curve and, consequently, in danger of subverting interoperability.
This, then, is the big question about IE’s HTML5 support. Now that Microsoft has convincingly closed the gap on its rivals, will it be content simply to follow where others lead, trailing at the rear end of a densely-packed group of HTML.next ‘feature-athletes’? Or should it attempt to move closer to the front of the pack in order to set the pace more effectively? Well, long-distance running is all about pacing and strategy. If you are in the race for the long haul, it often pays to hold back for a while.  I think it probably suits Microsoft's purposes for the time being to merely match the pace set by others whilst all the time quietly building on their blindingly fast DirectX rendering engine.  And after all, we are still some way off seeing an equivalent Microsoft HTML5 technology for mobile devices, which is where, increasingly, it really counts. I suspect they are attempting to maintain their stamina and build up their strength for a future push to the front. After all, Microsoft has always preferred being in the industry's driving seat.

 


Tuesday, April 12, 2011 #

A business analyst colleague asked me to comment on how he could improve a use case diagram he had inherited.  Someone had carefully transposed the outcome of a whiteboard session into a Visio UML diagram.  We discussed the difference between <<includes>> and <<extends>> and I whinged on about how 'traditional' UML use case diagrams don't always convey very useful information.  I made one practical suggestion, though.  The use cases and associations were all neatly colour-coded using two colours (red and green).  However, I couldn't tell what these colours signified, so I suggested that he annotate the Visio diagram to make the semantics clear.

It turns out that the colours reflect the fact they had two different whiteboard pens.

Aaaaaaagh.

 Tomb of the Unknown Semantic


Wednesday, April 6, 2011 #

Rules Fest 2011 banner
 
Well, it's official. Rules Fest 2011 is on track for October.   The web site has been launched and the "Call for Speakers" has been published. Welcome to the world’s premier technical conference for reasoning technologies.
 
http://rulesfest.org/html/home.html
 
Building on the success of last year's conference, this year will be bigger and better in every way. We’ve hired a larger venue - the Hyatt Regency in Burlingame, just south of SFO. We really loved being at the Dolce Hayes last year (warmly recommended), and it was sad to have to move on, but we need more space to accommodate a significantly larger event with exhibitions, a career centre and much more. We are also running an extensive program of boot camp sessions. The hotel has great amenities and is conveniently located for Silicon Valley and excursions to San Francisco.

Hyatt Regency - Burlingame

We have two keynote speakers this year. Paul Haley is one of the best-known names in the industry – a co-developer of ART and ex-chairman of Haley Systems, subsequently acquired by Oracle. His company was responsible for the Haley Business Rules Suite (now evolved into Oracle Policy Automation). Paul is a consultant at Paul Allen’s company, Vulcan, where he is involved with Project Halo. Said Tabet, another ART veteran, co-founded the RuleML Initiative a decade ago. He has contributed extensively to the development of W3C and OMG rules standards over several years and is focussed on the Semantic Web and the use of reasoning technologies in financial services.
 
As vice-chair, my job is to oversee the call for speakers and plan the detailed agenda over the next couple of months. If you work with rules engines or other reasoning technologies and have a burning desire to share your knowledge, experience and insight with developers, technologists, researchers, technical leads, etc., then submit a short abstract for a presentation no later than 6th May. You’ll find all the details and T&Cs on the web site. If you just need to learn more about the practical application of reasoning systems (rules engines, constraint solvers and the like) to real-world problems, then book yourself a place at the conference. Book by May 15th to get the best discounts.
 
With Microsoft working on revamped rules offerings for AppFabric (as mentioned at last year's PDC), it would be great to see a good .NET turnout at the event.  Let me know if you plan to attend and are a .NET developer with a passion for rules.  If you are a BizTalk developer, even better.  I would love to do a boot camp for MS BRE, but would need some indication of commitment to this a.s.a.p. in order to look for the funding to host a day of intensive hands-on experience.  So, if you are interested, ping me via the 'Contact' menu option on this site.
 

 


Monday, April 4, 2011 #

We all love benchmarks! With the recent release of new versions of some of the major browsers, and as a small diversion over the weekend, I ran five well-known browsers against five well-known JavaScript micro-benchmarking suites on my laptop. The results are reproduced below. I have ranked the results for each benchmark suite from best to worst.
Celtic Kane – old version (current version was unavailable)
(Smaller is better)
Opera 11   77 ms
Safari 5   93 ms
Chrome 10   119 ms
IE 9   152 ms
FF 4   154 ms

Kraken 1.0 (Mozilla)
(Smaller is better)
FF 4   7,555.6 ms
Chrome 10   8,439.8 ms
Opera 11   12,918.8 ms
IE 9   16,551.9 ms
Safari 5   19,099.8 ms

Dromaeo (Mozilla) (all JavaScript tests)
(Bigger is better)
Chrome 10   457.53 runs/s
IE 9   403.96 runs/s
FF 4   386.74 runs/s
Opera 11   369.49 runs/s
Safari 5   257.42 runs/s

V8 v6 (Google)
(Bigger is better)
Chrome 10   7,737
FF 4   3,111
Opera 11   3,050
Safari 5   2,319
IE 9   2,119

SunSpider 0.9.1 (WebKit)
(Smaller is better)
IE 9   249.8 ms
Opera 11   289.9 ms
FF 4   295.2 ms
Chrome 10   309.0 ms
Safari 5   353.9 ms

So, what does this prove? Absolutely nothing! It's impossible to pick an overall winner from these results, or even to determine any particular trend.   I'll provide two observations, however. First, comparative micro-benchmarking remains as problematic as ever. Pick your preferred test to 'prove' whatever you wish.   Second, competition between browsers remains fierce. As a result, JavaScript performance has improved massively across the board in the last couple of years.  That's great news.  It means we are all winners!