Charlie Mott

Monday, February 4, 2013 #

source: http://geekswithblogs.net/charliemott

I am using the Windows Azure PowerShell CmdLets to perform deployments of Cloud Services.

This was all working well until I started to receive the error “total requested resources are too large for the specified vm size”.  Deployments via Visual Studio worked fine.

A few details about VM size are here: http://msdn.microsoft.com/en-gb/library/windowsazure/ee814754.aspx

The size of my extracted package files must have grown beyond 23,256 MB (calculated as: 229,400 MB (small VM disk space for local storage) - 200,000 MB (default diagnostics store size) - 6,144 MB (reserved for system files)).

I considered two options:

  1. Update from Small to Medium VM’s for the web role.
  2. Reduce the size of the diagnostics store.

I implemented option 2 and reduced the diagnostics store from the default 200,000 MB to 150,000 MB.  My deployments now work again.
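For reference, I believe the usual way to do this is to override the DiagnosticStore local resource size in ServiceDefinition.csdef. The snippet below is a minimal sketch (the role name and size are illustrative):

    <WebRole name="WebRole1" vmsize="Small">
      <LocalResources>
        <!-- Shrink the diagnostics store from the 200,000 MB default to free up local disk space. -->
        <LocalStorage name="DiagnosticStore" sizeInMB="150000" cleanOnRoleRecycle="false" />
      </LocalResources>
    </WebRole>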


Article Source: http://geekswithblogs.net/charliemott

Tracking the count of TODO’s in your solution is useful for the following:

  • Use it as an additional measure of code quality.
  • Predict estimated completion dates through use of a burndown chart.
  • Once in production, TODO’s can be a good measure of how much Technical Debt lives within a solution.

As such, I have created a “FilesTextSearch” WF activity that can be plugged into a TFS build template.  I’ve uploaded this as patch 13789 in the codeplex Community TFS Build Extensions project.

The default configuration of this activity is used to count TODO’s in the solution code.

Activity Properties

- Search all directories below this base directory.
- Comma-separated list of file extensions to search.
- List of strings to search for (case insensitive).

This produces the following output in the build log:

[Screenshot: TODO counts reported in the build log]
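For illustration, the core of the activity boils down to a recursive file search plus a case-insensitive string count. The sketch below is not the actual activity source (class and method names are made up), but shows the idea:

    using System;
    using System.IO;
    using System.Linq;

    public static class TodoCounter
    {
        // Counts case-insensitive occurrences of each search string in all matching files.
        public static int Count(string baseDirectory, string[] extensions, string[] searchStrings)
        {
            var files = extensions
                .SelectMany(ext => Directory.GetFiles(baseDirectory, "*." + ext, SearchOption.AllDirectories));

            int total = 0;
            foreach (var file in files)
            {
                var text = File.ReadAllText(file);
                foreach (var search in searchStrings)
                {
                    // Count every occurrence of the search string, ignoring case.
                    int index = 0;
                    while ((index = text.IndexOf(search, index, StringComparison.OrdinalIgnoreCase)) >= 0)
                    {
                        total++;
                        index += search.Length;
                    }
                }
            }
            return total;
        }
    }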

Future Improvements

TODO Formatting

See my post on recommendations for TODO formatting: http://geekswithblogs.net/charliemott/archive/2011/11/22/todo-formatting.aspx.


Monday, October 8, 2012 #

Currently, the Azure PaaS does not offer a distributed\resilient task scheduling service.  If you do want to host a task scheduling product\solution off-premise (and ideally use Azure), what are your options?

Update: 09/01/2013 - There is a new way to schedule tasks in Windows Azure using the Scheduler (preview) provided with the Azure Mobile Services. See http://fabriccontroller.net/blog/posts/job-scheduling-in-windows-azure.

Update: 25/01/2013 - Yet another option. The new Azure “Add-on Store” has a scheduler.
See:
http://weblogs.asp.net/scottgu/archive/2013/01/23/windows-azure-store-new-add-ons-and-expanded-availability.aspx.

Update: 13/12/2013 - Finally there is a Windows Azure Scheduler Service.  See: http://weblogs.asp.net/scottgu/archive/2013/12/12/windows-azure-new-scheduler-service-read-access-geo-redundant-storage-and-monitoring-updates.aspx

________________

PaaS

Option 1: Worker Roles

Use a worker role to schedule and execute actions at specific time periods.  There are a few frameworks available to assist with this; I found the Azure Toolkit option the simplest to implement.

Step 1 : Create a domain entity implementing IJob for each job to schedule.  In this sample, I asynchronously call a WCF service method.

    namespace Acme.WorkerRole.Jobs
    {
        using AzureToolkit;
        using ScheduledTasksService;

        public class UploadEmployeesJob : IJob
        {
            public void Run()
            {
                // Call Tasks Service
                var client = new ScheduledTasksServiceClient("BasicHttpBinding_IScheduledTasksService");
                client.UploadEmployees();
                client.Close();
            }
        }
    }

Step 2 : In the worker role run method, add the jobs to the toolkit engine.

    namespace Acme.WorkerRole
    {
        using AzureToolkit.Engine;
        using Jobs;

        public class WorkerRole : WorkerRoleEntryPoint
        {
            public override void Run()
            {
                var engine = new CloudEngine();

                // Add Scheduled Jobs (using CronJob syntax - see http://www.adminschoice.com/crontab-quick-reference).

                // 1. Upload Employee job - 8.00 PM every weekday (Mon-Fri)
                engine.WithJobScheduler().ScheduleJob<UploadEmployeesJob>(c => { c.CronSchedule = "0 20 * * 1-5"; });
                // 2. Purge Data job - 10 AM every Saturday
                engine.WithJobScheduler().ScheduleJob<PurgeDataJob>(c => { c.CronSchedule = "0 10 * * 6"; });
                // 3. Process Exceptions job - Every 5 minutes
                engine.WithJobScheduler().ScheduleJob<ProcessExceptionsJob>(c => { c.CronSchedule = "*/5 * * * *"; });

                engine.Run();
                base.Run();
            }
        }
    }

Pros:

  • The Azure Toolkit option is simple to implement.

Cons:

  • For the AzureToolkit option, you are limited to a single worker role instance.  Otherwise, the jobs will be executed multiple times, once for each worker role instance.
  • You are paying for a continuously running worker role, even if it just processes a single job once a week.  If you only have a few scheduled tasks to run calling asynchronous services hosted in different web roles, an extra small worker role is likely to be sufficient.  However, even an extra small worker role still costs $14.40/month (03/09/2012).

Option 2: Use a Scheduled Task on an Azure Web Role calling a console app

Set up a Windows Scheduled Task on the Azure Web Role. This calls a console application that calls the WCF service methods that run the task actions. This design is described here:

Pros:

  • Fairly easy to implement.

Cons:

  • Supportability - I RDC’ed onto the Azure server and stopped the scheduled task. I then rebooted the machine and the task was re-started. I also tried deleting the task and rebooting; the same thing occurred. The only way to permanently guarantee that a task is disabled is to do a fresh deployment. I think this is a major supportability concern.
  • Scalability - multiple instances would trigger multiple tasks. You can only have one instance of the scheduled task web role. The guidance implements setup of the scheduled task as part of a web role instance. But if you have more than one instance in a web role, the task will be triggered multiple times for each scheduled action (once per machine). Workaround: if we wanted to use scheduled tasks for another client with a scalable WCF service, then we could include the console app & task scripts in a separate web role (e.g. an empty WCF service with no real purpose to it).

SaaS

Option 3: Azure Marketplace

I thought that someone might be offering this type of service via the Azure marketplace. At the point of writing this blog post, I did not find anyone doing so.

https://datamarket.azure.com/

Cons:

  • Nobody currently offers this type of service on the Azure Marketplace.

Option 4: Online Job Scheduling Service Provider

There are plenty of online providers that offer this type of service on a pay-as-you-go approach.  Some of these are free for small usage.   Many of these providers are listed here:

http://en.wikipedia.org/wiki/Webcron

Pros:

  • No bespoke development for the scheduler.

Cons:

  • Reliance on a third party.

IaaS

Option 5: Setup Scheduling Software on Azure IaaS VM’s

One of the job scheduling software offerings could be installed and configured on Azure VM’s.  A list of software options is here:

http://en.wikipedia.org/wiki/List_of_job_scheduler_software

Pros:

  • An enterprise distributed\resilient task scheduling service.

Cons:

  • VM setup and maintenance.
  • Software licence costs.

Option 6: VM Gallery

At the time of writing this blog post, I did not spot a VM in the gallery that included pre-installation of any of the above software options.

Cons:

  • There is currently no VM template with any of this software pre-installed.

Summary

For my current project, which had a small handful of tasks to schedule and a limited budget, I chose option 1 (a worker role using the Azure Toolkit to schedule tasks).

If I were building an enterprise scale solution for the future, options 4 and 5 are currently worthy of consideration.

Hopefully, Microsoft will include task scheduling as part of their PaaS offerings in the future.


Wednesday, August 8, 2012 #

This article discusses, from a BizTalk\BizUnit perspective, implementing the Channel Purger Enterprise Integration Pattern to “keep 'left-over' messages on a channel from disturbing tests.”

If one test fails, you may see subsequent tests also fail.  This is because service instances triggered from the failed test may still be running in BizTalk.  These can then produce results that conflict with the expected results of these following tests. This could include: event log messages; files arriving in the target locations; suspended messages; etc. 

However, it takes a long time to run some clean-up operation steps such as: terminating running service instances; clearing target folders; etc.  You don’t want to do this after every successful test.

In MSTest, you can use the TestContext.CurrentTestOutcome property in a [TestCleanup] method to run these clean-up operations only when a test fails.  This post on the MSDN forum tells you how to use this property in this way:  http://social.msdn.microsoft.com/Forums/sa/vsautotest/thread/d95523f9-1b74-4daa-bfc3-c393c1ef6c15
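A minimal sketch of that idea (the test class and clean-up method names are illustrative) would look something like this:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class OrderProcessTests
    {
        // MSTest injects the current test context into this property.
        public TestContext TestContext { get; set; }

        [TestCleanup]
        public void CleanUp()
        {
            // Only run the expensive clean-up when the test did not pass.
            if (TestContext.CurrentTestOutcome != UnitTestOutcome.Passed)
            {
                // e.g. terminate running BizTalk service instances, clear target folders, etc.
                PurgeChannels();
            }
        }

        private void PurgeChannels()
        {
            // Clean-up steps go here.
        }
    }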


Friday, July 13, 2012 #

When setting up publishing for an Azure project from Visual Studio, are you unable to download the .publishsettings file? Are you using IE9? Is the error “<file> couldn't be downloaded”?

If so, change the IE9 default settings to uncheck "Do not save encrypted pages to disk".

For further information, see: http://support.microsoft.com/kb/2549423


Tuesday, June 19, 2012 #

I’ve published a TechNet wiki page here. This identifies links that show BizTalk implementations of the patterns described in the book Enterprise Integration Patterns.

I look forward to seeing others’ contributions and updates to this wiki page.


Tuesday, June 5, 2012 #

I was recently in a position where I had to justify why effort should be spent developing Stubbed Integration Tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggest use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests.

The industry average for writing unit test code is around 30% of the original development budget.  This will also apply to BizTalk projects and BizUnit.

Stubs

Stubs should be developed to isolate your BizTalk development team from external dependencies. This is described by Michael Stephenson here.

Failing to do this can result in the following problems:

  • In contract-first scenarios, the external system interface will have been defined.  But the interface may not have been set up or even developed yet for the BizTalk developers to work with.
  • By the time you open the target location to see the data BizTalk has sent, it may have been swept away.
  • If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed or it may be scheduled to be processed later. Learning how to use the source\target systems and investigations into where things go wrong in these systems will slow down the BizTalk development effort.
  • By the time the data is visible in a UI it may have undergone further transformations.
  • In larger development teams working together, do you all use the same source and target instances? How do you know which data was created by whose tests? How do you know which event log error messages are whose?  Another developer may have “cleaned up” your data.
  • It is harder to write BizUnit tests that clean up the data\logs after each test run.
  • What if your B2B partners' source or target system cannot support the sort of testing you want to do. They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT\UAT teams.
  • There may be licensing costs for setting up an instance of the external system.

The stubs I like to use are generic stubs that can accept\return any message type.  Usually I need to create one per protocol. They should be driven by BizUnit steps to: validate the data received; and select a response message (or error response). Once built, they can be re-used for many integration tests and from project to project.

I’m not saying that developers should never test against a real instance.  Every so often, you still need to connect to real developer or test instances of the source and target endpoints\services. The interface developers may ask you to send them some data to see if everything still works.  Or you might want some messages sent to BizTalk to get confidence that everything still works beyond BizTalk.

Tests

Automated “Stubbed Integration Tests” are usually built using the BizUnit framework. These facilitate testing of the entire integration process from source stub to target stub. This ensures that all of the BizTalk components are configured together correctly to meet all the requirements. More fine-grained unit testing of individual BizTalk components is still encouraged.  But BizUnit provides by far the easiest way to test some component types (e.g. orchestrations).

Using BizUnit with the Behaviour Driven Development approach described by Mike Stephenson delivers the following benefits:

source: http://biztalkbddsample.codeplex.com – Video 1.

  • Requirements can be easily defined using Given/When/Then
  • Requirements are close to the code so easier to manage as features and scenarios
  • Requirements are defined in domain language
  • The feature files can be used as part of the documentation
  • The documentation is accurate to the build of code and can be published with a release
  • The scenarios document the behaviour effectively without being excessive
  • The scenarios are maintained with the code
  • There’s an abstraction between the intention and implementation of tests making them easier to understand
  • The requirements drive the testing

These same tests can also be used to drive load testing as described here.

If you don't do this ...

If you don't follow the above “Stubbed Integration Tests” approach, the developer will need to manually trigger the tests. This has the following risks:

  • Developers are unlikely to check all the scenarios and all the expected results each time.
  • After the developer leaves, these manual test steps may be lost. What test scenarios are there?  What test messages did they use for each scenario?
  • There is no mechanism to prove adequate integration test coverage.

A test team may attempt to automate integration test scenarios in a test environment through the triggering of tests from a source system UI. If this is a replacement for BizUnit tests, then this carries the following risks:

  • It moves the tests downstream, so problems will be found later in the process.
  • Testers may not check all the expected conditions within the BizTalk infrastructure such as: event logs, suspended messages, etc.
  • These automated tests may also get in the way of manual tests run on these environments.

Wednesday, March 21, 2012 #

Article Source: http://geekswithblogs.net/charliemott

BizUnit is defined as a "Framework for Automated Testing of Distributed Systems."

I've never seen a catchy label to describe what type of tests we create using this framework. They are not really "Unit Tests" that's for sure. "Integration Tests" might be a good definition.  However, I want a label that clearly separates them from the manual "System Integration Testing" phase of a project where real instances of the integrated systems are used.

Among some colleagues, we brainstormed some suggestions:

  • Automated Integration Tests
  • Stubbed Integration Tests
  • Sandboxed Integration Tests
  • Localized Integration Tests
  • LINK Tests

All these give a good indication of the tests that are being done. I think "Stubbed Integration Tests" is most catchy and descriptive. So I will use that until someone comes up with a better idea.


Tuesday, November 22, 2011 #

Article Source: http://geekswithblogs.net/charliemott

TODO's in code should only be used for a short period of time to remind you that something needs to be done. They should be addressed as soon as possible.

In order to know who owns a TODO task and how long it’s been outstanding, my company uses the following formatting standard:

Format:     // TODO : Owner Initials – Date Created – Description of task.

Sample:     // TODO : CM – 2011\11\22 – Move this class to a reusable location.

Using this pattern makes it easier to find and review items in the Visual Studio Task List or the Resharper TODO explorer.

The Carrot

In order to make it easy for developers to apply this rule, create a Visual Studio code snippet or Resharper template. Using the Resharper template facility provides macros for the current user name and the current date. Below is a C# template.  You could also make similar snippets\templates for other languages.

[Screenshot: ReSharper live template for the TODO comment]
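For reference, the template body is essentially the comment pattern with placeholders bound to ReSharper macros (a sketch; the exact macro names vary between ReSharper versions):

    // TODO : $user$ - $date$ - $description$

Here $user$ is bound to the "current user name" macro, $date$ to the "current date" macro, and $description$ is left as an editable placeholder.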

This actually makes the formatting use the logged in username rather than initials.  Ideally, I would have preferred to use the account logged into TFS.

Sample:     // TODO : cmott – 2011\11\22 – Move this class to a reusable location.

The Stick

How do you enforce such a rule? I have created a custom StyleCop rule. I followed the approach provided by the StyleCop Contrib project. This has a simple base class and easy to use unit testing framework. I have uploaded this TODO analyzer as patch 10902 to that project.

[Screenshot: StyleCop warning for a badly formatted TODO comment]


Monday, July 25, 2011 #

Article Source: http://geekswithblogs.net/charliemott

What is the best approach for developing a WCF client application that sends messages to the WCF on-ramps exposed by the BizTalk ESB Toolkit?

I had considered various approaches:

  1. Generate an xml message from a string template using string replacements.  Then submit the message to the ESB endpoint using code similar to the Itinerary Test Client provided with the ESB Toolkit. Then for the response use XPath to get the required data.
    - This is obviously a lot of coding effort, inflexible and difficult to maintain.
  2. Modifying the generic wsdl for the ESB WCF service as described here.  
    - However, this is too much of an administrative nightmare.  You need to create a wsdl for each possible service method. In addition it does not support the catching of expected SOAP faults without additional effort.
  3. Our team then considered an approach similar to that described here.  This uses a generic ServiceContract interface that can take any object and serialise it to a System.ServiceModel.Channels.Message.
    - However, the problem is that developers will still need to manually generate the DTO classes using a tool such as svcutil (getting the schemas from the location specified in the wsdl).   As such, there is still a lot of effort required.
  4. A colleague then asked "why don't you just add a service reference to the end service and then re-point the client endpoint config to the ESB on-ramp".  I had previously dismissed this idea because the end service exposes many methods\actions while the ESB on-ramps expose just one with a different name.  However, he suggested you should be able to alter the action.

Using a WCF Message Inspector

Option 4 is the best option I think.  You can modify the Action header using a custom behaviour with a custom Message Inspector following the approach described here.

The modified message inspector code for the request-response on-ramp would be as follows:
 
        public object BeforeSendRequest(ref System.ServiceModel.Channels.Message request, IClientChannel channel)
        {
            request.Headers.Action = "http://microsoft.practices.esb/ProcessRequestResponse";
            return null;
        }
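For completeness, the OnRampRequestResponseBehavior used in the client code below simply registers this inspector on the client runtime. A minimal sketch (assuming the inspector class above is named OnRampRequestResponseMessageInspector) would be:

    using System.ServiceModel.Channels;
    using System.ServiceModel.Description;
    using System.ServiceModel.Dispatcher;

    public class OnRampRequestResponseBehavior : IEndpointBehavior
    {
        public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }

        public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
        {
            // Register the message inspector that rewrites the Action header for the ESB on-ramp.
            clientRuntime.MessageInspectors.Add(new OnRampRequestResponseMessageInspector());
        }

        public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }

        public void Validate(ServiceEndpoint endpoint) { }
    }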
 
The client service code would be something like this:

        public ProductsDocument FindProducts(ProductSearchCommand searchCommand)
        {
            var client = new ProductServiceClient(this.EsbOnRampRequestResponseEndpointConfig);
            client.Endpoint.Behaviors.Add(new OnRampRequestResponseBehavior()); // this could be done via endpoint config (see here)
            ProductsDocument products = client.Find(searchCommand);
            client.Close();
            return products;
        }
 
The client direct call endpoint config would be modified to something like this to point to the ESB:
 
      <endpoint address="https://MyMachine/ESB.ItineraryServices.Generic.Response.WCF/ProcessItinerary.svc"
                binding="wsHttpBinding"
                bindingConfiguration="WSHttpBinding_ITwoWayAsync"
                contract="ProductService.ProductService"
                name="EsbOnRamp_RequestResponse">
        <identity>
          <userPrincipalName value="MyMachine\BizTalkUser" />
        </identity>
      </endpoint>

Notes

  • In BizTalk, the WCF receive location for the default ESB on-ramp has been modified so that the WCF adapter Message “UseBodyPath” settings use "Body" for the "Inbound BizTalk message body" and "Outbound WCF message body".
  • The default on-ramp uses wsHttpBinding.  This binding is based on SOAP 1.2.  As such, SOAP faults will be returned using a SOAP 1.2 namespace.  If the end service uses basicHttpBinding, then the client code will not be able to catch the SOAP 1.1 fault.  As such, mixing SOAP versions for the on-ramp and services called by an itinerary would require some additional research.

Friday, May 6, 2011 #

Article Source: http://geekswithblogs.net/charliemott

How do you send "Call Context" information in the header message to Dynamics AX 2012 WCF services from BizTalk? 

One difference between AX 2009 and AX 2012 services is that you no longer always need to provide destination endpoint context information. This is described here:

In previous releases, each AIF endpoint was associated with a specific company. Microsoft Dynamics AX 2012 does not require that you associate integration ports with a specific company. You can use the integration port functionality to restrict service calls to a specific company. For an inbound message, the services framework obtains the company from the message header. If the message header does not contain a company context, the services framework uses the default company for the submitting user.

Adding WCF.OutboundCustomHeaders

In my requirement, I did need to send the Company code.

There is a very good article here about making BizTalk send header information to AX 2009 using WCF Services.

However, the implementation is slightly different with AX 2012. The schema namespace you need is http://schemas.microsoft.com/dynamics/2010/01/datacontracts

So the code will look like this. Notice I have only provided the header context information needed:

wcfHeader = System.String.Format(@"<headers><CallContext xmlns=""http://schemas.microsoft.com/dynamics/2010/01/datacontracts""><Company>{0}</Company></CallContext></headers>", companyCode);

msgRequest(WCF.OutboundCustomHeaders) = wcfHeader;

Other Tips

You may need to increase the size of the WCF-NetTcp adapter "Maximum receive message size". Otherwise, if the responses are large, you may get a System.ServiceModel.CommunicationException: The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error.


Monday, April 4, 2011 #

Article Source: http://geekswithblogs.net/charliemott

I hit problems in an orchestration consuming WCF services exposed by Microsoft Dynamics AX 2012 (AX6). When attempting to catch a fault response, I was getting the error "received unexpected message type 'http://www.w3.org/2003/05/soap-envelope#fault'"

In order to fix this, I needed to change the schema for the fault message in the port type that had been created using "Add Generated Items." I changed it to the SOAP 1.2 fault type.

[Screenshot: selecting the SOAP 1.2 Fault schema]

Thanks to this blog for pointing me in this direction:
http://masteringbiztalkserver.wordpress.com/2010/11/21/catching-soap-faults-from-wcf-service-in-biztalk-orchestration/


Friday, April 1, 2011 #

Article Source: http://geekswithblogs.net/charliemott

Often it is handy to organise BizUnit tests into test lists. This way, longer-running tests and edge case tests can be removed from check-in builds to keep them running a bit more quickly. However, updating a .vsmdi test list for each individual test method is time consuming.

A better solution is to use the new TestCategory attribute that comes with MSTest in .NET 4 (NUnit has supported a categories feature for a while).  In the TFS build processes, the MSTest command line can make use of the /category flag. The check-in build will run tests categorised as Happy Flow. A full (nightly) build will run the lot. (Note, it is possible to specify multiple categories.)

In my solution, I have created constants for the following categories:

Acme.TestCategory.TestType.WakeUpBizTalk
(e.g. tests where we just fire messages through BizTalk without checking the results)
  
Acme.TestCategory.SourceSystem.DynamicsAx
Acme.TestCategory.SourceSystem.DynamicsCrm
Acme.TestCategory.SourceSystem.Maximo
etc
  
Acme.TestCategory.TargetSystem.DynamicsAx
Acme.TestCategory.TargetSystem.Icon
etc
  
Acme.TestCategory.UseCase.HappyFlow
Acme.TestCategory.UseCase.AlternativeFlow (e.g. fault scenarios)

Acme.TestCategory.DataType.CustomerServices
Acme.TestCategory.DataType.Journal
etc
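
A sketch of how these constants might be declared so they can be used directly in the attributes (the namespace and values are illustrative, not the actual project code):

    namespace Acme.TestCategory
    {
        // Category names used with the MSTest [TestCategory] attribute.
        public static class SourceSystem
        {
            public const string DynamicsAx = "Acme.TestCategory.SourceSystem.DynamicsAx";
            public const string DynamicsCrm = "Acme.TestCategory.SourceSystem.DynamicsCrm";
            public const string Maximo = "Acme.TestCategory.SourceSystem.Maximo";
        }

        public static class UseCase
        {
            public const string HappyFlow = "Acme.TestCategory.UseCase.HappyFlow";
            public const string AlternativeFlow = "Acme.TestCategory.UseCase.AlternativeFlow";
        }
    }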

So a sample test method with these attributes would be as follows.  Notice the multiple target systems.

/// <summary>
/// Test valid Tax Codes message from Dynamics AX
/// </summary>
[TestMethod]
[TestTimer(5, 0)]
[TestCategory(SourceSystem.DynamicsAx)]
[TestCategory(TargetSystem.Maximo)]
[TestCategory(TargetSystem.Oms)]
[TestCategory(DataType.TaxCodes)]
[TestCategory(UseCase.HappyFlow)]
public void DynamicsAx_TaxCodes_Valid_Success()
{
       ...
}

Note: For details of the TestTimer attribute, see here. This is also a handy feature for BizUnit tests to validate performance.


Monday, December 20, 2010 #

source: http://geekswithblogs.net/charliemott

Update (10/06/2011): I no longer recommend the approach below.  It is too much of an administrative nightmare to create a wsdl for each possible service method call.  See new advice here: http://geekswithblogs.net/charliemott/archive/2011/07/25/esb-toolkit-clients.aspx.

Question

How do you make it easy for client systems to consume the generic WCF services exposed by the ESB Toolkit using messages that conform to agreed schemas\contracts? 

Usually the developer of a system consuming a web service adds a service reference using a WSDL. However, the WSDL’s for the generic services exposed by the ESB Toolkit use messages with type="xsd:anyType".  These do not make it easy for the client system to use input\output messages that conform to agreed schemas\contracts.

Recommendation

Take a copy of the generic WSDL and modify it to use the proper contracts.

This is very easy.  It will work with the generic on ramps so long as the <part>?</part> wrapping is removed from the WCF adapter configuration in the BizTalk receive locations. 

Attempting to create a WSDL where the input and output messages are sent/returned with a <part> wrapper is a nightmare.  I have not managed it. 

Consequences

I can only see the following consequences of removing the <part> wrapper:

  • ESB Management Portal - unless you intend to modify the MessageResubmitter.cs code and bindings, do not implement the above change to the "OnRamp.Itinerary.WCF" receive location.  This is used by the portal to resubmit messages using WCF.
  • ESB Test Client – I needed to modify the out-of-the-box ESB Test Client source code to make it send non-wrapped messages. 
  • Flat file formatted messages – the endpoint will no longer support flat file message formats.  However, even if you needed to support this integration pattern through WCF, you would most likely want to create a separate receive location anyway with its own independently configured XML disassembler pipeline component.

Instructions

These steps show how to implement this for a request-response scenario.

WCF Receive Locations

  • In BizTalk, for the WCF receive location for the ESB on-ramp, set the adapter Message settings\bindings to “UseBodyPath”:

    • Inbound BizTalk message body  = Body
    • Outbound WCF message body = Body

Create a WSDL for each supported integration use-case

  • Save a copy of the WSDL for the WCF generic receive location above that you intend the client system to use. Give it a name that mirrors the interface agreement (e.g. Esb_SuppliersSearchCommand_wsHttpBinding.wsdl).
     
  • Add any xsd schemas files imported below to this same folder.
     
  • Edit the WSDL to import schemas

    For example, this:

<xsd:schema targetNamespace="http://microsoft.practices.esb/Imports" />

… would become something like:

<xsd:schema targetNamespace="http://microsoft.practices.esb/Imports">
    <xsd:import schemaLocation="SupplierSearchCommand_V1.xsd" 
                          namespace="http://schemas.acme.co.uk/suppliersearchcommand/1.0"/>
    <xsd:import  schemaLocation="SuppliersDocument_V1.xsd" 
                            namespace="http://schemas.acme.co.uk/suppliersdocument/1.0"/>
</xsd:schema>

  • Modify the Input and Output message

    For example, this:

    <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_InputMessage">
      <wsdl:part name="part" type="xsd:anyType"/>
    </wsdl:message>
    <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_OutputMessage">
      <wsdl:part name="part" type="xsd:anyType"/>
    </wsdl:message>

    … would become something like:

    <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_InputMessage">
      <wsdl:part name="part"
                          element="ssc:SupplierSearchCommand"  
                          xmlns:ssc="http://schemas.acme.co.uk/suppliersearchcommand/1.0" />
    </wsdl:message>
    <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_OutputMessage">
      <wsdl:part name="part" 
                         element="sd:SuppliersDocument" 
                         xmlns:sd="http://schemas.acme.co.uk/suppliersdocument/1.0"/>
    </wsdl:message>

This WSDL can now be added as a service reference in client solutions.


Monday, July 5, 2010 #

source: http://geekswithblogs.net/charliemott

Roy Osherove on his blog and in his book gives guidance on the naming of unit test methods. For use with BizUnit end-to-end integration tests, I have extended these recommendations below. Implementing these conventions has the following benefits:

  • Makes it easy to understand the purpose of the test.
  • Makes it easier to find specific tests.
  • Gives a visual feel for integration use case test coverage.

Hub-and-Spoke Solutions

For hub-and-spoke solutions, an integration use case can typically be identified by the source system and the message type.  As such, the following pattern is recommended.

Format:   SourceSystem_MessageType_MessageScenarioProperties_ExpectedBehaviour()

Sample:   Maximo_Invoice_Valid_Success()
Sample:   Maximo_Invoice_InvalidStructure_ValidationExceptionHandled()

Integration use cases that implement a convoy pattern may need a slightly different structure. There may be different source systems and\or different message types.  If the source system is different, replace with the orchestration name.  If the message types are different, exclude message type details.

Format:   ConvoyOrchestrationName_Scenario_ExpectedBehaviour()

Sample:   JournalConvoy_Valid_Success()
Sample:   JournalConvoy_SingleValidHeaderMessage_TimeoutExceptionHandled()

ESB Toolkit Solutions

In ESB Toolkit solutions, an integration use case can typically be identified by the rules that are used to select which itinerary to use. A rule might evaluate: an xpath value (message type, status value, message type version); receive location (WCF, File); etc.

This variety makes it more difficult to enforce consistency in test names. As such, the following is recommended. The test method names are prefixed with the message type for the same reasons as above. Rules are delimited by  "And". The rule list excludes the prefixed source system and message type. Try to be consistent in the ordering of the rules. e.g. (1) Version (2) Status.

Format:    MessageType_Action_OnRampLocationType_MessageStatus_ExpectedBehaviour

Sample:   Customer_Create_WsHttp_Valid_Success()

Remember, you can write more fine-grained BizUnit tests for the BRE itinerary selection rules without re-testing the whole itinerary. See the Business Rule Engine BizUnit Test Steps codeplex project for guidance here.  Also see Mike Stephenson's blog article on XPath query testing.

Code Analysis

I realise that the above recommended conventions breach the Microsoft Code Analysis Naming rule "CA1707: Identifiers should not contain underscores". However, I feel it is worth disabling this rule for test projects to gain the benefits described above.


Sunday, March 14, 2010 #

Article Source: http://geekswithblogs.net/charliemott

This article describes an approach to the management of cross reference data for BizTalk.  Some articles about the BizTalk Cross Referencing features can be found here:

Options

Current options for managing this data include:

However, there are the following issues with the above options:

  • The 'BizTalk Cross Referencing Tool' requires a separate database to manage.  The 'XRef XML Creation' tool has no means of persisting the data.
  • The 'BizTalk Cross Referencing tool' generates integers in the common id field. I prefer to use a string (e.g. acme.petshop.country.uk). This is more readable. (see naming conventions below).
  • Both UI tools continue to use BTSXRefImport.exe.  This utility replaces all xref data. This can be a problem in continuous integration environments that support multiple target BizTalk groups (even different clients).  If you upload the data for one client/group/application it would destroy the data for every other application in that group.  Yet in TFS, where builds run concurrently, this could break unit tests.

Alternative Approach

In response to these issues, I instead use simple SQL scripts to directly populate the BizTalkMgmtDb xref tables, combined with a data namespacing strategy to isolate client\application data.

Naming Conventions

All data keys use namespace prefixing.  The pattern will be <company name>.<biztalk group and\or application>.<data type>.  The data must follow this pattern to isolate it from other company\group cross-reference data.  The naming convention I use is lower casing for all items. 

The table below shows some sample data.  (Note: this data uses the 'ID' cross-reference tables.  The same principles apply for the 'value' cross-referencing tables).

  • xref_AppType.appType - Application Types.  Sample data: acme.petshop.erp, acme.petshop.portal, acme.petshop.assetmanagement, acme.petshop.ordermanagement.
  • xref_AppInstance.appInstance - Application Instances (each will have a corresponding application type).  Sample data: acme.petshop.dynamics.ax, acme.petshop.dynamics.crm, acme.petshop.sharepoint, acme.petshop.maximo, acme.petshop.oms.
  • xref_IDXRef.idXRef - Holds the cross reference data types.  Sample data: acme.petshop.vatcode, acme.petshop.country.
  • xref_IDXRefData.CommonID - Holds each cross reference type value used by the canonical schemas.  Sample data: acme.petshop.vatcode.exmpt, acme.petshop.vatcode.std, acme.petshop.country.usa, acme.petshop.country.uk.
  • xref_IDXRefData.AppID - Holds the value for each application instance and each xref type.  Sample data: UK, GB, 44.

Scripts

The data to be stored in the BizTalkMgmtDb xref tables will be managed by SQL scripts stored in a database project in the visual studio solution.

  • Build.cmd - A sqlcmd script to deploy data by running the SQL scripts below. (This can be run as part of the MSBuild process.)
  • acme.petshop.purgexref.sql - SQL script to clear acme.petshop.* data from the xref tables.  As such, this will not impact data for any other client/group.
  • acme.petshop.applicationInstances.sql - SQL script to insert application type and application instance data.
  • acme.petshop.vatcode.sql, acme.petshop.country.sql, etc. - A separate SQL script to insert each cross-reference data type and the application-specific values for that type.

Tuesday, February 16, 2010 #

Article Source: http://geekswithblogs.net/charliemott

There are various blog articles that give sample .NET code that can be used to validate a message against a schema from a BizTalk orchestration.  These include: msdn, haloscan.com, biztalkgurus.com, eggheadcafe.com and Sujan Turlapaty.

Many of these blogs have subsequent comments about problems.  Under high loads, I too began to see these classes return “false positives” in my test environment.  (i.e. An XmlSchemaValidationException is being thrown against valid messages. When the messages are re-submitted individually, they pass validation without problems.)

Doing a bit more research into this, I found this Microsoft XML Team's WebLog article about the lack of thread safety of the XmlSchema and XmlSchemaSet classes. 

This article also states that "for some reason, this "breaks" more on 64-bit machines, but it's unsafe on all architectures".  This would explain why I only began to see the problem when we started testing on 64-bit servers.  I never experienced the problems on my 32-bit development or build machines.  Nor could I break the code in a unit test using Roy Osherove's ThreadTester.  So, another lesson learnt - always develop on the same OS type as the target machines.

To circumvent this problem, I am now calling a validation pipeline from the orchestrations. (I am using a pipeline with Saravana Kumar's Extended XML Validation Pipeline Component so that exceptions report all the validation errors, not just the first). This is working well. However, it does mean that I have had to set the orchestrations to run as long-running transactions.

Update (19/02/2010): Both the out-of-the-box validation pipeline component and Saravana Kumar's Extended XML Validation Pipeline Component, use the deprecated XmlSchemaCollection class. This class is more thread safe than the replacement XmlSchemaSet class.  As such, rather than the above solution of calling a pipeline from an orchestration, another solution would be to modify the validation method to use the XmlSchemaCollection instead of the XmlSchemaSet class.
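For illustration, a validation helper along the lines of the update above might look roughly like this (a sketch using the deprecated XmlSchemaCollection/XmlValidatingReader classes; the class name, schema namespace and path parameters are illustrative):

    using System.Collections.Generic;
    using System.IO;
    using System.Xml;
    using System.Xml.Schema;

    public static class MessageValidator
    {
        // Validates the XML in 'messageStream' and returns all validation errors found.
        public static IList<string> Validate(Stream messageStream, string schemaNamespace, string schemaPath)
        {
            var errors = new List<string>();

            var schemas = new XmlSchemaCollection();
            schemas.Add(schemaNamespace, schemaPath);

            var reader = new XmlValidatingReader(new XmlTextReader(messageStream));
            reader.ValidationType = ValidationType.Schema;
            reader.Schemas.Add(schemas);
            reader.ValidationEventHandler += (sender, e) => errors.Add(e.Message);

            while (reader.Read())
            {
                // Reading the whole document triggers a validation callback for every error.
            }

            return errors;
        }
    }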


Thursday, July 23, 2009 #

Article Source: http://geekswithblogs.net/charliemott

The "HTML Generator StyleSheet for BizTalk Maps" originally posted by Steve Hart is now on CodePlex.

http://biztalkmapdoc.codeplex.com

I have uploaded a new version that includes the following updates (not all of this was developed by me):
 

  • output of label data.
  • output of constants used.
  • updated table layout.
  • separate tables for each BizTalk page.

I've also included details of how this can be run in an MSBuild process.
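One way to run the stylesheet from MSBuild is the built-in XslTransformation task. Below is a minimal sketch of that idea (the target name, stylesheet path and folder layout are illustrative):

    <Target Name="GenerateMapDocumentation" AfterTargets="Build">
      <!-- Transform each BizTalk map (.btm is XML) into an HTML document using the stylesheet. -->
      <ItemGroup>
        <BizTalkMaps Include="Maps\**\*.btm" />
      </ItemGroup>
      <XslTransformation XslInputPath="Tools\BizTalkMapDocumenter.xsl"
                         XmlInputPaths="@(BizTalkMaps)"
                         OutputPaths="@(BizTalkMaps->'Docs\%(Filename).html')" />
    </Target>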


Wednesday, May 13, 2009 #

Article Source: http://geekswithblogs.net/charliemott

This article describes our approach to testing BizTalk integration with Dynamics AX 2009.  It builds on the "Alternative Bindings" approach as described by Mike Stephenson.  
 

Alternative Bindings

We are communicating asynchronously with Dynamics.  As such, in our developer / unit test bindings, we have replaced use of the AIF Adapter with the MSMQ Adapter.   If you are communicating synchronously, you could use the WCF Adapter.
 

Mimic the Dynamics AIF Adapter

We also need to mimic the actions of the AIF Adapter.  To do this, we have used 2 test pipelines and a test schema in our developer bindings:
 

  • We created a test implementation of the AX envelope schema (AxEnvelope.xsd).  This has the same namespace and structure as the actual AX schema.  The difference is that our test schema has all the Header fields as promoted properties. 
[Screenshot: test AX envelope schema with promoted properties]
  • On the send side, our send ports use our MimicDynamicsAifAdapterSend pipeline.  This has an XML Assembler component.  This wraps the outbound messages in the AxEnvelope envelope.  With property demotion, the header fields are set from the context properties (as set in our orchestrations). This enables us to test these values in our BizUnit validation steps.
     
  • On the receive side, our MimicDynamicsAifAdapterReceive pipeline has an XML Disassembler that strips off the envelope.  The header fields are promoted to the message context properties.  In particular, the MessageId envelope header field is required to be added to the message context so the response messages can be correlated to the message sent to Dynamics. The disassembler references the schemas: DynamicsAx5.Fault, DynamicsAx5.EntityKeyList, DynamicsAx5.EntityKey and all other message types we are expecting from Dynamics.

Stub Dynamics

In order to mimic Dynamics sending a response to messages we send to Dynamics, we also built a custom BizUnit test step (AxSendResponseStep). 

The response message can be a valid response (DynamicsAx5.EntityKeyList) or an invalid response (DynamicsAx5.Fault) as specified in the step parameters.  In the Dynamics response message, the MessageId is replaced with the same MessageId in the received message.  This ensures the Dynamics response messages can be correlated by BizTalk orchestrations.

Update: 03/04/2011 - This approach only applies to integration with AX4 and AX5.  These versions provide the AIF Adapter.  In AX6 (AX2012), services are exposed as WCF services.  As such, regular approaches to stubbing out WCF services can be used.


Monday, April 20, 2009 #

Article Source: http://geekswithblogs.net/charliemott

Ever wondered what the differences are between "id" and "value" cross referencing in BizTalk?

Functional

The functional difference is documented on msdn.

... The cross reference ID methods can be used to establish and lookup the relationship between the IDs used by the two systems. ... the cross reference value methods can be used to translate the value used by one system to the value used by the other system.

I had no idea what this meant.  So digging a bit under the covers to understand this reveals the following:

Id cross referencing supports one-to-one bi-directional mapping data. In Id cross referencing, there is a one-to-one relationship of cross reference data between applications. This is enforced by the database table constraints.

Value cross referencing supports one-way many-to-one mapping data. For example, Germany, France or Spain can all be mapped to Europe. The tables for Value cross referencing are similar to the Id tables apart from the constraints. As a result, value cross referencing can only be guaranteed in a single direction. For example, you cannot map Europe to a single country.

Technical

A technical difference is identified by Michael Stephenson:

In ID cross referencing it will hit the database everytime....
VALUE cross referencing implements a simple cache....if you update data in the database you should restart each host instance that uses it before the changes will take effect.