BizTalk: Complex decoding in data transformations

Sometimes we need to perform complex decoding in data transformations. This happens especially in large EDI documents such as HIPAA.

Let’s start with examples.

In the first example we need to decode a single field. We have the source codes and the target codes for this field. The number of codes on both sides is small, and the mapping is one-to-one or many-to-one (1-1, M-1). One of the simplest solutions is to create a Decode .NET component. Store the code table as a dictionary, and decoding will be fast. We can hard-code the code table if the codes are stable, or cache it, reading it from a database or a configuration file.
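The dictionary-based Decode component can be sketched as follows. The sketch is in Python rather than .NET, and the code table and values are hypothetical, but the idea is the same: one O(1) lookup per field.

```python
# Hypothetical code table: source code -> target code (1-1 and M-1).
GENDER_CODES = {
    "M": "1",
    "F": "2",
    "U": "9",
    "X": "9",   # many-to-one: two source codes map to the same target
}

def decode(source_code: str, default: str = "9") -> str:
    """Dictionary lookup is O(1), so decoding stays fast even in big maps."""
    return GENDER_CODES.get(source_code, default)
```

In a real BizTalk solution this dictionary would live in a .NET component, loaded once (hard-coded or read from a database/config file) and reused across transformations.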

The next example is at the opposite end of the complexity scale. Here we need to decode several fields, where target codes depend on several source codes/values (M-1). This is not simple value-to-value decoding; it also involves several if-then-else conditions, which opens a can of worms with 1-M and M-M cardinality.
Moreover, the code tables are big and cannot be held in memory.

We can implement this decoding with numerous database calls to look up the target codes, performing these calls inside a map or inside a .NET component. As a result, each document transformation calls the database many times.

But there is another way to implement this without flooding the database with calls. I call this method “SQL Decoding”.

We might remember that SQL operations are set operations that work with relational data directly. SQL Server is very powerful at executing them. Set operations are so powerful that we might decode all fields in a single operation. This is possible, but all source values must be in the database at that time. All we have to do is load the whole source document into SQL tables. Hence our method is:

  1. Load the source message to SQL database.
  2. Execute the decoding as a SQL operation or a series of operations.
  3. Extract the target message back from SQL.

We can do all structural transformations in maps (XSLT) and perform only the decoding in SQL. Or we can also do some structural transformations in SQL. It is up to us.
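The three steps can be sketched like this. The sketch uses SQLite in place of SQL Server for a self-contained example, and the table, column, and code names are hypothetical; the point is that step 2 decodes every record with a single JOIN instead of one database call per record.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Step 1: load the source message fields into a staging table.
cur.execute("CREATE TABLE source_msg (record_id INTEGER, proc_code TEXT)")
cur.executemany("INSERT INTO source_msg VALUES (?, ?)",
                [(1, "A100"), (2, "A200"), (3, "A100")])

# The big code table lives in the database, not in application memory.
cur.execute("CREATE TABLE code_map (src TEXT, tgt TEXT)")
cur.executemany("INSERT INTO code_map VALUES (?, ?)",
                [("A100", "X1"), ("A200", "X2")])

# Step 2: decode all records in one set operation (a single JOIN).
cur.execute("""
    SELECT s.record_id, m.tgt
    FROM source_msg s JOIN code_map m ON s.proc_code = m.src
    ORDER BY s.record_id
""")

# Step 3: extract the decoded values back for the target message.
decoded = cur.fetchall()
print(decoded)  # [(1, 'X1'), (2, 'X2'), (3, 'X1')]
```

In a real implementation, steps 1 and 3 would shred the XML message into tables and compose it back (for example with SQL Server's XML features), and the if-then-else conditions would become CASE expressions or additional joins in step 2.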

The Pros of this implementation are:

  • It is fast and highly optimized.
  • It does not flood the database with numerous calls.
  • It nicely utilizes the SQL engine for complex decoding logic.

The Cons are:

  • Steps 1 and 3 may not be simple.

In real life we usually don’t have a clear separation between these scenarios, and intermediate solutions can be handy. For example, we can load and extract not the whole message but only the part of it related to decoding.

Personally, I use SQL Decoding in the most complex cases, where the mapping takes more than two days of development.


  • If you are familiar with LINQ, you can avoid steps 1 and 3 and execute the set operations directly on the XML document. I personally prefer LINQ, but if the XML document is really big, the SQL approach works better.
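The LINQ idea, sketched here in Python with the standard XML library rather than in C#: treat the document as a set and decode all elements in one pass, keeping the code table in memory. The element and attribute names are hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical in-memory code table.
CODE_MAP = {"A100": "X1", "A200": "X2"}

doc = ET.fromstring(
    "<claims><claim code='A100'/><claim code='A200'/></claims>")

# One set operation over all elements, no per-record database calls.
for claim in doc.iter("claim"):
    claim.set("code", CODE_MAP.get(claim.get("code"), "UNKNOWN"))

print(ET.tostring(doc, encoding="unicode"))
```

This avoids the load/extract steps entirely, at the cost of holding the document and code tables in memory, which is exactly why the SQL approach wins on very large documents.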

Conclusion: if we need complex decoding/encoding of an XML document, consider using set operations with SQL or LINQ.

Posted on Saturday, January 11, 2014 9:30 AM

