January 2008 Entries

If you are getting an error when attempting to access your ASP.NET application that looks like:

Parser Error Message: Could not load file or assembly 'System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.

You are probably missing ASP.NET AJAX v1.0!  
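Beyond installing the ASP.NET AJAX Extensions 1.0, it is worth confirming the assembly is actually referenced in your application's web.config. A minimal sketch of the relevant section, using the exact version and public key token from the error above (the surrounding elements are the standard web.config layout):

```xml
<configuration>
  <system.web>
    <compilation>
      <assemblies>
        <!-- Must match the assembly named in the parser error -->
        <add assembly="System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
      </assemblies>
    </compilation>
  </system.web>
</configuration>
```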


Overall, when an incoming XML message contains multiple potential single messages, the extraction process needed to separate them is thought of as "splitting" or "shredding" the message. The splitter pattern, then, is a reliable, uniform way to address splitting/shredding throughout your applications.

An example of why your application might need to split messages: a single input XML message containing multiple pension fund benefits for multiple persons, which must become multiple output XML messages, each one a single transformed XML message grouping one person's pension benefits.

There are alternative ways to successfully shred the message, but the pattern I like best can be tested by:

  • Developing an orchestration test harness that picks up the message "as is"
  • Developing the mappings for splitting off the single messages
  • Identifying which element under which node will demarcate a single message (e.g. social security number, beneficiary id, etc.)
  • Developing an XSL map that isolates only the unique identifiers
  • Creating a new map with a Scripting functoid; under its properties, choose the Inline XSLT template, then cut and paste in the XSL you created earlier
  • Cycling through the unique identifiers by reading the XPath info while iterating within a Loop shape
  • Adding an Expression shape to build the single messages
  • Outputting the resultant single messages to a send port
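To make the XSL step concrete, here is a minimal sketch of the kind of inline XSLT you might drop into the Scripting functoid. The node names (Benefits, Benefit, SSN) are hypothetical stand-ins for your own schema; the template does nothing but emit the demarcating identifiers so the Loop shape has something to iterate over:

```xml
<!-- Hypothetical input: /Benefits/Benefit/SSN demarcates one single message -->
<xsl:template match="/">
  <UniqueIds>
    <xsl:for-each select="/Benefits/Benefit">
      <Id><xsl:value-of select="SSN"/></Id>
    </xsl:for-each>
  </UniqueIds>
</xsl:template>
```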

Matt Meleski has a very good example of how to actually implement this pattern using a map and an orchestration.


Of course, I may like this pattern best because Matt has done such an exemplary job sharing his implementation!

Darren Jefford also has a way cool pattern for splitting messages which relies on the schema element of the envelope and, you guessed it, XPath! Check out his blog:






Simply, JAR stands for Java ARchive. It's a compressed archive, like a ZIP (Windows) or a tarball (Unix/Linux), with multiple other files contained within it; in fact, a JAR uses the ZIP format itself. So, in the case of *.jar files, what you're compressing are the classes (and resources) that your application/framework needs.
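Since a JAR really is a ZIP archive underneath, you can prove the point in a few lines of Java: build a .jar with the java.util.jar API, then read it back with the generic java.util.zip API. The file and entry names here are made up for the demo:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.jar.JarEntry;
import java.util.jar.JarOutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class JarDemo {
    public static void main(String[] args) throws IOException {
        File jar = new File("demo.jar");
        // Write one entry into a JAR, just as a build tool would.
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar))) {
            out.putNextEntry(new JarEntry("com/example/Hello.txt"));
            out.write("hello".getBytes("UTF-8"));
            out.closeEntry();
        }
        // Read it back with the plain ZIP API: no JAR-specific code needed.
        try (ZipFile zip = new ZipFile(jar)) {
            ZipEntry entry = zip.getEntry("com/example/Hello.txt");
            System.out.println(entry.getName() + " " + entry.getSize());
            // prints: com/example/Hello.txt 5
        }
    }
}
```

Any ordinary ZIP tool (WinZip, unzip) can open the resulting demo.jar just as well.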

...now...that was the very short version... if you're looking for a much richer explanation...
jump over to Behrouz Fallahi's page... (http://www.devx.com/tips/Tip/13397)  
During the heyday of Microsoft's Visual C++ development (read: late 80's - early 90's) this naming convention was talked about a lot. In essence, the convention prefixes each variable name with letters that denote the data type, followed by a shortened descriptive name, with each word of the description beginning with a capital letter. Say you needed a variable to hold the string value of a book title: you might choose a name like strBookTitle, or maybe just strTitle.

Today, there's a whole lot less talk about the convention, mainly because newer standards from the Java and .NET worlds have eclipsed it. However, if for some reason you find yourself delving into a legacy code conversion project written in VC++, you will undoubtedly find yourself immersed in Hungarian Notation!:-)

I almost forgot the key as to why the naming convention is called "Hungarian Notation": Hungarian-born Dr. Charles Simonyi of Microsoft invented it!:-) As an aside, he was a tourist in space on board Soyuz TMA-10, arriving at the International Space Station (ISS) in April 2007. How cool is that?   

For the record, here's a list of the notation's prefixes:

  • by - byte
  • c - character (single)
  • d - double
  • dw - DWORD (unsigned long)
  • fn - function pointer
  • g_ - global type
  • h - handle
  • hdc - handle to a windows device context
  • hwnd - windows handle
  • i - integer
  • I - interface
  • l - long
  • lp - long pointer
  • lpstr - long pointer to a string
  • m_ - class member
  • n - number or integer
  • p - pointer
  • str - string
  • sz - pointer to the first character of a zero-terminated string
  • ui - unsigned integer
  • v - void
  • w - WORD  (unsigned short)
  • X - Nested class
  • x - instantiation of a nested class
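Purely as an illustration (Hungarian Notation is a C/C++-era convention, and these names are invented for the demo), here is what prefix + CapitalizedDescription looks like applied to a few locals, written in Java for brevity:

```java
public class HungarianDemo {
    public static void main(String[] args) {
        String strBookTitle = "Dune";  // str = string
        int nPageCount = 412;          // n  = number or integer
        double dPrice = 9.99;          // d  = double
        System.out.println(strBookTitle + ": " + nPageCount + " pages, $" + dPrice);
        // prints: Dune: 412 pages, $9.99
    }
}
```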

There are several circumstances that will produce this output (or an exception, in some cases) in .NET; the primary reason is a mismatch between the expected format and the cast taking place in your application. An example would be attributes bound to LDAP that do not readily cast to string.

Here is a quick solution (one that has its roots in VC++) for successfully casting this datatype to string:

// "memberOf" values come back from the directory search as byte arrays,
// so cast first, then convert byte by byte
byte[] myByteArray = (byte[])result.Properties["memberOf"][counter];
string myString = "";
foreach (byte b in myByteArray)
{
    char singleChar = Convert.ToChar(b);
    myString += singleChar.ToString();
}

I just received some feedback from Joe. He was kind enough to offer alternative code:


    byte[] bytes = Encoding.ASCII.GetBytes("This is a test");
    String s = Encoding.ASCII.GetString(bytes);


If you are getting the message "The test form is only available for requests from the local machine", it is probably because you are testing the web service from the remote box you just migrated it to!

The quick solution to that is to follow the advice of Juan Ignacio Gelos...

...and do the following:

1. Edit the web.config file for your web service application. Add or edit (the two <add> elements belong under system.web/webServices/protocols):


            <system.web>
              <webServices>
                <protocols>
                  <add name="HttpGet"/>
                  <add name="HttpPost"/>
                </protocols>
              </webServices>
            </system.web>

So, the next question you may have is: why does the web.config file that gets generated automatically when building a new web service default to excluding the HttpGet/HttpPost web service protocols? In a nutshell, Microsoft decided that, for security reasons, it would be more practical to disable the feature. For more details, please visit MS: http://support.microsoft.com/default.aspx/kb/819267

I am currently working on a manuscript about business rule engines: their purpose in large-scale enterprise integration projects, their role in SOA architectures, and their untapped capability to enrich data warehouse-based intelligence delivery.

Today, I am outlining the criteria for comparing two divergent products: TIBCO's iProcess Decisions (part of the BPM product suite) and Microsoft's Business Rule Engine (BRE) (installed with BizTalk Server).

For the nuts and bolts of this first comparison, I will be using the same SQL Server 2005 database instance.

From what I've observed, databases have loosely become de facto rule engines for many large organizations. Frequently, the only persistent objects within an enterprise are a growing network of federated databases, rich with evolving triggers, stored procedures, SQL Agent jobs, etc. After just a few hours of analyzing the types of stored procedures, triggers, and so on, it becomes obvious that many of the business processing rules are firmly embedded in the treatment/manipulation of the data through updates, inserts, and deletes. Aside from valid data-integrity reasons for triggers and stored procedures, one will find a myriad of status value changes, aggregates, date comparisons, etc. In essence, the clear application of business rules to business processes, aptly described by TIBCO as an event cloud.

I like TIBCO's description of an "event cloud", because without the clarity of business rules as applied to specific, repeatable processes in an organization, a cloud is just what it is!