Totzkeeeeee's Blog

Just because I can...

Bruce Johnson of Object Sharp writes about a puzzling performance issue that took a little head-scratching to solve.  In the end it came down to a data type mismatch between the parameter variable and the field definition.  The parameter was nvarchar(40) and the field was defined as char(36).  SQL Server was converting the field values to nvarchar(40) instead of converting the single parameter value to char(36).

It makes sense that it would convert the char(36) to nvarchar(40): converting the other way would mean a loss of information, not only because of the shorter field length but also because of the conversion from Unicode to single-byte character data.
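The situation can be sketched in a few lines of T-SQL.  The table, column, and parameter names below are made up for illustration; the key point, which the SQL Server documentation calls data type precedence, is that nvarchar outranks char, so the column gets converted on every row rather than the parameter being converted once:

```sql
-- Hypothetical table: Id is char(36), e.g. a GUID stored as text
CREATE TABLE Orders (
    Id char(36) NOT NULL PRIMARY KEY  -- indexed
);

-- Parameter declared as nvarchar(40): a silent type mismatch
DECLARE @Id nvarchar(40);

-- nvarchar has higher precedence than char, so SQL Server wraps
-- the Id column in an implicit CONVERT for every row.  That defeats
-- the index and forces a scan.
SELECT * FROM Orders WHERE Id = @Id;

-- Fix: declare the parameter to match the column's type exactly
DECLARE @IdFixed char(36);
SELECT * FROM Orders WHERE Id = @IdFixed;
```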

I faced a similar situation some time ago with a join.  The join fields were both indexed and of the same data type, and yet a query involving the join was very slow.  Showplan (this was SQL 6.5) confirmed my suspicion that iterative table scans were happening, but I couldn't figure out why.  For some reason SQL Server was ignoring the index on the fields.  We rebuilt the indexes, removed and re-added the indexes, ran an update statistics (again, a 6.5 thing), restarted the SQL service to clear the caches, removed and recreated the join, and so on.  Nothing had any effect.

After much head banging and hair pulling I discovered that the join fields were indeed of the same data type, but of different precision and scale: for example, FieldA was decimal(8, 4) and FieldB was decimal(8, 2).

Once we corrected the mismatch, the query sprang to life.
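In modern T-SQL syntax the problem and the fix look something like this (table and column names are hypothetical; on 6.5 the column change meant rebuilding the table rather than a one-line ALTER):

```sql
-- Hypothetical tables joined on decimal columns of different scale:
-- TableA.FieldA is decimal(8, 4), TableB.FieldB is decimal(8, 2).
-- The mismatch forces an implicit conversion on the join predicate,
-- so the optimizer can't use the indexes and falls back to table scans.
SELECT a.*, b.*
FROM TableA a
JOIN TableB b ON a.FieldA = b.FieldB;

-- Fix: make precision and scale match on both sides of the join
ALTER TABLE TableB ALTER COLUMN FieldB decimal(8, 4);
```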

Dave
Just because I can...

posted on Thursday, August 17, 2006 11:44 AM