

Here is something interesting that came up in a newsgroup thread a little while ago that I thought merited some more explanation.

Have you ever seen “the operation has been cancelled” errors from a long running query (one that takes more than 30 seconds) while a cube or partition is being processed?

What you may be seeing is “by design” behaviour, where a pending commit lock will cancel long-running queries that are blocking it from completing. There is a property called ForceCommitTimeout that controls this behaviour; it can be set both at the server level and on the processing command, and it defaults to a value of 30 seconds.
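At the server level this property is normally changed through the Analysis Server Properties dialog in Management Studio, but it ultimately lives in the server configuration file. As a rough sketch (the exact placement of the element within msmdsrv.ini can vary between versions, so treat this as illustrative rather than definitive):

```xml
<!-- Sketch of a fragment of msmdsrv.ini: the server-wide default for
     ForceCommitTimeout, expressed in milliseconds (30000 = 30 seconds). -->
<ConfigurationSettings>
  <ForceCommitTimeout>30000</ForceCommitTimeout>
</ConfigurationSettings>
```

Edits to msmdsrv.ini should be made with care; the supported route is the server properties dialog (with "Show Advanced (All) Properties" enabled).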

Books Online has the following description of ForceCommitTimeout:

A signed 32-bit integer property that defines the timeout, in milliseconds, before a commit should cancel other commands that preceded the current command, including queries in process.

The default value for this property is zero (0), which indicates that other commands will not be forced to timeout when the current command times out.

As you can see from the screenshot below, this is not entirely accurate. The default value is actually 30 seconds (30000 milliseconds). I’m not sure if this changed with SP2, but it might explain the couple of recent threads on the newsgroup.

To understand more about this setting we need to answer the question: what exactly is going on during processing?

  • When an object is processed, a new version of the object is created, and this new version is where the processing actually takes place.

Take, for instance, the Reseller dimension. The first time you process this dimension you will see files in the dimension folder with names like 1.(All).astore, where the 1 stands for the version of the dimension. The next time you process the dimension you will see files with version 2; the file above will change to 2.(All).astore. So during the commit operation the old set of object files is discarded and the new files start being used by Analysis Services.

The DDL definition of the Reseller dimension, which is kept in the Dim Reseller.1.dim.xml file, will go away and you will see a new file created, Dim Reseller.2.dim.xml. If you look inside this file, you will see <ObjectVersion>2</ObjectVersion>.

  • When the processing is complete, a “pending commit” lock is placed on the object that was being processed (and any dependent objects). Any queries currently executing against the object in question will be allowed to continue; any new queries will be queued up until the current version can be swapped out for the new version.
  • If the ForceCommitTimeout has expired and there are queries still executing, they are cancelled, the old version is swapped out for the new one, and any queries that were queued up are allowed to commence.

This means that increasing the ForceCommitTimeout will give currently executing queries more time to complete. But it also means that queries issued soon after the “pending commit” lock was taken will be stalled for up to the timeout period before they even start to execute. This will result in the perception of inconsistent performance: with the default 30-second timeout, a query that normally takes 25 seconds but arrives just after the lock is taken could wait almost 30 seconds in the queue before running for its usual 25 seconds, taking nearly twice as long to produce results.
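If you want to give queries more time for one particular processing run rather than changing the server default, the property can, as I understand it, be passed in the XMLA PropertyList of the processing request. A hedged sketch, with the database and dimension IDs borrowed from the Reseller example above (your actual object IDs will differ):

```xml
<!-- Sketch: ProcessFull of the Reseller dimension with a per-command
     ForceCommitTimeout of 60 seconds (60000 ms). Object IDs are illustrative. -->
<Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
  <Command>
    <Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Object>
        <DatabaseID>Adventure Works DW</DatabaseID>
        <DimensionID>Dim Reseller</DimensionID>
      </Object>
      <Type>ProcessFull</Type>
    </Process>
  </Command>
  <Properties>
    <PropertyList>
      <ForceCommitTimeout>60000</ForceCommitTimeout>
    </PropertyList>
  </Properties>
</Execute>
```

A per-command value like this only affects the commit of this processing operation, which makes it handy for the occasional large reprocess without changing behaviour for routine jobs.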

The “pair” to this setting is CommitTimeout, which you could use to cause the commit of the processing operation to time out and roll back instead, allowing currently executing queries to continue to completion. I can’t really see people using this option nearly as much as ForceCommitTimeout, as you are basically saying that queries are more important than the processing operation.
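For completeness, my understanding is that CommitTimeout can be supplied the same way, in the PropertyList of the processing request. In this hedged sketch it is the commit itself that gives up after 60 seconds and rolls back, leaving the old version of the object in place:

```xml
<!-- Sketch: if queries are still running 60 seconds after the pending commit
     lock is requested, the commit times out and the processing is rolled back.
     Value is in milliseconds; 0 (the default) means wait indefinitely. -->
<Properties>
  <PropertyList>
    <CommitTimeout>60000</CommitTimeout>
  </PropertyList>
</Properties>
```

Choosing between the two comes down to which side loses the race: with ForceCommitTimeout the queries are cancelled, with CommitTimeout the processing work is thrown away.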

(If you are interested, the full thread that sparked the idea for this post can be found here:

Thanks to Edward for his assistance with some of the details of this post.


This article is part of the GWB Archives. Original Author: Darren Gosbell
