

Monday, April 6, 2015

Aurelia comes to Montreal !

On Monday, April 13th, if you are in the Montreal region you have the opportunity to see the next generation of web apps in action with Rob Eisenberg and his new product: Aurelia. This event already has over 125 registered participants; hurry up, space is limited.

Sunday, July 6, 2014

.NET hacks you shouldn’t be using in your code


So I’m looking at an application that I need to update for a client and I stumble on a piece of code that works (no denying it works) but shouldn’t be so obscure. I know we’ve all written sub-par code at some point in our lives, and to some extent we sometimes still do for various reasons, laziness being the most common IMO.

The functionality is a classic: telling whether a description starts with a particular string; more precisely, in this case there is a list of potential search patterns to validate. Now there is a catch: if a certain flag is raised, the search must return TRUE even if the description does not start with any of the search patterns in the list. I know it’s weird, but that’s the business of this app. Most devs would have either used LINQ or written a simple loop that iterates through the list of search patterns and does a StartsWith() on the description. Don’t forget the special flag though! Easy: we’ll simply skip the whole loop and return true if the flag is raised.

A loop is exactly what they did, but I couldn’t find where the flag was managed. It wasn’t inside the function around the loop, and it wasn’t around the only call to the function to skip it. At first I thought the dev had forgotten the flag, but when I tested the app it really did behave correctly when I clicked the checkbox associated with that flag. So I searched for the flag (Find All Usages). Sure enough, I found it a few hundred lines of code north of the call to the search method, and it was the only place in the code base where the flag was actually used. I knew I was at the right place, but what I saw baffled me. Remember that list of search patterns? Well, when the flag was raised, the dev simply added an empty string to the list of search patterns. You all should have seen my face at that moment; I was flabbergasted.

There were no comments in the code, so I instinctively jumped to MSDN’s page on the StartsWith method and, sure enough, the Remarks section states that StartsWith returns true when an empty string is passed in. I’m still in shock as I’m writing this. Being a big fan of the “if it ain’t broke, don’t fix it” principle, I did not correct that “intentional code obfuscation”, but I commented it and later read a nice piece of info on SO regarding this.
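The behavior is easy to verify on its own. Here is a minimal sketch of the hack described above; the pattern strings and the flag are invented for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        // Per the MSDN remarks, every string starts with the empty string.
        Console.WriteLine("Total eclipse".StartsWith(String.Empty)); // True

        // The hack from the post: adding "" to the pattern list makes the
        // search match everything, emulating the "always return true" flag.
        var patterns = new List<string> { "Solar", "Lunar" };
        bool flagRaised = true; // hypothetical business flag
        if (flagRaised)
            patterns.Add(String.Empty);

        bool matches = patterns.Any(p => "Total eclipse".StartsWith(p));
        Console.WriteLine(matches); // True, even though no real pattern matches
    }
}
```

It works, but as the post argues, a loop with an explicit early return for the flag would have said the same thing without the archaeology.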

Happy coding all, and remember: when that urge to use an obscure coding hack arises in the future, think about poor ol’ Vince having to debug this code a few years from now and... resist!

Saturday, June 7, 2014

Tricky situation with EF and Include while projecting (Select / SelectMany)


The other day I stumbled on a problem I’d had a while back with EF and Include method calls, and decided this was it: I was going to blog about it. That would sort of pin it inside my head and maybe help others too!

I was using a DbContext and wanted to query a DbSet, include some of its associations, and in the end do a projection to get a list of Ids. At first it seems easy: query your DbSet, call Include right afterward, then code the rest of your statement with the appropriate Where clause and do the projection. Well, it wasn’t that easy, because my query required that I code my Where on some entities a few degrees further down the association chain, and most of those links were “many”. I had to do my projection right away with the SelectMany method. So I wrote my query and tested it... no associations were loaded. My Include call was simply ignored!

Then I remembered this behavior and how to get it to work: you need to move the Include AFTER your first projection (Select or SelectMany). So my sequence became: query the DbSet, do the projection with SelectMany, Include the associations, code the Where clause, and do the final projection. But it wouldn’t compile! It kept saying that it could not find an Include method on an IQueryable, which is perfectly true! I knew this should work, so I went to the definition of DbSet and saw it inherits from DbQuery, and sure enough the Include method was there. So I had to cast my statement, from the start until the end of the first projection, to a DbQuery, then do the Includes and then the rest of my query.

Bottom line: whenever your Include calls seem to be ignored, you may need to move them further down in your query and cast your statement to whatever class gives you access to Include.
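A sketch of the shape of the fix. The model (Customers with many Orders, Orders with many Lines) and property names are invented for illustration, and the cast assumes the DbContext API of EF 4/5 where LINQ operators return DbQuery&lt;T&gt; instances; newer EF versions expose an Include extension method on IQueryable&lt;T&gt; that makes the cast unnecessary:

```csharp
// Include before a projection is silently dropped once SelectMany reshapes the query:
var ignored = context.Customers
    .Include("Orders")                      // lost after the SelectMany below
    .SelectMany(c => c.Orders)
    .Where(o => o.Total > 100)
    .Select(o => o.Id);

// Moving the Include AFTER the first projection requires getting back a type
// that exposes Include — hence the cast to DbQuery<Order>:
var working = ((DbQuery<Order>)context.Customers
        .SelectMany(c => c.Orders))
    .Include("Lines")                       // now honored
    .Where(o => o.Total > 100)
    .Select(o => o.Id);
```

Treat this as a sketch of the technique described in the post, not a drop-in snippet; the exact cast target depends on your EF version.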


Happy coding all !

Sunday, May 4, 2014

So you want to move your TFS collection… read this first!

At my former client we needed to move the TFS collection from one TFS server to another. If you want to do the same thing, start by looking at the official procedure for doing so.

If you are using Lab Manager, you will need to re-create all your VMs in the lab (they might still be in SCVMM, but not inside Hyper-V; at least mine weren’t there anymore). If you are changing domains at the same time, the current users in the TFS collection are not going to be accessible in the other domain, BUT your work items will keep their fields (Assigned To) with their current values. You will only lose that value if you change it to something else; after that, the old domain user will no longer be available in the pick list. Your code base history will still use the old username/domain and keep functioning as it should. If, like me, you are using Team Build 2010, you will have to re-create your build server, controllers, and agents for the new collection.

One more note: if, like me, you did a dry run before the actual move, make sure that after the dry run you delete the collection AND RECYCLE THE IIS APP POOL for the site. I forgot to recycle the app pool, and when I finished moving the collection for good the second time, Web Access for work items would crash with a COM exception of some sort that I couldn’t figure out. Then one of the support people at the client recycled the app pool and boom, everything worked great. There must have been something in the app pool memory that pointed to the dry-run collection, while the new one probably had different ids or something like that. Anyway: run the procedure and reset the app pool!


Happy TFS-ing all !

Wednesday, April 30, 2014

Optimizing your Self Tracking Entities


Here’s a quick tip on how to optimize your .NET 4.0 Self Tracking Entities. On our last project at Fujitsu we ended up having to load hundreds of thousands of entities into collections of some of our STEs. Don’t ask why; the answer is “this is how the client wanted it”. If you have done that in the past, you know that loading that many entities into STEs can literally take many, many minutes. I mean, how can populating a collection take that long, or even seconds for that matter? When you profile the process, you can see where the STEs fail to cope with increasing load: they call the Contains method all the time, and Contains on a Collection&lt;T&gt; is slow compared to that of other containers like, say, a HashSet&lt;T&gt;.

So we added a private, non-serialized HashSet&lt;T&gt; to the TrackableCollection&lt;T&gt; class defined in the T4 file for the generated entities. Then we added a function to that class that checks whether an entity is already in the hash set, and used that function everywhere in the T4 template instead of the TrackableCollection&lt;T&gt; Contains method. Now, every time an entity is added to or removed from the TrackableCollection&lt;T&gt;, we do the same operation on the hash set, which is super fast and doesn’t affect performance much. So we saved a ton of time at the cost of duplicating entity references; yes, that’s a higher memory footprint, but since the hash set isn’t serialized it didn’t hurt us when transferring entities either. The hash set is simply repopulated, just like the normal TrackableCollection&lt;T&gt;, when deserializing, and ignored upon serialization. That was step one. Here’s what our TrackableCollection&lt;T&gt; now looks like:


public class TrackableCollection<T> : ObservableCollection<T>
{
    private readonly HashSet<T> hashSet = new HashSet<T>();

    internal bool HashSetContains(T element)
    {
        return hashSet.Contains(element);
    }

    protected override void ClearItems()
    {
        new List<T>(this).ForEach(t => Remove(t));
    }

    protected override void InsertItem(int index, T item)
    {
        if (hashSet.Add(item))
        {
            base.InsertItem(index, item);
        }
    }

    protected override void RemoveItem(int index)
    {
        var element = this[index];
        hashSet.Remove(element);
        base.RemoveItem(index);
    }

    protected override void SetItem(int index, T item)
    {
        var oldElement = this[index];
        hashSet.Remove(oldElement);
        hashSet.Add(item);
        base.SetItem(index, item);
    }
}



Step two was to stop using the Enumerable.Contains extension method and use the HashSet&lt;T&gt;.Contains method like we should be. Wait... what? Now I’m sure you’re saying, “I’d never use the Enumerable.Contains method over HashSet&lt;T&gt;.Contains, because I know Enumerable.Contains is O(n) and a hash set lookup is O(1)”. Yes, we know that too. But if you open the T4 that generates your Entity Framework context, you will find a method in there just like this one:

public bool Contains(object entity)
{
    return _allEntities.Contains(entity);
}

Now notice that entity is declared as an object in that Contains method.

Also, just a few lines above that Contains method, you will see how _allEntities is declared:

private readonly HashSet<IObjectWithChangeTracker> _allEntities;


So you’d think that since the code calls Contains on a HashSet, the CLR would call the hash set’s instance method and not the extension method with the same name... but you’d be wrong. :) Overload resolution looks at the type of the argument: because entity is declared as object and not as IObjectWithChangeTracker, the instance method HashSet&lt;IObjectWithChangeTracker&gt;.Contains isn’t applicable, and the compiler binds to the Enumerable.Contains extension method instead. When we profiled this code, we saw it go to the extension method every time. So what did we do? We simply cast the entity to an IObjectWithChangeTracker and bam, on our next run we hit the hash set’s Contains method, and man was it fast this time. :)

public bool Contains(object entity)
{
    return _allEntities.Contains(entity as IObjectWithChangeTracker);
}



Try this for yourself; it’s simple, and T4s are great for this kind of poking around. :)
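The same resolution trap is easy to reproduce outside of EF. A minimal sketch, with the entity types replaced by plain strings for self-containment:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static readonly HashSet<string> _all = new HashSet<string> { "alpha", "beta" };

    // Parameter typed as object: the instance method HashSet<string>.Contains(string)
    // is not applicable, so the compiler binds to the Enumerable.Contains<object>
    // extension method (applicable through IEnumerable<T> covariance) — an O(n) scan.
    static bool SlowContains(object entity)
    {
        return _all.Contains(entity);
    }

    // Casting the argument restores the O(1) instance method, just like the T4 fix.
    static bool FastContains(object entity)
    {
        return _all.Contains(entity as string);
    }

    static void Main()
    {
        Console.WriteLine(SlowContains("alpha")); // True, via Enumerable.Contains
        Console.WriteLine(FastContains("alpha")); // True, via HashSet<string>.Contains
    }
}
```

Both calls return the same answer; the only difference is which Contains gets bound, which is exactly why the problem only shows up under a profiler.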

I hope this leads to lots of improvements in your large collections inside STEs…

Happy coding all !

Monday, April 28, 2014

Are your WPF Apps leaking memory? Maybe it’s a feature of WPF!


So, the WPF app we’ve been building for the past 18 months had memory leaks, and we needed to find and fix them. I’m not going to talk about how to find them (aside from mentioning that you should get a good third-party memory profiler) and will focus on a rule we ended up checking throughout our entire application to avoid memory leaks. First off, this had nothing to do with simple references not being released, or a cache that wasn’t cleared, or anything like that. It was entirely WPF-related, and maybe some of you WPF gurus would have found it fast, but it wasn’t obvious to us what we were doing wrong. Apparently this is a known situation at MS, and by design; so instead of calling it a bug, we’ll be politically correct and call it a “feature”. :)

So, to avoid wrongfully triggering that “feature”, whenever you create a binding, whether in XAML or in code (C# or VB.NET), you must satisfy at least one of these three conditions:

- The binding source object implements INotifyPropertyChanged

- The source property is a dependency property

- The binding mode is OneTime

If none of the three conditions is met, the “feature” kicks in, and I’m pretty sure you need that “feature” active about as much as you need a tooth pulled without anesthesia. I’m joking with the “feature” thing, but it really can bite you big time if you don’t pay attention to it. We ended up building a tool that tells us whether there are bindings that don’t match this rule. The tool is activated by a key combo in our application and sends the binding information to the clipboard for the devs to check out. The best thing would have been a custom Code Analysis rule generating a compile-time error, but we didn’t have time for that.
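For the first condition, here is a minimal sketch of a binding source that satisfies it (the class and property names are made up). Binding to Name with a default binding mode then uses the PropertyChanged event rather than the leaky PropertyDescriptor-based listening path:

```csharp
using System.ComponentModel;

// Usable from XAML as, e.g., Text="{Binding Name}" once set as DataContext.
public class PersonViewModel : INotifyPropertyChanged
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set
        {
            if (_name != value)
            {
                _name = value;
                OnPropertyChanged("Name");
            }
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        // Copy to a local to avoid a race with unsubscription.
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

Any of the three conditions is enough on its own; this one is usually the cheapest to retrofit onto plain CLR view models.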

Here’s the code and it’s coming from the web…

Thanks to my co-workers Francois Cantin and Gabriel Létourneau at Fujitsu for digging and finding this stuff. :)

Happy coding all.


private static void GetReflectPropertyDescriptorInfo()
{
    List<ReflectPropertyDescriptorInfo> listInfo = new List<ReflectPropertyDescriptorInfo>();

    // get the ReflectTypeDescriptionProvider._propertyCache field
    Type typeRtdp = typeof(PropertyDescriptor).Module
        .GetType("System.ComponentModel.ReflectTypeDescriptionProvider");

    FieldInfo propertyCacheFieldInfo = typeRtdp.GetField("_propertyCache",
        BindingFlags.Static | BindingFlags.NonPublic);

    Hashtable propertyCache = (Hashtable)propertyCacheFieldInfo.GetValue(null);
    if (propertyCache != null)
    {
        // try to make a copy of the hashtable as quickly as possible (this object can be accessed by other threads)
        DictionaryEntry[] entries = new DictionaryEntry[propertyCache.Count];
        propertyCache.CopyTo(entries, 0);

        FieldInfo valueChangedHandlersFieldInfo = typeof(PropertyDescriptor).GetField("valueChangedHandlers",
            BindingFlags.Instance | BindingFlags.NonPublic);

        // count the "value changed" handlers for each type
        foreach (DictionaryEntry entry in entries)
        {
            PropertyDescriptor[] pds = (PropertyDescriptor[])entry.Value;
            if (pds != null)
            {
                foreach (PropertyDescriptor pd in pds)
                {
                    Hashtable valueChangedHandlers = (Hashtable)valueChangedHandlersFieldInfo.GetValue(pd);
                    if (valueChangedHandlers != null && valueChangedHandlers.Count != 0)
                    {
                        listInfo.Add(new ReflectPropertyDescriptorInfo(entry.Key.ToString(), pd.Name,
                            valueChangedHandlers.Count));
                    }
                }
            }
        }
    }

    // sort the results (ReflectPropertyDescriptorInfo implements IComparable for this)
    listInfo.Sort();

    Clipboard.SetText(string.Join(Environment.NewLine, listInfo.Select(i => string.Format("{0} - {1}", i.TypeName, i.PropertyName))));
}

public sealed class ReflectPropertyDescriptorInfo : IComparable<ReflectPropertyDescriptorInfo>
{
    public ReflectPropertyDescriptorInfo(string typeName, string propertyName, int handlerCount)
    {
        m_typeName = typeName;
        m_propertyName = propertyName;
        m_handlerCount = handlerCount;
    }

    public string TypeName
    {
        get { return m_typeName; }
    }

    public string PropertyName
    {
        get { return m_propertyName; }
    }

    public int HandlerCount
    {
        get { return m_handlerCount; }
    }

    public string DisplayHandlerCount
    {
        get
        {
            return m_handlerCount == 1 ? "" : string.Format(CultureInfo.InvariantCulture,
                " ({0:n0} handlers)", m_handlerCount);
        }
    }

    public int CompareTo(ReflectPropertyDescriptorInfo other)
    {
        if (object.ReferenceEquals(other, null))
            return 1;

        int compareResult = m_typeName.CompareTo(other.m_typeName);
        if (compareResult == 0)
            compareResult = m_propertyName.CompareTo(other.m_propertyName);
        if (compareResult == 0)
            compareResult = m_handlerCount.CompareTo(other.m_handlerCount);
        return compareResult;
    }

    readonly string m_typeName;
    readonly string m_propertyName;
    readonly int m_handlerCount;
}

Sunday, April 27, 2014

How I implement the Singleton pattern in .NET


I thought I'd give my point of view regarding the singleton pattern and how I prefer implementing it in .NET. Some people will argue that the way I do it is not "the way it should be done", but again: this is only my way of doing it, not THE only way everyone should do it. In this post I'm not talking about a singleton in a system sense, like WCF's InstanceContextMode = Single or like we did back in the old days of .NET Remoting; I'm talking about a simple but useful AppDomain singleton.

In my view, an AppDomain singleton simply means "a class that holds state shared throughout all components of this AppDomain". I know that normally you should not be able to instantiate this class more than once, and all that yada-yada. But what if you didn't have to instantiate the class at all? Hmmm... That would mean no need to create a class with no public constructor and only a private one, which is a little awkward for junior devs. It would also mean not having to explain how a class can expose an instance of itself. Interesting! Why not simply use a static class?

Static classes are AppDomain-scoped. They can hold state (static, of course), are easy to use, and are simple to understand. Do you know what I love best about this approach? It doesn't require the locking, perhaps even double-checked locking, you normally see to ensure a singleton isn't instantiated twice per AppDomain. Since the class is static, initializing it isn't something your code does at all: the CLR initializes the class, and only the CLR can do it; there's nothing you can do to prevent or alter that. And since the CLR does the initialization, the class is automatically prevented from being initialized twice inside a single AppDomain. Brilliant! Less plumbing, same functionality, simple to design and use. So my way of designing singletons is to create a static class with state inside.

Now, what if I want to run a bit of code before anyone can use my singleton? For example, what if my singleton is in fact a cache I want to initialize when my app starts, so I need to fill that cache before it can be used? Then I use the static constructor of that class as the place to put my cache-loading code. Critics of this way of implementing the singleton pattern will tell you that this means you never know when your cache will be filled for the first time. They will say the CLR calls the static constructor the first time a component "touches" the static class... and they are right. To counter that, when I need to load my cache at a specific point in time and not on first use, I simply create a static "Initialize" method that does NOTHING. I comment it quite a bit and tell other devs not to delete it, because calling this method kicks off the static constructor, effectively populating my cache when I want it to.

So there you have it: a simple singleton that takes about 10 seconds to implement.

Happy coding folks!!! (Sorry for the previous hard-to-read posts; I just discovered Live Writer... at last the pain is gone!)

Meeting .NET: The missing link – hypermedia in Web API.

In Montreal on May 26th, Darrel Miller will come to the .NET Montreal user group to talk about hypermedia in Web API. Here's a bit more info about the event: Web API provides a foundation for building HTTP-based distributed systems; however, the support for generating hypermedia-based responses is minimal. This talk will try to fill the gap by providing a practical example of building a real-world hypermedia-driven API in ASP.NET Web API. Additionally, we will discuss some of the recent changes in Web API 2 and upcoming changes in Web API 2.1, including the significance of Owin and Katana in the Web API world.

Friday, April 18, 2014

A full day of architecture conferences in Montreal !

On Saturday, May 10th, our user group is offering its members a full day of conferences on emerging architectures, including architectures for cloud-based solutions! You have to be a member of .NET Montreal to attend, and there is a $10 fee. All conferences will be in French.

Sunday, April 13, 2014

Meeting Azure - Top Azure features every ASP.NET developer should know about

On Monday, April 28th, the father of the first Azure user group, Bill Wilder, comes to Montreal. Bill is travelling from Boston for this event, and we know this is going to be a meeting Montreal devs won't want to miss...

Friday, March 21, 2014

SQL Server 2014 launch event in Montreal !

On Saturday, April 12th, the Microsoft office in Montreal will host a full day of SQL Server 2014 presentations to mark the launch of this great product. The product is actually available on April 1st (no April fool here). For more information, check out:

Wednesday, March 12, 2014

Surface 2 replaced !!!!

Hi all, if you have your receipt, either the paper one or (hopefully) the electronic one sent to your email address, you should be able to walk up to the company store nearest you and have your Surface 2 replaced for free. Mine got replaced, but I had to pull some strings from the MVP program, and I tend to believe not every MS customer is an MVP... It took roughly 3 months to get it all sorted out, but in the end I have a brand-new Surface 2 and it works like a charm. :)

Wednesday, December 18, 2013

No serial number on my brand new surface...

Hi everyone, I thought I'd share my current experience for those of you who might be in the same position as I am right now: I own a Microsoft Surface 2 (RT) and cannot find the serial number on it. So before you reply and tell me how dumb I am for not being able to see the serial number under the kickstand, please take a deep breath and read on.

My Surface currently reboots constantly. I push the button on top; half a second later the word SURFACE appears for another half a second and bam... reboot... and a second later, reboot, and on and on. So I said OK, I've got a problem, but it's no big deal since the Surface comes with a one-year warranty. All I need to do is find my serial number. It's either on the original box (which I'm sure everyone keeps, just in case) or under the kickstand between the "Surface" and the "32GB" or "64GB" markings... which apparently mine does not have. How can that be? So I need support for my Surface, but step one in registering for support is providing your serial number, and I don't have one.

I may be out of luck, but I still reached out to a few other MVPs, and together we reached out to Microsoft through our direct links there (not the support line); apparently they will keep us posted when they figure out why some Surfaces don't have a serial number. Another MVP in Montreal has the same no-serial-number problem with his Surface, so that might help. I will post here as soon as I have news on what one should do when their Surface has no serial number and they need support, but for now, we wait. Alright, that's it for today; I hope I can post a solution to this problem soon! Happy "Surfacing" all!

Tuesday, November 26, 2013

Creating your own dispatcher thread

I haven't posted here in a while, and I see the final result is ridiculous... guess I'll have to learn those fancy posting tags. Alright: I know there are a lot of tools out there built for integration testing, but I figured I would post what could be considered a poor man’s solution for integration testing, and which works amazingly well. I’m using MSTest as the harness for starting my integration tests, and the solution I’m building is a WPF 3-tiered app built using the MVVM pattern. So basically I’m supposed to instantiate ViewModel classes and populate their fields and collections so I can fire the appropriate commands and get my test to actually do something. My main problem is that these ViewModels are full of async and await calls, which means those commands have to be fired on a different thread than the one the unit test runs on; otherwise my unit test ends before the async code completes. This also means I need to wait for that other thread to finish before I can go on with the rest of my integration test. Add a bit more complexity to the mix: our homemade set of tools for firing commands using async and await forces that call to be made on the WPF dispatcher thread... Is there such a thing as a dispatcher thread when running tests under MSTest?
Nope… So you need to create your own dispatcher thread and tell the .NET Framework that this new thread is the one that should be considered the dispatcher thread. Here’s the code I used for that, with a few justifications of why I did this and that:

internal Thread CreateDispatcher()
{
    ManualResetEvent dispatcherReadyEvent = new ManualResetEvent(false);

    var dispatcherThread = new Thread(() =>
    {
        // This is here just to force the dispatcher infrastructure to be set up on this thread
        Dispatcher.CurrentDispatcher.BeginInvoke(new Action(() => { }));

        // Run the dispatcher so it starts processing the message loop
        dispatcherReadyEvent.Set();
        Dispatcher.Run();
    });

    dispatcherThread.SetApartmentState(ApartmentState.STA);
    dispatcherThread.IsBackground = true;
    dispatcherThread.Start();
    dispatcherReadyEvent.WaitOne();

    SynchronizationContext.SetSynchronizationContext(new DispatcherSynchronizationContext());
    return dispatcherThread;
}

Now a few comments on the above. Why do I need to synchronize my code with a ManualResetEvent? Because I need to make sure the SetSynchronizationContext call executes after the thread-start action has executed this very important line: Dispatcher.CurrentDispatcher. Note that the action inside that BeginInvoke does not need to have finished executing; in fact, in my case the action does nothing! Simply touching Dispatcher.CurrentDispatcher is enough to make the current thread the dispatcher thread. Another very important thing for me was to set the apartment of the new dispatcher thread to STA, so it reflects the real apartment of a UI thread in .NET. Now all I need to do going forward is pass this thread around and use it when invoking async actions: Dispatcher.FromThread(myDispatcher).Invoke(someAction). Of course, you’ll need a way to wait for a newly fired action to complete, but with that I cannot help you; your code should already provide ways to do so. Hope this helps someone out there! Happy coding all.

Tuesday, April 9, 2013

Ottawa Code Camp Approaching even FASTER !

Hello all,


Just wanted to point out that the Ottawa Code Camp will be held on May 4th, 2013, at Algonquin College.

My session will be on "Optimizing your 3 tiers apps with current technologies", and we'll take a look at WCF threading, the Task Parallel Library, and the async/await pattern... all that in VS2012. Please note that all of this can be done in VS2010 if you apply the Async CTP 3 in your environment.

Here are the details:

Thursday, April 4, 2013

Dev Teach Toronto coming FAST !

Announcing DevTeach Toronto / Mississauga, May 27-31, 2013 !

Lots of great sessions during the main event and also many pre/post-conference workshops; don't miss out on training you can't get anywhere else!





Saturday, March 30, 2013

Exception handling when using Tasks

Tasks are very impressive once you manage to wrap your head around a few concepts. One subject I’d like to cover in this post is how to deal with exceptions in tasks; there are a few pitfalls one must not fall into here. First, remember that each task is responsible for its own error checking and error handling: tasks that do not handle their exceptions can crash your application. How can you avoid that? Continuations to the rescue! Task continuations can easily be used to deal with exactly this type of problem. All you have to do is “continue” your task once it’s done executing. The continuation code (typically an Action&lt;Task&gt;) is the right place to check whether an exception happened during the execution of the original task. That is where the concept of “observing” a task comes in: a task is deemed “observed” if you check its Task.Exception property. For example, look at the code below, where task1 is continued so that exceptions can be caught. Inside the continuation, you check whether an exception occurred in task1, like this:


var task1 = Task.Factory.StartNew(() =>
{
    throw new MyCustomException("Task1 faulted.");
})
.ContinueWith((originalTask) =>
{
    if (originalTask.Exception != null)
    {
        Console.WriteLine("I have observed a " + originalTask.Exception.ToString());
    }
});




Task1 is now observed and will not crash your application when it throws the exception. Are there other alternatives? Yes. A better way of handling the above would be to pass the OnlyOnFaulted option to the continuation task (and skip the null check), so the continuation only runs when the original task is faulted. Another way would be not to use a continuation at all, having previously hooked your code up to the TaskScheduler.UnobservedTaskException global event handler. This event is your last chance to log the exception before your application possibly crashes; for the application not to crash, you'll need to call the SetObserved method on the UnobservedTaskExceptionEventArgs parameter of the event. Typically I recommend that you use continuations on each task AND hook up to this "global task handler" as an additional safety net.
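A sketch of the OnlyOnFaulted variant mentioned above; the custom exception type is swapped for a built-in one so the snippet stands alone:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        bool observed = false;

        var continuation = Task.Factory.StartNew(() =>
        {
            throw new InvalidOperationException("Task1 faulted.");
        })
        .ContinueWith(originalTask =>
        {
            // No null check needed: this continuation only runs when the
            // antecedent task ends up in the Faulted state.
            observed = true;
            Console.WriteLine("I have observed: " + originalTask.Exception.GetBaseException().Message);
        }, TaskContinuationOptions.OnlyOnFaulted);

        continuation.Wait(); // does not rethrow: the exception was observed
        Console.WriteLine(observed); // True
    }
}
```

Note that if the antecedent did NOT fault, an OnlyOnFaulted continuation is canceled instead of run, so waiting on it directly would then throw; in real code you typically also attach a normal continuation path.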

Happy coding all !


Saturday, January 5, 2013

How I got burned with Automation Ids and virtualized content in XAML controls...

Not so long ago on our project at work, we had to create shared steps in Microsoft Test Manager for playback later on. The screen we used contained two instances of the same custom-made combobox, both displaying a list of countries: one at the top of the screen and the other at the bottom. The combobox already supported automation IDs and could "auto-magically" generate the right automation ID for each entry displayed in the list portion of the combobox, according to the key of each element to display; in this case, the country name. Remember that this portion of the control is VIRTUALIZED.

Now, the interesting part... We recorded our test in MTM, and in this simple test we pick a country from the list of countries at the bottom of the screen. We had done enough recordings and coded UI tests on this control to know it worked great. When playback time came, we saw our recording do exactly what it was supposed to do, pick the country we had selected, EXCEPT it did it in the upper combobox instead of the bottom one! Hmmm... how weird! The two controls had different automation IDs, but they had the same automation IDs for the virtualized content, because they both displayed the same kind of information: countries.

OK, so the solution was simple: concatenate the control's unique automation ID with the unique content of each virtualized row.

The code behind for the control overrides PrepareContainerForItemOverride like so:

// Declare this property inside your control and initialize it inside your constructor
public BindingBase BindingAutomationId { get; set; }

protected override void PrepareContainerForItemOverride(DependencyObject element, object item)
{
    base.PrepareContainerForItemOverride(element, item);
    DataGridRow row = element as DataGridRow;
    if (row != null)
    {
        // Can't put these lines in the constructor, because the GetAutomationId call consistently returns null from inside it.
        // The magic is here:
        this.BindingAutomationId.StringFormat = AutomationProperties.GetAutomationId(this) + "_{0}";
        row.SetBinding(AutomationProperties.AutomationIdProperty, BindingAutomationId);
    }
}

So now each virtualized item inside this control has a unique automation ID, and the playback works perfectly.

Hope this saves you a ton of time trying to figure out why your playback won't pick the control you selected during the recording phase.


Wednesday, September 26, 2012

Files for .NET Montreal and VTCC4 conference


Here are the files for both the .NET Montreal presentation given on September 24th and the Vermont Code Camp #4 presentation given on September 22nd, regarding architecture problems and solutions linked to EF 4.0, the async/await keywords and the Task Parallel Library.

This zip file includes both PowerPoints, in French and English, and the DemoApplication which is, I REMIND YOU, VERY DEMO-WARE and doesn't handle task-level exceptions or context switching.




Tuesday, July 17, 2012

Learning the hard way: Uninstalling .NET Framework 4.5RC

Uninstalling the .NET Framework 4.5RC can be a real mess, let me explain…

I had a perfectly functional VM on which I tried to install the 4.5RC version of the .NET Framework to test out some of the new features of EF.  Since what I wanted to test didn't work, and since I THOUGHT 4.5 and 4 were side by side, I decided to go back to plain 4 and uninstall the 4.5RC from the VM.  Big mistake….  Now Visual Studio would not work at all and kept saying “Unknown Error” when I started it…  After further investigation, I found a post explaining that when you uninstall the 4.5 framework, it automatically uninstalls the 4.0 framework and ANYTHING related to it!  D’oh !!!


I re-installed the 4.0 framework and SP1, and Visual Studio 2010 started working again.  But I wasn’t done fixing issues yet…  My application was using EF4 and ODP.NET, and now, when I opened up the EDMX, I would see the following error:

error 175 the specified store provider cannot be found in the configuration or is not valid


I was quite annoyed by this, so I decided to simply try to regenerate a new EDMX file from the database… but I couldn’t !!!  In the drop-down box where you choose your provider for the EF connection, Oracle ODP.NET had disappeared !!!!  *me throws holy water all around*  !!!   After much digging around, I found out that the MACHINE.CONFIG file had been modified by the uninstall process and that a very important line had been removed…

If you have this error and you use EF with a provider other than SQL Server, open your machine.config file and check whether the DbProviderFactories section (under the system.data element) still contains the reference to your provider.
Mine needed to look like this:



<add name="Oracle Data Provider for .NET" invariant="Oracle.DataAccess.Client" description="Oracle Data Provider for .NET" type="Oracle.DataAccess.Client.OracleClientFactory, Oracle.DataAccess, Version=, Culture=neutral, PublicKeyToken=89b483f429c47342" />
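For context, here is a minimal sketch of where that line lives in machine.config. This is illustrative, not a copy of my file: the SqlClient entry is the stock .NET 4 one, and the Oracle Version attribute is a placeholder you should replace with the version of your installed ODP.NET assembly:

```xml
<configuration>
  <system.data>
    <DbProviderFactories>
      <!-- Stock SQL Server entry, normally present by default -->
      <add name="SqlClient Data Provider" invariant="System.Data.SqlClient"
           description=".Net Framework Data Provider for SqlServer"
           type="System.Data.SqlClient.SqlClientFactory, System.Data, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
      <!-- The Oracle entry the 4.5RC uninstall removed; restore it (or re-install ODP.NET).
           Version=x.x.x.x is a placeholder for your installed Oracle.DataAccess version. -->
      <add name="Oracle Data Provider for .NET" invariant="Oracle.DataAccess.Client"
           description="Oracle Data Provider for .NET"
           type="Oracle.DataAccess.Client.OracleClientFactory, Oracle.DataAccess, Version=x.x.x.x, Culture=neutral, PublicKeyToken=89b483f429c47342" />
    </DbProviderFactories>
  </system.data>
</configuration>
```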





Restart your VS and your original EDMX will not complain about error 175 anymore…

But that wasn’t all: I ended up re-installing ODP.NET because my machine.config file was really messed up and was missing many entries it previously had…  So a word of wisdom (which I didn’t follow, stupid me): take a snapshot of your VM before “trying out” 4.5RC, in case you need to uninstall it later, or back up your PC….

Hope this saves you some time…


Monday, July 9, 2012

Multiple instances of Intellitrace.exe process

Not so long ago I was confronted with a very bizarre problem… I was using Visual Studio 2010, and whenever I opened up the Test Impact view, my PC’s performance would suddenly go down drastically…  Investigating this problem, I found out that hundreds of “Intellitrace.exe” processes had been started on my system, and I could not close them as they would restart as soon as I closed one.  That was very weird.  So I knew it had something to do with Test Impact, but how could this feature and Intellitrace.exe going crazy be related?  After a bit of thinking I remembered that a teammate (Etienne Tremblay, ALM MVP) had told me once that he had seen this issue before, just after installing a MOCKING FRAMEWORK that uses the .NET Profiler API…  Apparently there’s a conflict between the Test Impact features of Visual Studio and some mocking products using the .NET Profiler API…  Maybe because VS 2010 also uses this API for Test Impact purposes, I don’t know…

Anyways, here’s the fix…  Go to VS 2010 and click the “Test” menu, then go to “Edit Test Settings” and apply the following actions to EACH test settings file (normally two files, “Local” and “TraceAndTestImpact”):

- Select the Data and Diagnostics option on the left

- Make sure that the ASP.NET Client Proxy for Intellitrace and Test Impact option is NOT SELECTED

- Make sure that the Test Impact option is NOT SELECTED

- Save and close


Edit Test Settings


Problem solved…  For me, having to choose between the “Test Impact” features and the mocking framework was a no-brainer: bye bye, Test Impact…  I did not investigate much on this subject, but I feel there might be a way to have them both working by enabling one after the other in a precise sequence…  Feel free to leave a comment if you know how to make them both work at the same time!


Hope this helps someone out there !




Tuesday, April 17, 2012

DevTeach Vancouver approaching fast !

Just a friendly reminder for people in the Vancouver area that DevTeach Vancouver is just a few weeks away!  Registration is open and I can't help but promote a full day of TFS 2010 workshop given by Etienne Tremblay and myself; plus we will most likely add extra material to cover TFS vNext...  The four added topics will be:

Moving from TFS 2010 to TFS vNext

The Storyboarding add-in for PowerPoint

Intellitrace in a Production Environment

Exploratory Testing


Also, I'll be presenting a one-hour session on mocking and mocking frameworks during the main event.  We'll compare Isolator, JustMock and Moq...

See you in Vancouver !

Monday, April 16, 2012

Coded UI Test Builder Visual Cues Offset

Wow it's been a long time since I posted anything in here.....

Today I'll be very brief because the subject is quite easy to cover but can be quite puzzling when it happens to you...  These days at work I'm exploring Coded UI Tests in VS2010, Microsoft Test Manager 2010 and Microsoft Test Runner 2010, which is cool because I've been digging around VS2010's testing tools on my own for a year now and have also started focusing on vNext's testing tools...  So when you automate a test you will most likely end up having to use the Coded UI Test Builder shown here

When inspecting controls in your app with the little "Target" tool, you may find your controls being highlighted "in the wrong place" on your screen, as if there were some sort of offset between the control you are pointing to and the visual rectangle cue the tool draws to say "here's the control I think you're pointing to"...  Looking at the picture here, you can see it's pretty annoying to point at a control and watch the tool inspect the right object but highlight it lower and to the right of where the control actually is...

I have no clue if this only happens in WPF, but here's the solution, or at least what worked for me...  In my case I was using the "Medium - 125%" display setting in the personalization options of my Windows 7 laptop...  The Coded UI Test Builder only works well when your display is set to 100% (the smallest setting in my case).  Change that option to 100% and everything will start being highlighted in the right place...  I do think this was intentional and that the tool was built to work only at 100%...  What a shame, but now you know, so stop reading and go back to work!

Happy automating :)

Sunday, September 25, 2011

A full day of Azure conferences...

On October 15th, the Montreal .NET User Group will hold a special event... a full day of conferences and workshops on Azure!  The speakers for the special event will be our very own Guy Barrette, Azure MVP, Sébastien Warin, also an Azure MVP, and Cory Fowler, who just happens to be yet another Azure MVP!  Ain't it just amazing how many Azure MVPs we managed to pack into the same room for you to learn from?  All this for one low price... $10... and you have to be a registered, paid member of the .NET Montreal User Group...


Circle the date on your calendar: Saturday, October 15th, from 9:00 a.m. to 4:30 p.m. at UQAM, room R-M110.


Cheers !



Copyright © Vincent Grondin