Comparing WFE servers
One of the SharePoint farms I am involved with has a load-balanced pair of WFEs (web front ends). One of the web parts had been acting sporadically – at times working well, at others not at all – and I suspected that it might be configured wrongly on one of the servers and correctly on the other. So I went and checked the GACs on both servers. I did this by writing a PowerShell script that garners the assemblies and outputs them into a spreadsheet. Now, with two spreadsheets in hand, I proceeded to examine them for differences.
From the item counts alone it was obvious that the twain were not identical, but I needed detail, and with hundreds of items in each and but a pair of eyes – tired eyes – the task was daunting.
Well, I naturally assumed that, as with Word, I could use a built-in diff. Alas, Microsoft did not build this into Excel. The various packages I found on the web did not do a great job either. I did not need to see cell-by-cell editing differences, or whether one cell had a macro and another did not. All I needed was to find which assemblies are common to the two servers and which are unique to each.
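At its core the requirement is simple set arithmetic on the assembly names: intersection for the common entries, difference for the unique ones. Here is a minimal sketch, in Python rather than PowerShell for brevity; the assembly names are made up for illustration, the real input being the two GAC CSV exports:

```python
# Hypothetical assembly lists from the two servers (illustrative names only).
server_a = {"Microsoft.SharePoint", "MyWebPart", "Shared.Utils"}
server_b = {"Microsoft.SharePoint", "Shared.Utils", "OldWebPart"}

common = sorted(server_a & server_b)   # present on both servers
only_a = sorted(server_a - server_b)   # unique to server A
only_b = sorted(server_b - server_a)   # unique to server B
```

This set-based form loads everything into memory; the merge-style approach described below achieves the same result by streaming two sorted files line by line.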
Again PowerShell came to the rescue. I love this thing and I enjoy learning new things about it, so I wrote a little PowerShell script that highlights the differences.
I tested my script using the two sheets below. The asterisks in column E identify the unique entries in each. Entries lacking this asterisk are common. Note that the sheets are also of different lengths.
Finding common and unique rows is an easy enough task. The spreadsheets have to be sorted (both ascending) and compared.
Here is a piece of history. Before personal computers and local area networks, we had central computing: mainframes in the very beginning, and then minicomputers. The PC is just about 36 years young. In the very beginning we did not even have dumb terminals by which people could get information out of, or feed it into, the beast. Instead we used punched tape, punched cards, magnetic tape and disk (physically much bigger, much more expensive, and much smaller in capacity than today's). Jobs were run as batches. Still, data processing needed to be done, and the sorting and merging of files was a major part of the effort.
I have taken the old IBM mainframe sort-merge algorithm, actually the merge part, and twisted it a little to match the task of comparing files. The compare part is the major ingredient in the merge algorithm, but here I used it for reporting rather than merging.
Enough with history. How is it done? You read the first line from both files. If A is less than B, you report A as unique and read the next line of A (and compare again). If B is less than A, you report B as unique and read the next line of B. Whichever is smaller is reported as unique and its next line is read. If the lines are equal, you report them as common and read the next line from both.
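The steps above can be sketched as a small function. This is shown in Python for brevity; the published script is PowerShell, and the tag names here are made up for illustration:

```python
def compare_sorted(a_lines, b_lines):
    """Merge-style compare of two ascending-sorted sequences.

    Yields (line, tag) pairs, where tag is 'common', 'unique-A' or 'unique-B'.
    """
    ia = ib = 0
    while ia < len(a_lines) and ib < len(b_lines):
        if a_lines[ia] < b_lines[ib]:
            # A is behind: this line exists only in A
            yield a_lines[ia], "unique-A"
            ia += 1
        elif b_lines[ib] < a_lines[ia]:
            # B is behind: this line exists only in B
            yield b_lines[ib], "unique-B"
            ib += 1
        else:
            # Equal lines: common to both; advance both files
            yield a_lines[ia], "common"
            ia += 1
            ib += 1
    # The files may be of different lengths: flush the leftovers
    for line in a_lines[ia:]:
        yield line, "unique-A"
    for line in b_lines[ib:]:
        yield line, "unique-B"
```

Note that the trailing loops handle the different-length case mentioned earlier: once one file is exhausted, everything left in the other is by definition unique to it.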
I used these two sheets and ran the script. Below are the results. Notice the use of color to accentuate the differences and make the report easier to read. The script also produces the same report in RTF format.
I also ran the report on a real set of two GAC lists from two WFEs. The screenshot below shows the result of their comparison. The report has hundreds of lines in it, so only the end is shown.
Finally, the code. There are two scripts involved. Find them at the following links:
Garnering the GAC assembly list in: http://www.mgsltns.com/GacListToCsv.txt
Comparing the Csvs in: http://www.mgsltns.com/CompareCsv.txt
Before you run them, change the extensions from ‘txt’ to ‘ps1’.
Also note that because my site is hosted on a Unix system, the links are case sensitive. You may be better off just clicking on them.
That’s All Folks
Oh, and it is best to view code in a smart editor, so change the extension and view the code in Notepad++ or PowerGUI Script Editor (or another good editor of your choice).