Do Some Solr Searching…at CMAP

I spoke at CMAP’s monthly meeting last night on using the Solr search service with your .NET application.  While it’s a somewhat niche topic, a good crowd showed up and had a number of great questions.  I really enjoyed it and look forward to coming back to CMAP (probably at their next code camp) and presenting again!

Slides

I’ve uploaded my slide deck to SlideShare.  Here are the embedded slides, and you can download the deck from there.

 

Sample Code

I’ve uploaded my code to GitHub.  You can get it from https://github.com/DavidHoerster/solrsearching .
I’ll also upload my configurations for the two collections (historicalQuotes and baseball) shortly.

 

Oops…Adding a Core

So in my excitement to present (or maybe it was all the caffeine), I forgot a simple step in setting up your Solr environment – creating a new collection.  It’s pretty simple to do, especially if you base your new collection on an existing one.

  • Let’s start with the default collection1 that’s present when you extract Solr.
  • Create a new directory at the same level as collection1 (probably under c:\solr\solr).  Call it baseball.
  • Copy the c:\solr\solr\collection1\conf directory to c:\solr\solr\baseball
  • Open c:\solr\solr\baseball\schema.xml in your favorite text editor and set the fields, dynamic fields, types and copyFields that you want baseball to have.  Don’t forget to set the <uniqueKey> element, too!
  • Save the schema.xml file
  • Go to the Solr Admin UI and go to Core Admin
  • Select ‘Add Core’ and enter the relative path (under c:\solr\solr) for your new collection – in our example, it’s ‘baseball’.  Also, name your collection (maybe baseballStats).  The other three fields should default fine (assuming you haven’t renamed your schema or config files).  Save it, and the Admin UI should add the collection.  You can double-check that everything went fine by refreshing the screen, selecting the baseballStats core from the dropdown and then selecting Schema Browser.  Make sure your fields are there. 
  • Now you can start adding data!
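For reference, here’s roughly what the edited pieces of schema.xml might look like (a sketch for the Solr 4.x-era schema format – the baseball field names below are made up; use whatever fields your collection actually needs):

```xml
<schema name="baseball" version="1.5">
  <fields>
    <!-- the uniqueKey field must be indexed and required -->
    <field name="id" type="string" indexed="true" stored="true" required="true" />
    <field name="playerName" type="text_general" indexed="true" stored="true" />
    <field name="team" type="string" indexed="true" stored="true" />
    <field name="homeRuns" type="int" indexed="true" stored="true" />
  </fields>
  <!-- don't forget this element! -->
  <uniqueKey>id</uniqueKey>
</schema>
```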

Overall I had a great time and I hope everyone got something out of some Solr searching at CMAP!  Thanks!!

IIS 7’s Sneaky Secret to Get COM-InterOp to Run

If you’re like me, you don’t really do a lot with COM components these days.  For me, I’ve been ‘lucky’ to stay in the managed world for the past 6 or 7 years.

Until last week.

I’m running a project to upgrade a web interface to an older COM-based application.  The old web interface is all classic ASP and lots of tables, in-line styles and a bunch of other late 90’s and early 2000’s goodies.  So in addition to updating the UI to be more modern looking and responsive, I decided to give the server side an update, too.  So I built some COM-InterOp DLLs (easily, through VS2012’s Add Reference feature…nothing new here) and built a test console app to make sure the COM DLLs were actually built according to the COM spec.  (There’s one document management system I can think of whose COM DLLs were not proper COM DLLs, and they crashed and burned every time .NET tried to call them through a COM-InterOp layer.)

Anyway, my test app worked like a champ, and I felt confident that I could build a nice façade around the COM DLLs, wrap some functionality internally and expose to my users/clients only what they really needed.

So I did this, built some tests and also built a test web app to make sure everything worked great.  It did.  It ran fine in IIS Express via Visual Studio 2012, and the timings were very close to the pure Classic ASP calls, so there wasn’t much overhead involved going through the COM-InterOp layer.

You know where this is going, don’t you?

So I deployed my test app to a DEV server running IIS 7.5.  When I went to my first test page that called the COM-InterOp layer, I got this pretty message:

Retrieving the COM class factory for component with CLSID {81C08CAE-1453-11D4-BEBC-00500457076D} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).

It worked as a console app and while running under IIS Express, so it must be permissions, right?  I gave every account I could think of all sorts of COM+ rights and nothing, nada, zilch!

Then I came across this question on Experts Exchange, and at the bottom of the page, someone mentioned that the app pool should be set to allow 32-bit apps to run.  Oh yeah – my machine is 64-bit, and these COM DLLs I’m using are old and definitely 32-bit.  I didn’t check for that and didn’t even think about it.  But I went ahead and looked at the app pool my web site was running under, and what did I see?  Yep – select your app pool in IIS 7.x, click on Advanced Settings and check for “Enable 32-bit Applications”.


I went ahead and set it to True and my test application suddenly worked.
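If you’d rather script the change than click through the UI, the same setting can be flipped with appcmd (a sketch – substitute your actual app pool name for the hypothetical one below):

```
%windir%\system32\inetsrv\appcmd set apppool "MyTestAppPool" /enable32BitAppOnWin64:true
```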

Hope this saves somebody out there from pulling out their hair.

Building a Mafia…TechFest Style

It’s been a few months since I last blogged (not that I blog much to begin with), but things have been busy.  We all have a lot going on in our lives, but I’ve had one item that has taken up a surprising amount of time – Pittsburgh TechFest 2012.  After the event, I went through some minutes of the first meetings for TechFest, and I started to think about how it all came together.  I think what inspired me the most about TechFest was how people from various technical communities were able to come together and build and promote a common event.  As a result, I wanted to blog about this to show that people from different communities can work together to build something that benefits all communities.  (Hopefully I've got all my facts straight.) 

TechFest started as an idea Eric Kepes and I had when we were planning our next Pittsburgh Code Camp, probably in the summer of 2011.  Our Spring 2011 Code Camp was a little different because we had a great infusion of some folks from the Pittsburgh Agile group (especially with a few speakers from LeanDog).  The line-up was great, but we felt our audience wasn’t as broad as it should have been.  We thought it would be great to somehow attract other user groups around town and have a big, polyglot conference.

We started contacting leaders from Pittsburgh’s various user groups.  Eric and I split up the ones that we knew about, and we just started making contacts.  Most of the people we started contacting had never heard of us, nor we of them.  But we all had one thing in common – we ran user groups whose primary goal was educating our members to make them better at what they do.

Amazingly, and I say this because I wasn’t sure what to expect, we started getting some interest from the various leaders.  One leader, Greg Akins, is, in my opinion, Pittsburgh’s poster boy for the polyglot programmer.  He’s helped us in the past with .NET Code Camps, is a Java developer (and leader in Pittsburgh’s Java User Group), works with Ruby and I’m sure a handful of other languages.  He helped make some e-introductions to other user group leaders, and the whole thing just started to snowball.

Once we realized we had enough interest with the user group leaders, we decided to not have a Fall Code Camp and instead focus on this new entity.

Flash-forward to October of 2011.  With the help of Jeremy Jarrell (Pittsburgh Agile leader), I set up a meeting with the leaders of many of Pittsburgh’s technical user groups.  We had representatives from 12 technical user groups (Python, JavaScript, Clojure, Ruby, PittAgile, jQuery, PHP, Perl, SQL, .NET, Java and PowerShell) – 14 people in all.  We likened it to a scene from a Godfather movie where the heads of all the families come together to make a deal.  As a result, the name “TechFest Mafia” was born and kind of stuck.

Over the next 7 months or so, we had our starts and stops.  There were moments where I thought this event wouldn’t happen – either because we wouldn’t have the right mix of topics (was I off there!), or enough people wouldn’t register (OK, I was wrong there, too!), or we wouldn’t find an appropriate venue (hmm…wrong there, too) or enough sponsors to help support the event (wow…not doing so well).  Overall, everything fell into place with a lot of hard work from Eric, Jen, Greg, Jeremy, Sean, Nicholas, Gina and probably a few others that I’m forgetting.  We also had a bit of luck, too.  But in the end, the passion that we had to put together an event that was really about making ourselves better at what we do really paid off.

I’ve never been more excited about a project coming together than I have been with Pittsburgh TechFest 2012.  From the moment the first person arrived at the event to the final minutes of my closing remarks (where I almost lost my voice – I ended up being diagnosed with bronchitis the next day!), it was an awesome event.  I’m glad to have been part of bringing something like this to Pittsburgh…and I’m looking forward to Pittsburgh TechFest 2013.  See you there!

Thoughts on Windows Phone 7…

So, first off, I’m by no means a mobile developer.  I’ve played around with some samples here and there, but I have not built anything significant to date.  So I’m not that familiar with the publishing process for iPhone, Android and WP7 applications.

OK, with that out of the way, I will say that I am an owner of 2 WP7 devices (one for me, and one for my wife).  We both like them a lot, think the UI is great and intuitive, and like how the device is laid out.  Honestly, when I go back to my iPod Touch device, I find it a bit clunky compared to the WP7 devices.

I have about the same number of apps on both devices – my iPod Touch has a few more financial and personal applications than my WP7 device does because of the sheer number of apps in the App Store.  But I have a good number of apps that exist on both platforms (e.g. Netflix, Amazon, IMDB, Evernote, etc.), and I think a good number of them look and perform better on the WP7 device.

But what bothers me is that the activity in the App Store dwarfs the activity in the WP7 Marketplace.  The apps on my iPod Touch seem to have updates every week or so – updates that provide fixes and functionality.  The apps from the WP7 Marketplace are few and far between, and usually don’t provide a lot of functionality.  Granted, this is not scientific – but more of an informal observation.

But what this tells me is that companies aren’t investing a lot in WP7 applications.  Looking through the WP7 Marketplace, while there are a large number of apps, there aren’t a lot of “professional” applications.  There’s no E-Trade, Schwab or an app for my personal bank; few productivity apps that are free; and just a bunch of odd apps scattered throughout the marketplace.  My guess is that a lot of developers are building apps for personal gain; but few companies are building apps.

All of this makes me concerned for the future of WP7.  I think it’s a great OS that has a lot of potential, but I’m afraid that it’s going to be a niche OS at best.  I think Microsoft should pay attention to some of the items in this blog post about how they can get WP7 out there.  If they don’t react and try to improve the reach of WP7, I think Microsoft’s future in the mobile space is over.

JSON.net and Deserializing Anonymous Types

I had a situation where I had to deserialize a small chunk of JSON-formatted data and I didn’t want to create a class for it since it was a very specific use and I was confident there wasn’t a need to reuse it elsewhere in the application.

 

If I did have a class defined for the JSON data, I could easily use JSON.NET’s JsonConvert.DeserializeObject method:

 

string myJson = "[{id: 10, typeID: 4},{id: 100, typeID: 3}]";
MyObject[] objs = JsonConvert.DeserializeObject<MyObject[]>(myJson);
Console.WriteLine(objs[0].id);

 

So this is easy to do, but I really didn’t want to define a MyObject class for a one-time use.  So I thought I’d go the route of an anonymous type and use JSON.NET’s JsonConvert.DeserializeAnonymousType method.  I thought it was a bit vague how to use this since it asks for a type parameter – but since my output will be an anonymous type, what would my type parameter be?

Well, the best way that I came up with in a short period of time was to define a dummy anonymous type and pass it to the JsonConvert method.  It’s a little bit of overhead and an extra line of code, but it does work.

 

string myJson = "[{id: 10, typeID: 4},{id: 100, typeID: 3}]";
var dummyObject = new[] { new { id = 0, typeID = 0 } };
var myObjects = JsonConvert.DeserializeAnonymousType(myJson, dummyObject);
Console.WriteLine(myObjects[0].id);

 

The slightly confusing part was the IntelliSense provided by JSON.NET:

 

(screenshot: JSON.NET’s IntelliSense for DeserializeAnonymousType)


Seeing the ‘T’ type parameter makes you think you need a call to typeof() or something similar.  But in the case of deserializing to an anonymous type, you just need an instance of the anonymous type.

I thought I may have been missing something, but a little Bing research showed that there were similar approaches taken (see here and here).

My Two DEVLINK 2011 Talks Uploaded

After an incredible trip to Chattanooga, TN (my first trip to Tennessee, by the way), I finally made it back home to Pittsburgh and found some time to upload my two DEVLINK talks (slides and code).  I tried something a little different this time by adding my code to GitHub and my slides to SlideShare.  We’ll see how that works out – but I’m optimistic.  So without further ado, I present to you my 2011 DEVLINK talks.

 

Talk #1: Greenfield Development with CQRS (and Azure, and MVC, and a bunch of other stuff….)

Admittedly a bit of an overly ambitious topic, but the talk went well.  I did not do a very good job of managing time, mostly because the discussions in the session were that engaging and interesting.  Still, I think it introduced the basics of a CQRS application hosted in Azure.

GitHub Repo: https://github.com/DavidHoerster/BuzzyGo

Talk #2: jQuery and OData – Perfect Together

I really enjoyed this talk.  I’ve given WCF Data Services / OData talks before, but this one had a great dynamic in the room – the questions were great, the interaction was awesome.  Overall, I had a great time.

GitHub Repo: https://github.com/DavidHoerster/AgileBaseball

Again, thanks to everyone that attended my talks.  I had a great time and I’m looking forward to next year’s DEVLINK!!

HTML / CSS / JavaScript – We Need An Acronym

I’ve been a big fan of client-side development using HTML / CSS / JavaScript for about the past 2 or 3 years – really ever since I dug into jQuery and began to appreciate the power that the client-side technologies possess.  Nowadays, everything is HTML / CSS / JavaScript – it’s in all the articles, all the techies are talking about it, and it’s on everyone’s resume.

In looking at candidate resumes or listening to presenters speak about web technologies, we constantly hear “HTML / CSS / JavaScript”.  They’re presented in different orders, but regardless, it’s still quite a mouthful to say or read over and over.  It’s 10 syllables – way too much for these OCD days.  And since we live in a 140-character world, we need an acronym or a shortening for this term.  I’ve given this much thought and have arrived at my contribution to this serious, serious problem.

 

JACSHT

 

That’s right.  We’ll just take the first two characters of JAvascript, CSs, and HTml, and voila!  A new word!  I mean, we have:

  • AJAX (asynchronous JavaScript and XML – way too long and it’s not even XML all the time anymore!);
  • JSON (JavaScript Object Notation – and is it JAY-son or jay-SOHN?)
  • and so on.

So how could this work?  I think this could lend so much truth to many conversations:

 

HIRING MANAGER: So, do you know JACSHT?

CANDIDATE: No, sir, I don’t know JACSHT.

HIRING MANAGER: How can you expect me to employ you if you don’t know JACSHT?  The whole world is moving towards JACSHT.

CANDIDATE: Sir, I wish I knew as much JACSHT as you.

 

DEV1: I just got back from BUILD and now I know all about JACSHT.

DEV2: Oooh, I’m not doing JACSHT in my job.  I wonder if anyone will care if I’m not doing JACSHT?

 

See how easy this is?  And it really conveys a ton of meaning when a DEV doesn’t know JACSHT.

 

So I’m proposing that we begin to refer to the client side technologies of HTML, CSS and JavaScript as JACSHT.  It’s easy to say (heck, even a little funny) and communicates effectively when a DEV doesn’t understand that technology.

Creating a PHP MD5() Extension Method for C#

In the project that I’m working on (let’s call it BrainCredits v2.0), I wanted to integrate a user’s Gravatar into the system (like how StackOverflow does it).

To Gravatar’s credit, they make it very easy to incorporate a user’s image – just append a hash of the user’s email address (which I have) to the end of their URL and you’ll get the image back.  The email address just has to be hashed using the MD5 algorithm, and they provide examples of how to do that in PHP.  Oh, it looks so easy:

echo md5( "MyEmailAddress@example.com" );

It’s a one-liner in PHP – just call md5() passing in the string you want to hash and you’ll get your hash back.  There must be a similar method in C#, right?  As far as I can tell, there isn’t.  You have to:

  1. instantiate the MD5CryptoServiceProvider,
  2. convert your string to a byte array,
  3. run that byte array through the crypto provider to compute the hash (and get a byte array back)
  4. and instead of just converting that byte array back to a string, you need to run through each byte and convert it via this:
myByte.ToString("x2").ToLower()

Now you have your string.  Lots of work, especially compared to PHP.  Perhaps this is why so many developers go towards PHP?  Anyway, I need to generate this hash, and decided to create an Extension Method for it.  My full code looks like this:

using System.Text;

namespace System
{
    public static class StringExtensions
    {
        public static String MD5(this String stringToHash)
        {
            var md5 = new System.Security.Cryptography.MD5CryptoServiceProvider();
            byte[] emailBytes = System.Text.Encoding.UTF8.GetBytes(stringToHash.ToLower());
            byte[] hashedEmailBytes = md5.ComputeHash(emailBytes);
            System.Text.StringBuilder sb = new StringBuilder();
            foreach (var b in hashedEmailBytes)
            {
                sb.Append(b.ToString("x2").ToLower());
            }
            return sb.ToString();
        }
    }
}

 

You would call it like this:

String emailToHash = "dhoerster@gmail.com";
String hashedEmail = emailToHash.MD5();

 

Oh, that’s easy!  And it makes sense.
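And to close the loop with Gravatar: once you have the hash, you just append it to the end of their avatar URL, as described above.  A quick sketch (assuming the MD5() extension method above is in scope; the s parameter for image size is optional):

```
String email = "dhoerster@gmail.com";
String gravatarUrl = "https://www.gravatar.com/avatar/" + email.MD5() + "?s=80";
//use gravatarUrl as the src of an <img> tag
```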

Well, I hope this helps someone, and maybe there will be some suggestions on how to improve this.  Good luck!

JSON.NET Custom Converters–A Quick Tour

I have to admit that I’m a basic user when it comes to JSON serialization/deserialization.  I’ve used JSON.NET and the DataContractJsonSerializer.  I’ve read that JSON.NET is faster and more efficient than the built-in .NET serializer, but I haven’t had to build a system that is dependent on squeezing microseconds out of my serialization routines.  That said, I do prefer JSON.NET because it is more flexible when it comes to using DataContractAttribute and DataMemberAttribute for customizing your JSON output.

So I came across an interesting question on StackOverflow today, asking how a JSON string like:

{'one': 1, 'two': 2, 'three': 3, 'four': 4, 'blah': 100}

would have its “one” and “two” properties deserialized to an object’s One and Two properties (easy) and anything else in the JSON string dumped into a Dictionary<string,object> (hmmm…not so easy).  So the resulting object would look like:

Mapped mappedObj = { 
  One = 1; 
  Two = 2; 
  TheRest = [{three=3}, {four=4}, {blah=100}]; 
}

I’ve read about custom JSON.NET converters, but had never written one.  So I decided to give it a shot and discovered that it’s really not too bad.  Here’s my sample code:

using System;
using System.Collections.Generic;
using System.Linq;

using Newtonsoft.Json;
using Newtonsoft.Json.Converters;
using System.Reflection;

namespace JsonConverterTest1
{
    public class Mapped
    {
        private Dictionary<string, object> _theRest = new Dictionary<string, object>();
        public int One { get; set; }
        public int Two { get; set; }
        public Dictionary<string, object> TheRest { get { return _theRest; } }
    }

    public class MappedConverter : CustomCreationConverter<Mapped>
    {
        public override Mapped Create(Type objectType)
        {
            return new Mapped();
        }

        public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
        {
            var mappedObj = new Mapped();
            //get an array of the object's props so I can check if the JSON prop s/b mapped to it
            var objProps = objectType.GetProperties().Select(p => p.Name.ToLower()).ToArray();

            //loop through my JSON string
            while (reader.Read())
            {
                //if I'm at a property...
                if (reader.TokenType == JsonToken.PropertyName)
                {
                    //convert the property to lower case
                    string readerValue = reader.Value.ToString().ToLower();
                    if (reader.Read())  //read in the prop value
                    {
                        //is this a mapped prop?
                        if (objProps.Contains(readerValue))
                        {
                            //get the property info and set the Mapped object's property value
                            PropertyInfo pi = mappedObj.GetType().GetProperty(readerValue, BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance);
                            var convertedValue = Convert.ChangeType(reader.Value, pi.PropertyType);
                            pi.SetValue(mappedObj, convertedValue, null);
                        }
                        else
                        {
                            //otherwise, stuff it into the Dictionary
                            mappedObj.TheRest.Add(readerValue, reader.Value);
                        }
                    }
                }
            }
            return mappedObj;
        }
    }

    public class Program
    {
        static void Main(string[] args)
        {
            //a sample JSON string to deserialize
            string json = "{'one':1, 'two':2, 'three':3, 'four':4}";

            //call DeserializeObject, passing in my custom converter
            Mapped mappedObj = JsonConvert.DeserializeObject<Mapped>(json, new MappedConverter());

            //output some of the properties that were stuffed into the Dictionary
            Console.WriteLine(mappedObj.TheRest["three"].ToString());
            Console.WriteLine(mappedObj.TheRest["four"].ToString());
        }
    }
}

It’s pretty simple to create a custom converter and it’s almost limitless as to what you can do with it.

Of course, my sample code above is pretty simple and doesn’t take into account arrays or nested objects in the JSON string; but, that can be accounted for by using the JsonToken enumeration (which I do above in detecting a property) and checking for the start of a nested object or an array.
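As a sketch of that idea (this isn’t in the sample above – readerValue is the property name just read, as in the converter’s loop), a nested object or array could be swallowed whole with JToken.ReadFrom and stuffed into the Dictionary:

```
//inside the while loop, after reading past the property name...
if (reader.TokenType == JsonToken.StartObject || reader.TokenType == JsonToken.StartArray)
{
    //JToken.ReadFrom consumes the entire nested structure in one call
    var nested = Newtonsoft.Json.Linq.JToken.ReadFrom(reader);
    mappedObj.TheRest.Add(readerValue, nested);
}
```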

I found this an interesting exercise, and it gave me an opportunity to take a tour of a feature in JSON.NET that I’d read about but never used.  I hope you find it interesting.

Things I Didn’t Know – Stacking Your Using Statements

One of the things that I love about the .NET framework is that I am constantly learning new things about it, finding new jewels that save me time, and just some really interesting bits.  This latest item I found falls into the “interesting bits” category.

Probably like many of you, I’ve written a number of console applications to do some tasks quickly and with little fanfare.  Usually these tasks are one-time tasks, and the console application is almost a throw-away project once I’m done with it.  Because of this, I usually attempt to get things done quickly and sometimes my code isn’t the neatest – shocking, I know!

Well, for this one task, I had to read information from a CSV file, write a status to a TXT file and also write some updates to our data layer.  These tasks involve the use of IDisposable objects and, as a result, I usually wrap them in using blocks.

Here’s what my code usually looks like:

using (StreamWriter sw = new StreamWriter(statusFile, true))
{
    using (StreamReader sr = new StreamReader(filename, Encoding.Default))
    {
        using (CsvReader csv = new CsvReader(sr, true))
        {
            //some work is done here
        }
    }
}

 

So not too bad, but most of the ‘real’ code in my project ends up pushed toward the middle of the editor due to all of the indenting.

I can stack my using statements, instead, on top of each other, and have them all be contained with a single block, like so:

using (StreamWriter sw = new StreamWriter(statusFile, true))
using (StreamReader sr = new StreamReader(filename, Encoding.Default))
using (CsvReader csv = new CsvReader(sr, true))
{
    //some work is done here
}

 

Much cleaner, much more condensed, a little easier to read (IMHO) and it saves a few lines.  It’s nice when you find out something new in a language you’ve been using for years.  I’m a little embarrassed that I didn’t know you could do this, but I’m glad I know now.
