Tim Murphy's .NET Software Architecture Blog

May 2016 Entries

A TFS Developer In A GitHub World

Git and GitHub have been around for a few years now.  They are becoming more popular by the day.  I finally got around to looking at them more closely over the last few months and decided to summarize my experiences.

My first experience with GitHub was not the most pleasant.  I was using Visual Studio 2013, which doesn’t seem to have the best integration story (or at least didn’t when I tried it).  The biggest pain was that an existing repository had to be cloned via the GitHub Desktop client before Visual Studio knew anything about it.

Visual Studio 2015, on the other hand, has a much better usage story.  You are able to log into GitHub and clone a repository without breaking out to another tool.  From that point on the commit process is pretty similar to that of TFS.

From my trials I have found that GitHub works well as a source control repository.  It is hard at first to get used to the non-Microsoft verbs.  You have to retrain yourself to do a commit and then a push before something is actually checked in, instead of just doing a single check-in as you would in TFS.

As for working as a team, I think that TFS still has the better features.  This may just be because GitHub isn’t as tightly integrated with Visual Studio.  Having customizable work items in TFS comes in very handy, especially on larger enterprise projects.

The GitHub wiki is a good place to put documentation, but it doesn’t give you a place to manage the Word, Excel, PowerPoint, PDF and Visio documents that might hold important information about your project.  This is where TFS and SharePoint really shine.

Another drawback compared to TFS is that GitHub repositories are public and can’t be made private unless you have a paid account.  I can create a free TFS Online account that gives me private repositories and access for up to five users.  This makes TFS the better fit for the individual developer or the small team.

Of course there is a third option: you can use Git in TFS.  This gives you the source control of Git with the project management features of TFS.  It took a little while to get my existing code into the new Git-TFS project repository.  Then came the realization that the only source control viewer for Git repositories is in the web portal.  The growing pains continue.


I am sure that the story around GitHub will improve over time, but right now it seems like people are using it because it is what the cool kids are doing or because they are working on open source projects.  If I have to advise a client I am going to suggest the product with the most complete story and the best integration with their current toolset.  For now that is TFS, especially if you are a Microsoft development shop.

As for my GitHub experiment, it goes on, but I deleted the repository I had created for security reasons.  Stay tuned and see what else develops.  The next step is probably Git in TFS.

Increase Cloud Application Responsiveness With Azure Queues and WebJobs

This post is based on the presentation I gave at Cloud Saturday Chicago.

In a mobile-first world, services need to be able to process high volumes of traffic as quickly as possible. With the certification and deployment process that native mobile apps have to go through being long and sometimes uncertain, we find ourselves looking for ways to make improvements without changing the device code. The flexibility of Azure and other cloud platforms gives developers the ability to implement changes to solutions that can be deployed quickly and backed by nearly infinite resources.

The solutions described here can help the throughput of your web applications just as easily as your mobile apps. This article will demonstrate how Azure WebJobs and Queues can improve application responsiveness by deferring processing.

Let’s start with what a typical mobile app infrastructure design might look like. We have a mobile app which sends data to a web service to be processed. That service may have a whole workflow of operations to perform on the data once it is received, and when the processing is complete the data is stored in an Azure SQL Database.  The problem is that you don't want the mobile app to sit and wait while the processing occurs, as this can cause timeouts or delays in the UI.
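To make that bottleneck concrete, a simplified Web API action for the synchronous design might look like the sketch below. OrdersController, OrderRequest, RunWorkflow and SaveToDatabase are hypothetical placeholders standing in for whatever workflow and data access code your service already has.

using System.Web.Http;

public class OrdersController : ApiController
{
    // Hypothetical synchronous version: the mobile app waits for the
    // entire workflow and the Azure SQL insert before it gets a response.
    public IHttpActionResult Post(OrderRequest request)
    {
        var processed = RunWorkflow(request);   // potentially long-running workflow steps
        SaveToDatabase(processed);              // insert into Azure SQL Database
        return Ok();                            // response only goes back after all of the above
    }
}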


To reduce the delays we can make a couple of minor changes using features of Azure. We remove the main workload from the web service so that it simply puts the incoming message on an Azure queue. The work that used to be in the service moves to a continuous Azure WebJob along with code to read from the queue. This gives the service the ability to return an acknowledgement to the mobile app almost immediately. The WebJob can pull from the queue at its own pace, and since we are in the cloud we can easily add new instances of the WebJob to scale out if needed.
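Under those assumptions, the same hypothetical action shrinks to little more than the queue insert shown in the code section below, returning an acknowledgement right away (EnqueueRequest is a made-up wrapper around that queue code):

using System.Net;
using System.Web.Http;

public class OrdersController : ApiController
{
    // Hypothetical asynchronous version: the request is pushed onto the
    // Azure queue and the app gets an immediate acknowledgement while the
    // WebJob does the heavy lifting.
    public IHttpActionResult Post(OrderRequest request)
    {
        EnqueueRequest(request);                      // wraps the CloudQueue insert shown below
        return StatusCode(HttpStatusCode.Accepted);   // acknowledge almost immediately
    }
}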


What are the actual performance differences? That will depend greatly on how much work your service was doing to begin with. If it was only a simple table insert there may not be a significant improvement, or possibly even a small loss due to the serialization to the queue. If you have to reach out to several different resources or perform a string of operations, this approach offloads the real work.

The Code

The first thing that we need to do is add the code to insert into the queue from the service.

// Requires: System.Configuration, Microsoft.WindowsAzure.Storage,
// Microsoft.WindowsAzure.Storage.Queue and Newtonsoft.Json
string connectionString = ConfigurationManager.ConnectionStrings["AzureWebJobsStorage"].ToString();

CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
CloudQueue queue = queueClient.GetQueueReference("yourrequestqueue");
queue.CreateIfNotExists();

// Serialize the incoming request and place it on the queue
var tempMessage = JsonConvert.SerializeObject(request);
CloudQueueMessage message = new CloudQueueMessage(tempMessage);
queue.AddMessage(message);


For this code to work we need to set up configuration connection strings for the storage account that will contain the queue.  To get the keys for the storage account, open the Azure portal and go to [your storage account] –> Keys.


Below are the connection string entries that you should use.

<add name="AzureWebJobsDashboard" connectionString="DefaultEndpointsProtocol=https;AccountName=yourstore;AccountKey=your secondary access key" />
<add name="AzureWebJobsStorage" connectionString="DefaultEndpointsProtocol=https;AccountName=yourstore;AccountKey=your secondary access key" />

Now we add a new Azure WebJob project to our solution.  You will find the Azure WebJob template under the Cloud template section.


Once we have the project, add the following code to a method in the Functions class, specifying the name of the queue in the QueueTrigger parameter.  In this instance I have used the JavaScriptSerializer to deserialize the message from the queue.  You can then pass the object on to the processing methods as you originally had in the service.

// JavaScriptSerializer lives in System.Web.Script.Serialization (System.Web.Extensions).
// The queue name in QueueTrigger must match the queue the service writes to.
public static void ProcessQueueMessage([QueueTrigger("yourrequestqueue")] string message, TextWriter log)
{
    var serializer = new JavaScriptSerializer();
    YourType tempRequest = serializer.Deserialize<YourType>(message);
    // Do your work here
}
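For reference, the WebJob project template also generates a Program class similar to the sketch below (WebJobs SDK 1.x).  RunAndBlock keeps the continuous WebJob alive so the SDK can invoke ProcessQueueMessage whenever a message lands on the queue.

using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();
        var host = new JobHost(config);

        // Runs continuously, polling the queue named in the QueueTrigger attribute.
        host.RunAndBlock();
    }
}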

Once the storage key is added to the connection strings for the WebJob, as we did for the service, we are ready to deploy and test.

Deploying the WebJob is fairly straightforward.  If you zip up the contents of your bin folder, including the config file, you can upload it through the App Service –> [Your App Service] –> Settings –> WebJobs blade of the Azure portal.


Give the WebJob a name, leave the other settings at their defaults, select the file for your zipped bin folder and you are all set to go.  I will cover debugging in a future post.


Creating WebJobs and manipulating Azure queues is not rocket surgery.  If you have kept your code as loosely coupled as possible then moving to a continuous WebJob solution will be fairly quick and painless.  With just a little effort your application will seem more responsive and will be able to handle much larger request loads.