Shaun Xu

The Sheep-Pen of the Shaun



Shaun, the author of this blog, is a semi-geek, clumsy developer, passionate speaker and incapable architect with about 10 years' experience in .NET. He hopes to prove that software development is art rather than manufacturing. He's into cloud computing platforms and technologies (Windows Azure, Aliyun) as well as WCF and ASP.NET MVC. Recently he has fallen in love with JavaScript and Node.js.

Currently Shaun is working at IGT Technology Development (Beijing) Co., Ltd. as the architect responsible for product framework design and development.



There are many advantages to building our own proxy server in the cloud. For instance, in Microsoft Azure the price is pay-as-you-go, which means we only pay when we need the proxy server and have it turned on. Second, it's very easy to scale up and down. If the proxy is used only by myself, I can create a minimal virtual machine with a small CPU, memory, disk and network bandwidth, but I can scale it up when needed, for example when I want to watch World Cup videos. Last, there are many Azure data centers around the world. This means we can create a proxy server in the US, Europe, Hong Kong, Japan, Brazil, etc.


Creating a proxy server in Microsoft Azure is very easy. First of all we need to create a virtual machine in Microsoft Azure. In this case I'm going to use Ubuntu.

Screen Shot 2014-06-28 at 22.00.50

Next, specify the name, size and authentication of the machine. Since this proxy server will be used only by myself, I specified a small size, which should be fine for viewing web pages. And to keep it simple I created a user with a password rather than uploading a certificate for authentication.

Screen Shot 2014-06-28 at 22.01.24

Next, we need to select a region where our proxy machine will be hosted. In the screenshot below we can see there are 11 data centers around the world, and if you have an account in Azure China there will be two more, Beijing and Shanghai. I selected Japan West, which is close to me.

Then I need to specify an endpoint for the proxy. Just create a new endpoint at the bottom of this page and specify any port number you like, with the TCP protocol.

I recommend specifying a port number larger than 1024, since on Linux you must use "sudo" to start an application listening on a port below 1024.

Screen Shot 2014-06-28 at 22.02.32

Now let's click OK to provision our virtual machine. After several minutes, when the machine is ready, go to its details page and copy its public IP address. This is our proxy address and it will NOT change until we stop the machine.

Screen Shot 2014-06-28 at 22.06.30

Next, we need to log in to our virtual machine and install the proxy software. On Windows we can use PuTTY, an SSH and telnet client. On Mac or Linux there is a built-in SSH command line; we just need to type the command as below. The syntax is "ssh [your login]@[virtual machine public IP]", using the login I specified when I created the virtual machine and the public IP I copied previously.

Screen Shot 2014-06-28 at 22.06.58

Type the password I specified when I created the virtual machine and now we are logged into Ubuntu in Azure. Now I'm going to install the proxy software Squid through "apt-get".

    sudo apt-get install squid

After Squid is installed we will modify its configuration. Go to the configuration folder, back up the original file and create an empty configuration file, then launch "vim" to edit it. Just follow the commands below.

    cd /etc/squid3
    sudo cp squid.conf squid.conf.bak
    sudo rm squid.conf
    sudo touch squid.conf
    sudo vim squid.conf

Then in "vim" I will use the simplest configuration, which allows all clients to connect and all destinations to communicate, and specifies the port Squid listens on, which must be the same one we specified when we created the machine. Then save it.

If you are not familiar with "vim", you need to type "a" to enter the append mode and paste the configuration below. Then press ESC to go back to the command mode and type ":wq" to save and quit.

    http_access allow all
    http_port 21777

Next, restart the Squid service to apply our configuration.

    sudo service squid3 restart

Then you will see the process ID of Squid.

Screen Shot 2014-06-28 at 22.11.45

To test our proxy, just go back to my laptop and connect to the proxy endpoint through "telnet" as below.

Screen Shot 2014-06-28 at 22.12.14


If you see the message in the terminal as above, your proxy is up and running. If you are using Chrome there is an awesome extension for smart proxy configuration named SwitchySharp. In the screenshot below I pointed the proxy setting to my server in Azure Japan: just copy the virtual machine's public IP as the HTTP proxy and the Squid port as the proxy port.

Screen Shot 2014-06-28 at 22.13.18

Below is the IP detection result. As you can see I now appear to be in Japan, on the Microsoft network.

Screen Shot 2014-06-28 at 22.35.02


Hope this helps,


All documents and related graphics, codes are provided "AS IS" without warranty of any kind.
Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

Microsoft announced ASP.NET vNext at BUILD and TechEd recently, and as a developer I found that we can add features such as MVC, WebAPI and SignalR into one ASP.NET vNext application. It's also cross-platform, which means I can host ASP.NET on Windows, Linux and OS X.


If you follow my blog you will know that I'm currently working on a project which uses ASP.NET WebAPI, SignalR and AngularJS. Currently the AngularJS part is hosted by Express in Node.js while WebAPI and SignalR are hosted in ASP.NET. I was looking for a solution to host all of them on one platform so that my SignalR can utilize WebSocket.

Currently AngularJS and SignalR are hosted on the same domain but different ports, so SignalR has to use Server-Sent Events. It can be upgraded to WebSocket if I host both of them on the same port.


Host AngularJS in ASP.NET vNext Static File Middleware

ASP.NET vNext uses the middleware pattern to register the features it needs, which is very similar to Express in Node.js. Since AngularJS is a pure client-side framework, in theory all I need to do is use ASP.NET vNext as a static file server. This is very easy as there's a built-in middleware shipped along with ASP.NET vNext.

Assuming I have "index.html" as below.

    <html data-ng-app="demo">
        <head>
            <script type="text/javascript" src="angular.js"></script>
            <script type="text/javascript" src="angular-ui-router.js"></script>
            <script type="text/javascript" src="app.js"></script>
        </head>
        <body>
            <h1>ASP.NET vNext with AngularJS</h1>
            <div>
                <a href="javascript:void(0)" data-ui-sref="view1">View 1</a> | 
                <a href="javascript:void(0)" data-ui-sref="view2">View 2</a>
            </div>
            <div data-ui-view></div>
        </body>
    </html>

And the AngularJS JavaScript file is below. Notice that I have two views, each of which contains only one line of literal text indicating the view name.

    'use strict';

    var app = angular.module('demo', ['ui.router']);

    app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) {
        $stateProvider.state('view1', {
            url: '/view1',
            templateUrl: 'view1.html',
            controller: 'View1Ctrl' });

        $stateProvider.state('view2', {
            url: '/view2',
            templateUrl: 'view2.html',
            controller: 'View2Ctrl' });
    }]);

    app.controller('View1Ctrl', function ($scope) {
    });

    app.controller('View2Ctrl', function ($scope) {
    });

All AngularJS files are located in the "app" folder and my ASP.NET vNext files sit beside it. The "project.json" contains all the dependencies I need to host a static file server.

    {
        "dependencies": {
            "Helios" : "0.1-alpha-*",
            "Microsoft.AspNet.FileSystems": "0.1-alpha-*",
            "Microsoft.AspNet.Http": "0.1-alpha-*",
            "Microsoft.AspNet.StaticFiles": "0.1-alpha-*",
            "Microsoft.AspNet.Hosting": "0.1-alpha-*",
            "Microsoft.AspNet.Server.WebListener": "0.1-alpha-*"
        },
        "commands": {
            "web": "Microsoft.AspNet.Hosting server=Microsoft.AspNet.Server.WebListener server.urls=http://localhost:22222"
        },
        "configurations" : {
            "net45" : {
            },
            "k10" : {
                "System.Diagnostics.Contracts": "",
                "System.Security.Claims" :  "0.1-alpha-*"
            }
        }
    }

Below is "Startup.cs", which is the entry point of my ASP.NET vNext application. What I need to do is let my application use the file server middleware.

    using System;
    using Microsoft.AspNet.Builder;
    using Microsoft.AspNet.FileSystems;
    using Microsoft.AspNet.StaticFiles;

    namespace Shaun.AspNet.Plugins.AngularServer.Demo
    {
        public class Startup
        {
            public void Configure(IBuilder app)
            {
                app.UseFileServer(new FileServerOptions() {
                    EnableDirectoryBrowsing = true,
                    FileSystem = new PhysicalFileSystem(System.IO.Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "app"))
                });
            }
        }
    }

Next, I need to create a "NuGet.Config" file in the PARENT folder so that when I run the "kpm restore" command later it can find the ASP.NET vNext NuGet packages successfully.

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <packageSources>
        <add key="AspNetVNext" value="" />
        <add key="" value="" />
      </packageSources>
      <packageSourceCredentials>
        <AspNetVNext>
          <add key="Username" value="aspnetreadonly" />
          <add key="ClearTextPassword" value="4d8a2d9c-7b80-4162-9978-47e918c9658c" />
        </AspNetVNext>
      </packageSourceCredentials>
    </configuration>

Now I need to run "kpm restore" to resolve all dependencies of my application.


Finally, use "k web" to start the application, which will serve static files from the "app" subfolder on local port 22222.



Support AngularJS Html5Mode

AngularJS works well in the previous demo. But you will notice that there is a "#" in the browser address. This is because by default AngularJS adds "#" after its entry page to ensure all requests will be handled by that entry page.

For example, in this case my entry page is "index.html", so when I clicked "View 1" in the page the address changed to "/#/view1", which still tells the web server I'm looking for "index.html".

This works, but makes the address look ugly. Hence AngularJS introduces a feature called Html5Mode, which gets rid of the annoying "#" in the address bar. Below is "app.js" with Html5Mode enabled; it needs just one line of code.

    'use strict';

    var app = angular.module('demo', ['ui.router']);

    app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) {
        $stateProvider.state('view1', {
            url: '/view1',
            templateUrl: 'view1.html',
            controller: 'View1Ctrl' });

        $stateProvider.state('view2', {
            url: '/view2',
            templateUrl: 'view2.html',
            controller: 'View2Ctrl' });

        // enable html5mode
        $locationProvider.html5Mode(true);
    }]);

    app.controller('View1Ctrl', function ($scope) {
    });

    app.controller('View2Ctrl', function ($scope) {
    });

Then let's go to the root path of our website and click "View 1"; you will see there's no "#" in the address.


But the problem is, if we hit F5 the browser turns blank. This is because in this mode the browser tells the web server it wants a static file named "view1", but there's no such file on the server. So our underlying web server, which is built on ASP.NET vNext, responds 404.


To fix this problem we need to create our own ASP.NET vNext middleware. What it needs to do is first try to serve the static file request with the default StaticFileMiddleware. If the response status code is 404, change the request path to the entry page and try again.

    public class AngularServerMiddleware
    {
        private readonly AngularServerOptions _options;
        private readonly RequestDelegate _next;
        private readonly StaticFileMiddleware _innerMiddleware;

        public AngularServerMiddleware(RequestDelegate next, AngularServerOptions options)
        {
            _next = next;
            _options = options;

            _innerMiddleware = new StaticFileMiddleware(next, options.FileServerOptions.StaticFileOptions);
        }

        public async Task Invoke(HttpContext context)
        {
            // try to resolve the request with default static file middleware
            await _innerMiddleware.Invoke(context);
            Console.WriteLine(context.Request.Path + ": " + context.Response.StatusCode);
            // route to root path if the status code is 404
            // and need support angular html5mode
            if (context.Response.StatusCode == 404 && _options.Html5Mode)
            {
                context.Request.Path = _options.EntryPath;
                await _innerMiddleware.Invoke(context);
                Console.WriteLine(">> " + context.Request.Path + ": " + context.Response.StatusCode);
            }
        }
    }

We need an options class where the user can specify the host root path and the entry page path.

    public class AngularServerOptions
    {
        public FileServerOptions FileServerOptions { get; set; }

        public PathString EntryPath { get; set; }

        public bool Html5Mode
        {
            get
            {
                return EntryPath.HasValue;
            }
        }

        public AngularServerOptions()
        {
            FileServerOptions = new FileServerOptions();
            EntryPath = PathString.Empty;
        }
    }

We also need an extension method so that the user can enable this feature in "Startup.cs" easily.

    public static class AngularServerExtension
    {
        public static IBuilder UseAngularServer(this IBuilder builder, string rootPath, string entryPath)
        {
            var options = new AngularServerOptions()
            {
                FileServerOptions = new FileServerOptions()
                {
                    EnableDirectoryBrowsing = false,
                    FileSystem = new PhysicalFileSystem(System.IO.Path.Combine(AppDomain.CurrentDomain.BaseDirectory, rootPath))
                },
                EntryPath = new PathString(entryPath)
            };

            builder.UseDefaultFiles(options.FileServerOptions.DefaultFilesOptions);

            return builder.Use(next => new AngularServerMiddleware(next, options).Invoke);
        }
    }

Now with these classes ready we will change our "Startup.cs" to use this middleware in place of the default one, telling the server to try to load the "index.html" file whenever it cannot find the requested resource.

The code below is just for demo purposes. I simply tried to load "index.html" in all cases once the StaticFileMiddleware returned 404. In fact we need validation to make sure the request is an AngularJS route request instead of a normal static file request.

    using System;
    using Microsoft.AspNet.Builder;
    using Microsoft.AspNet.FileSystems;
    using Microsoft.AspNet.StaticFiles;
    using Shaun.AspNet.Plugins.AngularServer;

    namespace Shaun.AspNet.Plugins.AngularServer.Demo
    {
        public class Startup
        {
            public void Configure(IBuilder app)
            {
                app.UseAngularServer("app", "/index.html");
            }
        }
    }

Now let's run "k web" again and refresh the browser, and we can see the page loads successfully.


In the console window we can see the original request got a 404, then we tried "index.html" and returned the correct result.




In this post I introduced how to use ASP.NET vNext to host an AngularJS application as a static file server. I also demonstrated how to extend ASP.NET vNext so that it supports AngularJS Html5Mode.

You can download the source code here.


Hope this helps,



If we are using SignalR, the connection lifecycle is handled very well by the library itself. For example, when we connect to a SignalR service from the browser through the SignalR JavaScript client, the connection will be established; and if we refresh the page, close the tab or browser, or navigate to another URL, the connection will be closed automatically. This behavior is well documented here.

In a browser, SignalR client code that maintains a SignalR connection runs in the JavaScript context of a web page. That's why the SignalR connection has to end when you navigate from one page to another, and that's why you have multiple connections with multiple connection IDs if you connect from multiple browser windows or tabs. When the user closes a browser window or tab, or navigates to a new page or refreshes the page, the SignalR connection immediately ends because SignalR client code handles that browser event for you and calls the "Stop" method.

But unfortunately this behavior doesn't work if we are using SignalR with AngularJS. AngularJS is a single page application (SPA) framework created by Google. It hijacks the browser's address change event and, based on the route table the user defined, launches the proper view and controller. Hence in AngularJS the address changes but the web page stays; all changes to the page content are triggered by Ajax. So there are no page unload and load events. This is the reason why SignalR cannot handle disconnects correctly when working with AngularJS.

If we dig into the source code of the SignalR JavaScript client we will find the code below. It monitors the browser page "unload" and "beforeunload" events and sends the "stop" message to the server to terminate the connection. But in AngularJS the page change events are hijacked, so SignalR will not receive them and will not stop the connection.

    // wire the stop handler for when the user leaves the page
    _pageWindow.bind("unload", function () {
        connection.log("Window unloading, stopping the connection.");

        connection.stop(asyncAbort);
    });

    if (isFirefox11OrGreater) {
        // Firefox does not fire cross-domain XHRs in the normal unload handler on tab close.
        // #2400
        _pageWindow.bind("beforeunload", function () {
            // If connection.stop() runs in beforeunload and fails, it will also fail
            // in unload unless connection.stop() runs after a timeout.
            window.setTimeout(function () {
                connection.stop(asyncAbort);
            }, 0);
        });
    }


Reproducing the Problem

In the code below I created a very simple example to demonstrate this issue. Here is the SignalR server-side code.

    public class GreetingHub : Hub
    {
        public override Task OnConnected()
        {
            Debug.WriteLine(string.Format("Connected: {0}", Context.ConnectionId));
            return base.OnConnected();
        }

        public override Task OnDisconnected()
        {
            Debug.WriteLine(string.Format("Disconnected: {0}", Context.ConnectionId));
            return base.OnDisconnected();
        }

        public void Hello(string user)
        {
            Clients.All.hello(string.Format("Hello, {0}!", user));
        }
    }

Below is the configuration code which hosts SignalR hub in an ASP.NET WebAPI project with IIS Express.

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.Map("/signalr", map =>
                {
                    map.UseCors(CorsOptions.AllowAll);
                    map.RunSignalR(new HubConfiguration()
                        {
                            EnableJavaScriptProxies = false
                        });
                });
        }
    }

Since we will host the AngularJS application in Node.js in another process and port, the SignalR connection will be cross-domain, so I needed to enable CORS above.

On the client side I have a Node.js file that hosts the AngularJS application as a web server. You can use any web server you like, such as IIS, Apache, etc.

Below is the "index.html" page, which contains a navigation bar so that I can change the page/state. As you can see I added jQuery, AngularJS and the SignalR JavaScript client library, as well as my AngularJS entry source file "app.js".

    <html data-ng-app="demo">
        <head>
            <script type="text/javascript" src="jquery-2.1.0.js"></script>
            <script type="text/javascript" src="angular.js"></script>
            <script type="text/javascript" src="angular-ui-router.js"></script>
            <script type="text/javascript" src="jquery.signalR-2.0.3.js"></script>
            <script type="text/javascript" src="app.js"></script>
        </head>
        <body>
            <h1>SignalR Auto Disconnect with AngularJS by Shaun</h1>
            <div>
                <a href="javascript:void(0)" data-ui-sref="view1">View 1</a> | 
                <a href="javascript:void(0)" data-ui-sref="view2">View 2</a>
            </div>
            <div data-ui-view></div>
        </body>
    </html>

Below is "app.js". My SignalR logic lives in the "View1" page; it connects to the server once the controller executes. The user can specify a user name and send it to the server, and all clients on this page will receive the server-side greeting message through SignalR.

    'use strict';

    var app = angular.module('demo', ['ui.router']);

    app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) {
        $stateProvider.state('view1', {
            url: '/view1',
            templateUrl: 'view1.html',
            controller: 'View1Ctrl' });

        $stateProvider.state('view2', {
            url: '/view2',
            templateUrl: 'view2.html',
            controller: 'View2Ctrl' });

        $locationProvider.html5Mode(true);
    }]);

    app.value('$', $);
    app.value('endpoint', 'http://localhost:60448');
    app.value('hub', 'GreetingHub');

    app.controller('View1Ctrl', function ($scope, $, endpoint, hub) {
        $scope.user = '';
        $scope.response = '';

        $scope.greeting = function () {
            proxy.invoke('Hello', $scope.user)
                .done(function () {})
                .fail(function (error) {
                    console.log(error);
                });
        };

        var connection = $.hubConnection(endpoint);
        var proxy = connection.createHubProxy(hub);
        proxy.on('hello', function (response) {
            $scope.$apply(function () {
                $scope.response = response;
            });
        });
        connection.start()
            .done(function () {
                console.log('signalr connection established');
            })
            .fail(function (error) {
                console.log(error);
            });
    });

    app.controller('View2Ctrl', function ($scope, $) {
    });

When we go to View1 the server-side "OnConnected" method will be invoked as below.


And when any page sends a message to the server, all clients will get the response.


If we close one of the clients, the server-side "OnDisconnected" method will be invoked, which is correct.


But if we click the "View 2" link in the page, the "OnDisconnected" method will not be invoked even though the content and browser address have changed. This may leave many SignalR connections open between the client and server. Below is what happened after I clicked the "View 1" and "View 2" links four times. As you can see there are 4 live connections.




Since the reason for this issue is that AngularJS hijacks the page events that SignalR needs in order to stop the connection, we can handle the AngularJS route or state change event and stop the SignalR connection manually. In the code below I moved the "connection" variable to global scope, added a handler for "$stateChangeStart" and invoked the "stop" method of "connection" if its state was not "disconnected".

    var connection;
['$rootScope', function ($rootScope) {
        $rootScope.$on('$stateChangeStart', function () {
            if (connection && connection.state && connection.state !== 4 /* disconnected */) {
                console.log('signalr connection abort');
                connection.stop();
            }
        });
    }]);

Now if we refresh the page and navigate to View 1, the connection will be opened. In this state, if we click the "View 2" link the content will change and the SignalR connection will be closed automatically.




In this post I demonstrated an issue when using SignalR with AngularJS: the connection cannot be closed automatically when we navigate to another page/state in AngularJS. The solution I described above is to make the SignalR connection a global variable and close it manually when the AngularJS route/state changes. You can download the full sample code here.

Making the SignalR connection a global variable might not be the best solution; it's just for an easy demo here. In production code I suggest wrapping all SignalR operations into an AngularJS factory. Since an AngularJS factory is a singleton object, we can safely put the connection variable in the factory function scope.
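As a rough sketch of that factory idea (all names below are mine, not from the sample code): wrap the connection in one shared object so controllers never touch the raw connection. The connection factory function is injected, so in the real app it would be something like `function () { return $.hubConnection(endpoint); }`.

```javascript
// Hypothetical sketch of a singleton SignalR connection wrapper.
// createConnection is injected so the wrapper has no framework dependency
// and can be registered as an AngularJS factory (singleton) as-is.
function createSignalRService(createConnection) {
    var connection = null;
    return {
        start: function () {
            // create the connection lazily, once, and share it
            if (!connection) {
                connection = createConnection();
            }
            return connection.start();
        },
        stopIfConnected: function () {
            // 4 is the SignalR "disconnected" state
            if (connection && connection.state !== 4) {
                connection.stop();
            }
        }
    };
}

// In AngularJS this could be registered once, e.g.:
// app.factory('signalrSvc', function () {
//     return createSignalRService(function () {
//         return $.hubConnection('http://localhost:60448');
//     });
// });
// and the $stateChangeStart handler would call signalrSvc.stopIfConnected().
```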


Hope this helps,



Currently I'm working on a single page application project which is built on AngularJS and ASP.NET WebAPI. When I needed to implement some features that require real-time communication and push notifications from the server side, I decided to use SignalR.

SignalR is a project developed by Microsoft to build web-based, real-time communication applications. You can find it here. With a lot of introductions and guides available, it's not a difficult task to use SignalR with ASP.NET WebAPI and AngularJS. I followed this and this, even though they are based on SignalR 1.

But when I tried to implement authentication for my SignalR I struggled for 2 days and finally came up with a solution by myself. This might not be the best one, but it actually solved all my problems.


Many articles say that you don't need to worry about authentication for SignalR since it uses the web application's authentication. For example, if your web application utilizes forms authentication, SignalR will use the user principal that your web application's authentication module resolved, and check whether the principal exists and is authenticated. But in my solution my ASP.NET WebAPI, which hosts SignalR as well, utilizes OAuth Bearer authentication. So when the SignalR connection was established the context user principal was empty, and I needed to authenticate and pass the principal by myself.


First I need to create a class derived from "AuthorizeAttribute" that takes responsibility for authentication when a SignalR connection is established and when any method is invoked.

    public class QueryStringBearerAuthorizeAttribute : AuthorizeAttribute
    {
        public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
        {
        }

        public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
        {
        }
    }

The method "AuthorizeHubConnection" will be invoked whenever a SignalR connection is established. Here I'm going to retrieve the Bearer token from the query string, try to decrypt it and recover the logged-in user's claims.

    public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
    {
        var dataProtectionProvider = new DpapiDataProtectionProvider();
        var secureDataFormat = new TicketDataFormat(dataProtectionProvider.Create());
        // authenticate by using bearer token in query string
        var token = request.QueryString.Get(WebApiConfig.AuthenticationType);
        var ticket = secureDataFormat.Unprotect(token);
        if (ticket != null && ticket.Identity != null && ticket.Identity.IsAuthenticated)
        {
            // set the authenticated user principal into environment so that it can be used in the future
            request.Environment["server.User"] = new ClaimsPrincipal(ticket.Identity);
            return true;
        }
        else
        {
            return false;
        }
    }

In the code above I created a "TicketDataFormat" instance, which must be the same as the one I used to generate the Bearer token when the user logged in. Then I retrieve the token from the request query string and unprotect it. If I get a valid ticket with an identity that is authenticated, the token is valid. Then I put the user principal into the request's environment property, where it can be used shortly afterwards.

Since my website is built in AngularJS, the SignalR client is pure JavaScript, and it's not possible to set custom HTTP headers in the SignalR JavaScript client, so I have to pass the Bearer token through the request query string.

This is not a restriction of SignalR, but a restriction of WebSocket. For security reasons WebSocket doesn't allow the client to set custom HTTP headers from the browser.

Next, I need to implement the authorization logic in the method "AuthorizeHubMethodInvocation", which will be invoked whenever a SignalR method is invoked.

    public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
    {
        var connectionId = hubIncomingInvokerContext.Hub.Context.ConnectionId;
        // check the authenticated user principal from environment
        var environment = hubIncomingInvokerContext.Hub.Context.Request.Environment;
        var principal = environment["server.User"] as ClaimsPrincipal;
        if (principal != null && principal.Identity != null && principal.Identity.IsAuthenticated)
        {
            // create a new HubCallerContext instance with the principal generated from token
            // and replace the current context so that in hubs we can retrieve current user identity
            hubIncomingInvokerContext.Hub.Context = new HubCallerContext(new ServerRequest(environment), connectionId);
            return true;
        }
        else
        {
            return false;
        }
    }

Since I passed the user principal into the request environment in the previous method, I can simply check whether it exists and is valid. If so, all I need to do is pass the principal into the context so that the SignalR hub can use it. Since the "User" property of "hubIncomingInvokerContext" is read-only, I have to create a new "ServerRequest" instance with the principal assigned and set it to "hubIncomingInvokerContext.Hub.Context". After that, we can retrieve the principal in my hubs through "Context.User" as below.

public class DefaultHub : Hub
{
    public object Initialize(string host, string service, JObject payload)
    {
        var connectionId = Context.ConnectionId;
        ... ...
        var domain = string.Empty;
        var identity = Context.User.Identity as ClaimsIdentity;
        if (identity != null)
        {
            var claim = identity.FindFirst("Domain");
            if (claim != null)
            {
                domain = claim.Value;
            }
        }
        ... ...
    }
}

Finally I just need to add my "QueryStringBearerAuthorizeAttribute" into the SignalR pipeline.

app.Map("/signalr", map =>
    {
        // Setup the CORS middleware to run before SignalR.
        // By default this will allow all origins. You can
        // configure the set of origins and/or http verbs by
        // providing a cors options with a different policy.
        map.UseCors(CorsOptions.AllowAll);
        var hubConfiguration = new HubConfiguration
        {
            // You can enable JSONP by uncommenting line below.
            // JSONP requests are insecure but some older browsers (and some
            // versions of IE) require JSONP to work cross domain
            // EnableJSONP = true
            EnableJavaScriptProxies = false
        };
        // Require authentication for all hubs
        var authorizer = new QueryStringBearerAuthorizeAttribute();
        var module = new AuthorizeModule(authorizer, authorizer);
        GlobalHost.HubPipeline.AddModule(module);
        // Run the SignalR pipeline. We're not using MapSignalR
        // since this branch already runs under the "/signalr" path.
        map.RunSignalR(hubConfiguration);
    });

On the client side, I need to pass the Bearer token through the query string before starting the connection, as below.

self.connection = $.hubConnection(signalrEndpoint);
self.proxy = self.connection.createHubProxy(hubName);
self.proxy.on(notifyEventName, function (event, payload) {
    options.handler(event, payload);
});
// add the authentication token to the query string;
// we cannot use http headers since the web socket protocol doesn't support them
self.connection.qs = { Bearer: AuthService.getToken() };
// connect to the hub
self.connection.start();

Hope this helps,


All documents and related graphics, codes are provided "AS IS" without warranty of any kind.
Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

At TechEd North America, Microsoft announced another cache service in Azure: the Redis Cache Service. This is the 4th cache service Microsoft has introduced in Azure. The first one was Shared Cache, which is going to be retired in September as it has very critical performance issues. The second one is In-Role Cache, which is built on top of the AppFabric engine; it is high performance and dedicated to the role instances in the same cloud service. The third one is Managed Cache, which is based on AppFabric as well, but can be used widely by cloud service roles, virtual machines and web sites. And now we have another choice.


Create Redis Cache Service

Currently the Redis Cache can only be created from the new portal. Click "New" button and select "Redis Cache (Preview)" item.


Then I need to specify the endpoint and select a pricing tier. Currently there are 2 tiers available, basic and standard, and each of them has 250MB and 1GB size sub-tiers. The difference between basic and standard is that standard supports replication and comes with an SLA, while basic does not.


Next, select a resource group and the location where my Redis will be provisioned. As you can see currently there are only four regions I can select.


Finally, click the "Create" button and Microsoft Azure will start to provision a new Redis service for me. This took about 5 minutes, which I'm not sure is normal, since other provisioning operations in Microsoft Azure are faster.


Once the Redis is created, I can view its status from the "Browse", "Caches" menu item. We can see the Redis endpoint and port when clicking the "Properties" button. We need them when connecting to Redis from our application later.


Clicking the "Key" button will show the two security keys of our Redis. We also need one of them to connect from our application. So we can just copy the endpoint, port and primary key somewhere for later usage.


Now we have our Redis Cache ready, so let's create an application to use it.


Use Redis Cache from C# (ASP.NET)

There are many client libraries for Redis, which you can find here. I'd like to use the first of the C# clients, which is recommended by Redis: ServiceStack.Redis. I used this library before and wrote another blog post about it in April of 2012. Now let's use it again to build an ASP.NET web application in Microsoft Azure Web Site.

I created a new ASP.NET WebForm application in Visual Studio. ServiceStack.Redis is available in NuGet so I can install it easily from the NuGet dialog as below. Just search for "Redis" and it's the first result.

In the official document of Redis Cache, Microsoft utilizes another library to connect to Redis named "StackExchange.Redis", which can be found here. It's available in NuGet as well, but still in prerelease, so if you want to use it make sure to select "Include Prerelease" in the NuGet dialog.


Next I changed the default page layout as below, so I can specify a key and value and press the "Set" button to set it into Redis, or press the "Get" button to retrieve it from Redis.

<%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.Master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="ShaunAzureRedisDemo1._Default" %>

<asp:Content ID="BodyContent" ContentPlaceHolderID="MainContent" runat="server">

    <div class="jumbotron">
        <h1>ASP.NET</h1>
        <p class="lead">ASP.NET is a free web framework for building great Web sites and Web applications using HTML, CSS, and JavaScript.</p>
        <p><a href="" class="btn btn-primary btn-lg">Learn more &raquo;</a></p>
    </div>

    <div class="row">
        <div class="col-md-4">
            <h2>Set Value</h2>
            <p>
                Key: <asp:TextBox ID="txtSetKey" runat="server"></asp:TextBox>
            </p>
            <p>
                Value: <asp:TextBox ID="txtSetValue" runat="server"></asp:TextBox>
            </p>
            <p>
                <asp:Button ID="btnSet" runat="server" Text="Set" OnClick="btnSet_Click" />
            </p>
        </div>
        <div class="col-md-4">
            <h2>Get Value</h2>
            <p>
                Key: <asp:TextBox ID="txtGetKey" runat="server"></asp:TextBox>
            </p>
            <p>
                Value: <asp:TextBox ID="txtGetValue" runat="server" ReadOnly="true"></asp:TextBox>
            </p>
            <p>
                <asp:Button ID="btnGet" runat="server" Text="Get" OnClick="btnGet_Click" />
            </p>
        </div>
        <div class="col-md-4">
            <h2>Trace</h2>
            <p>
                <asp:TextBox ID="txtTrace" runat="server" ReadOnly="true" TextMode="MultiLine" Height="200" Width="100%"></asp:TextBox>
            </p>
        </div>
    </div>

</asp:Content>

In the backend code I initialize an instance of ServiceStack.Redis.RedisClient with the endpoint, port and password specified, which are copied from the portal.

Yes the password is the real one. Please be nice, thank you.

using ServiceStack.Redis;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace ShaunAzureRedisDemo1
{
    public partial class _Default : Page
    {
        private static RedisClient _client = new RedisClient(
            "",
            6379,
            "Kl7UaxeZiqA1QbclSsI02mDndOccxwD6AluliF1axmA=");

        protected void Page_Load(object sender, EventArgs e)
        {
        }
    }
}

Then I implemented the two button click event handlers to set and get an item from Redis. The code is very simple, as below.

using ServiceStack.Redis;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace ShaunAzureRedisDemo1
{
    public partial class _Default : Page
    {
        private static RedisClient _client = new RedisClient(
            "",
            6379,
            "Kl7UaxeZiqA1QbclSsI02mDndOccxwD6AluliF1axmA=");

        protected void Page_Load(object sender, EventArgs e)
        {
        }

        protected void btnSet_Click(object sender, EventArgs e)
        {
            var key = txtSetKey.Text;
            var value = txtSetValue.Text;

            try
            {
                _client.SetEntry(key, value);
            }
            catch (Exception ex)
            {
                txtTrace.Text = ex.ToString();
            }
        }

        protected void btnGet_Click(object sender, EventArgs e)
        {
            var key = txtGetKey.Text;

            try
            {
                var value = _client.GetEntry(key);
                txtGetValue.Text = value;
            }
            catch (Exception ex)
            {
                txtTrace.Text = ex.ToString();
            }
        }
    }
}

Next, I created a new Azure Web Site, making sure to select the same location as the Redis so that the network traffic between them is free and has the best performance. Then I deployed my web application to Azure so we can test the Redis. As you can see, I set an item and retrieved it later.


If I specify a key that does not exist, the library will just return NULL.
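If you want to see this null-on-miss semantic without a server, it can be mimicked with a tiny in-memory stand-in (a hypothetical FakeCache below; this is only an illustration, not Redis or any Redis client library):

```javascript
// In-memory stand-in mirroring the cache behavior shown above:
// set stores a string value, get returns null for a missing key.
function FakeCache() {
    this.store = {};
}
FakeCache.prototype.set = function (key, value) {
    this.store[key] = String(value);
};
FakeCache.prototype.get = function (key) {
    return this.store.hasOwnProperty(key) ? this.store[key] : null;
};

var cache = new FakeCache();
cache.set('Hello', 'World');
console.log(cache.get('Hello'));   // World
console.log(cache.get('Missing')); // null
```

Callers should be prepared for the null case, exactly as the web application above is.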



Connect from Other Services and Subscriptions

A Redis Cache can be connected from other Azure services and other subscriptions, through various types of clients. As in the screenshot below, I created another ASP.NET application and deployed it in a Cloud Service Web Role. It connected to the same Redis Cache by specifying the same endpoint, port and password, so I can retrieve the item here which was saved from the Web Site previously.


Also, I can use this Redis Cache from a Node.js application in another subscription. In this case I utilized another client library named "node_redis". In the code below I created a simple web service where users can set and get items in Redis.

In order to make it easy to test, I used the HTTP GET method for both setting and getting items in Redis. This is NOT a good solution. In a production environment you should use HTTP POST to set items into Redis.

(function () {
    'use strict';

    var express = require('express');
    var bodyParser = require('body-parser');
    var redis = require('redis');

    var app = express();
    app.use(bodyParser());

    var client = redis.createClient(6379, '');
    client.auth('Kl7UaxeZiqA1QbclSsI02mDndOccxwD6AluliF1axmA=');

    app.get('/get/:key', function (req, res) {
        var key = req.params.key;
        client.get(key, function (error, reply) {
            if (error) {
                res.send(500, error);
            }
            else {
                res.send(200, reply);
            }
        });
    });

    app.get('/set', function (req, res) {
        var key = req.param('key');
        var value = req.param('value');
        client.set(key, value, function (error, reply) {
            if (error) {
                res.send(500, error);
            }
            else {
                res.send(200, reply);
            }
        });
    });

    app.get('/ping', function (req, res) {
        res.send(200, 'PONG!');
    });

    var server = app.listen(process.env.port || 3000, function () {
        console.log('Listening on port %d', server.address().port);
    });
})();

Then I deployed it to an Azure Web Site belonging to another Azure subscription, and as you can see I can retrieve the item successfully.


I can also set a new item into Redis from this web service.


And retrieve it from the web application in the Cloud Service in another subscription.


And I can retrieve it from the Web Site as well.



Use Pub Sub Mode

Redis can be used as a distributed key-value cache, as I demonstrated above. It supports lists and hashes as well, so we can save entities in a list or hash and retrieve them together. Besides that, Redis supports a pub/sub mode that can be used as a message queue. Now let's try to change my application to use the pub/sub mode.

Firstly I need to modify the Web Site ASP.NET application so that the user can publish messages. I used the "About" page to do it. The layout is changed as below, so that I can specify the channel name and message and publish to Redis.

<%@ Page Title="About" Language="C#" MasterPageFile="~/Site.Master" AutoEventWireup="true" CodeBehind="About.aspx.cs" Inherits="ShaunAzureRedisDemo1.About" %>

<asp:Content ID="BodyContent" ContentPlaceHolderID="MainContent" runat="server">
    <h2><%: Title %>.</h2>
    <h3>Publish</h3>
    <p>
        Channel: <asp:TextBox ID="txtChannel" runat="server" Text="shaun_channel"></asp:TextBox>
    </p>
    <p>
        Message: <asp:TextBox ID="txtMessage" runat="server" Text=""></asp:TextBox>
    </p>
    <p>
        <asp:Button ID="btnPublish" runat="server" Text="Publish" OnClick="btnPublish_Click" />
    </p>
    <p>
        <asp:TextBox ID="txtTrace" runat="server" ReadOnly="true" TextMode="MultiLine" Height="200" Width="100%"></asp:TextBox>
    </p>
</asp:Content>

The backend code is as below.

using ServiceStack.Redis;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace ShaunAzureRedisDemo1
{
    public partial class About : Page
    {
        private static RedisClient _client = new RedisClient(
            "",
            6379,
            "Kl7UaxeZiqA1QbclSsI02mDndOccxwD6AluliF1axmA=");

        protected void Page_Load(object sender, EventArgs e)
        {
        }

        protected void btnPublish_Click(object sender, EventArgs e)
        {
            var channel = txtChannel.Text;
            var message = txtMessage.Text;
            try
            {
                var id = _client.PublishMessage(channel, message);
                txtTrace.Text = string.Format("Sent! ({0})", id);
            }
            catch (Exception ex)
            {
                txtTrace.Text = ex.ToString();
            }
        }
    }
}

Next I created another Node.js application and deployed it to Azure; it will subscribe to the channel and print the message content when anything arrives. Below is the new Node.js file "app.js".

(function () {
    'use strict';

    var redis = require('redis');

    var client = redis.createClient(6379, '');
    client.auth('Kl7UaxeZiqA1QbclSsI02mDndOccxwD6AluliF1axmA=');

    client.on('subscribe', function (channel, count) {
        console.log('subscribed to channel "' + channel + '"');
    });

    client.on('message', function (channel, message) {
        console.log('[' + channel + ']: ' + message);
    });

    client.on('ready', function () {
        client.incr('did something');

        client.subscribe("shaun_channel");
    });
})();

Once I deployed both of them, I can open the Kudu Console of the Node.js Web Site and start "app.js" from the console page.

The Kudu Console is an administration console available for each Azure Web Site application.

Its address is your website address with ".scm" inserted in front of "azurewebsites.net".

Then I published a message from the web site as below.


Back in the Kudu console, as you can see, the Node.js application received the message from my Redis.


And I can send more messages; the Node.js application keeps receiving them and printing them out.




In this post I demonstrated how to create the new Azure Redis Cache, how to use it from C# and Node.js, and how to use it as a message queue.

Redis is very powerful and popular in the open source community. People use it in various ways, such as a distributed cache, a NoSQL database and a message queue. Previously we could install a Redis server in our own virtual machine, or use a virtual machine image pre-configured with Redis installed. But in both cases we need to deal with configuration and maintenance ourselves. Now we can use Redis by creating a new Redis Cache Service and scale it up and down as we want, without any effort in installation, configuration, etc.


Sample code.


Hope this helps,


All documents and related graphics, codes are provided "AS IS" without warranty of any kind.
Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.