Shaun Xu

The Sheep-Pen of the Shaun



Shaun, the author of this blog, is a semi-geek, clumsy developer, passionate speaker and incapable architect with about 10 years' experience in .NET and JavaScript. He hopes to prove that software development is art rather than manufacturing. He's into cloud computing platforms and technologies (Windows Azure, Amazon and Aliyun), and right now Shaun is drawn to JavaScript (Angular.js and Node.js) and likes it.

Shaun works at Worktile Inc. as the chief architect, responsible for the overall design and development of Worktile, a web-based collaboration and task management tool, and Lesschat, a real-time communication aggregation tool.



If we are using SignalR, the connection lifecycle is handled very well by the library itself. For example, when we connect to the SignalR service from a browser through the SignalR JavaScript client, the connection will be established; and if we refresh the page, close the tab or browser, or navigate to another URL, the connection will be closed automatically. This behavior is well documented here.

In a browser, SignalR client code that maintains a SignalR connection runs in the JavaScript context of a web page. That's why the SignalR connection has to end when you navigate from one page to another, and that's why you have multiple connections with multiple connection IDs if you connect from multiple browser windows or tabs. When the user closes a browser window or tab, or navigates to a new page or refreshes the page, the SignalR connection immediately ends because SignalR client code handles that browser event for you and calls the "Stop" method.

But unfortunately this behavior doesn't work if we are using SignalR with AngularJS. AngularJS is a single page application (SPA) framework created by Google. It hijacks the browser's address change event and, based on the route table the user defined, launches the proper view and controller. Hence in AngularJS the address changes but the web page stays put; all changes to the page content are triggered by Ajax, so there are no page unload and load events. This is the reason why SignalR cannot handle disconnection correctly when working with AngularJS.

If we dig into the source code of the SignalR JavaScript client we will find the snippet below. It monitors the browser's "unload" and "beforeunload" events and sends the "stop" message to the server to terminate the connection. But in AngularJS the page change events are hijacked, so SignalR never receives them and never stops the connection.

// wire the stop handler for when the user leaves the page
_pageWindow.bind("unload", function () {
    connection.log("Window unloading, stopping the connection.");
    connection.stop(asyncAbort);
});

if (isFirefox11OrGreater) {
    // Firefox does not fire cross-domain XHRs in the normal unload handler on tab close.
    // #2400
    _pageWindow.bind("beforeunload", function () {
        // If connection.stop() runs in beforeunload and fails, it will also fail
        // in unload unless connection.stop() runs after a timeout.
        window.setTimeout(function () {
            connection.stop(asyncAbort);
        }, 0);
    });
}


Reproducing the Problem

In the code below I created a very simple example to demonstrate this issue. Here is the SignalR server-side code.

public class GreetingHub : Hub
{
    public override Task OnConnected()
    {
        Debug.WriteLine(string.Format("Connected: {0}", Context.ConnectionId));
        return base.OnConnected();
    }

    public override Task OnDisconnected()
    {
        Debug.WriteLine(string.Format("Disconnected: {0}", Context.ConnectionId));
        return base.OnDisconnected();
    }

    public void Hello(string user)
    {
        Clients.All.hello(string.Format("Hello, {0}!", user));
    }
}

Below is the configuration code which hosts SignalR hub in an ASP.NET WebAPI project with IIS Express.

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.Map("/signalr", map =>
            {
                map.UseCors(CorsOptions.AllowAll);
                map.RunSignalR(new HubConfiguration()
                    {
                        EnableJavaScriptProxies = false
                    });
            });
    }
}

Since we will host the AngularJS application in Node.js, in another process and port, the SignalR connection will be cross-domain. That is why I need to enable CORS above.

On the client side I have a Node.js file that hosts the AngularJS application as a web server. You can use any web server you like, such as IIS, Apache, etc.

Below is the "index.html" page, which contains a navigation bar so that I can change the page/state. As you can see, I added jQuery, AngularJS, the AngularJS UI Router, the SignalR JavaScript client library, as well as my AngularJS entry source file "app.js".

<html data-ng-app="demo">
    <head>
        <script type="text/javascript" src="jquery-2.1.0.js"></script>
        <script type="text/javascript" src="angular.js"></script>
        <script type="text/javascript" src="angular-ui-router.js"></script>
        <script type="text/javascript" src="jquery.signalR-2.0.3.js"></script>
        <script type="text/javascript" src="app.js"></script>
    </head>
    <body>
        <h1>SignalR Auto Disconnect with AngularJS by Shaun</h1>
        <div>
            <a href="javascript:void(0)" data-ui-sref="view1">View 1</a> | 
            <a href="javascript:void(0)" data-ui-sref="view2">View 2</a>
        </div>
        <div data-ui-view></div>
    </body>
</html>

Below is "app.js". My SignalR logic lives in the "View1" page, which connects to the server once its controller executes. A user can specify a user name and send it to the server, and all clients on this page will receive the server-side greeting message through SignalR.

'use strict';

var app = angular.module('demo', ['ui.router']);

app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) {
    $stateProvider.state('view1', {
        url: '/view1',
        templateUrl: 'view1.html',
        controller: 'View1Ctrl' });

    $stateProvider.state('view2', {
        url: '/view2',
        templateUrl: 'view2.html',
        controller: 'View2Ctrl' });

    $locationProvider.html5Mode(true);
}]);

app.value('$', $);
app.value('endpoint', 'http://localhost:60448');
app.value('hub', 'GreetingHub');

app.controller('View1Ctrl', function ($scope, $, endpoint, hub) {
    $scope.user = '';
    $scope.response = '';

    $scope.greeting = function () {
        proxy.invoke('Hello', $scope.user)
            .done(function () {})
            .fail(function (error) {
                console.log(error);
            });
    };

    var connection = $.hubConnection(endpoint);
    var proxy = connection.createHubProxy(hub);
    proxy.on('hello', function (response) {
        $scope.$apply(function () {
            $scope.response = response;
        });
    });
    connection.start()
        .done(function () {
            console.log('signalr connection established');
        })
        .fail(function (error) {
            console.log(error);
        });
});

app.controller('View2Ctrl', function ($scope, $) {
});

When we go to View 1, the server-side "OnConnected" method is invoked, as below.


And when any page sends a message to the server, all clients get the response.


If we close one of the clients, the server-side "OnDisconnected" method is invoked, which is correct.


But if we click the "View 2" link in the page, the "OnDisconnected" method will not be invoked even though the content and the browser address have changed. This can leave many SignalR connections dangling between client and server. Below is what happened after I clicked the "View 1" and "View 2" links four times; as you can see, there are 4 live connections.




Since the root cause of this issue is that AngularJS hijacks the page events SignalR needs in order to stop the connection, we can handle the AngularJS route or state change event and stop the SignalR connection manually. In the code below I moved the "connection" variable to global scope, added a handler for "$stateChangeStart", and invoked the "stop" method of "connection" if its state was not "disconnected".

var connection;

app.run(['$rootScope', function ($rootScope) {
    $rootScope.$on('$stateChangeStart', function () {
        if (connection && connection.state && connection.state !== 4 /* disconnected */) {
            console.log('signalr connection abort');
            connection.stop();
        }
    });
}]);

Now if we refresh the page and navigate to View 1, the connection will be opened. In this state, if we click the "View 2" link the content will change and the SignalR connection will be closed automatically.




In this post I demonstrated an issue when using SignalR with AngularJS: the connection cannot be closed automatically when we navigate to another page/state in AngularJS. The solution I described above is to move the SignalR connection into a global variable and close it manually when the AngularJS route/state changes. You can download the full sample code here.

Moving the SignalR connection into a global variable might not be the best solution; it just keeps the demo simple. In production code I suggest wrapping all SignalR operations into an AngularJS factory. Since an AngularJS factory is a singleton object, we can safely keep the connection variable in the factory's function scope.
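Stripped of Angular itself, that factory idea can be sketched as a plain function receiving what Angular would have injected. All names here ("createSignalrService" and so on) are my own illustration, not code from the sample project; in a real app this function body would be registered via `app.factory(...)`, which runs it exactly once, making the connection a singleton.

```javascript
// Sketch of wrapping the SignalR connection in a factory-like function.
// Parameters stand in for Angular's injected services/values.
function createSignalrService($rootScope, $, endpoint, hub) {
    // Created once per factory instantiation: effectively a singleton.
    var connection = $.hubConnection(endpoint);
    var proxy = connection.createHubProxy(hub);

    // Stop the connection whenever the route/state changes,
    // unless it is already disconnected (state 4).
    $rootScope.$on('$stateChangeStart', function () {
        if (connection.state !== 4 /* disconnected */) {
            connection.stop();
        }
    });

    // Expose only the operations controllers need.
    return {
        start: function () { return connection.start(); },
        invoke: function () { return proxy.invoke.apply(proxy, arguments); },
        on: function (event, handler) { proxy.on(event, handler); }
    };
}
```

Controllers would then depend on this service instead of creating their own connection, so navigating between views reuses (and cleanly stops) a single connection.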


Hope this helps,


All documents and related graphics, codes are provided "AS IS" without warranty of any kind.
Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


Currently I'm working on a single page application project built on AngularJS and ASP.NET WebAPI. When I needed to implement some features that require real-time communication and push notifications from the server side, I decided to use SignalR.

SignalR is a project currently developed by Microsoft to build web-based, real-time communication applications. You can find it here. With plenty of introductions and guides available, it's not difficult to use SignalR with ASP.NET WebAPI and AngularJS. I followed this and this, even though they are based on SignalR 1.

But when I tried to implement authentication for my SignalR I struggled for 2 days before finally working out a solution myself. It might not be the best one, but it solved all my problems.


Many articles say you don't need to worry about authentication for SignalR since it uses the web application's authentication. For example, if your web application uses forms authentication, SignalR will use the user principal your web application's authentication module resolved, and check whether the principal exists and is authenticated. But in my solution, my ASP.NET WebAPI, which hosts SignalR as well, uses OAuth Bearer authentication. So when the SignalR connection was established the context user principal was empty, and I needed to authenticate and pass the principal along myself.


First, I need to create a class derived from "AuthorizeAttribute" that takes responsibility for authentication when a SignalR connection is established and when any hub method is invoked.

public class QueryStringBearerAuthorizeAttribute : AuthorizeAttribute
{
    public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
    {
    }

    public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
    {
    }
}

The method "AuthorizeHubConnection" is invoked whenever a SignalR connection is established. Here I'm going to retrieve the Bearer token from the query string, then try to decrypt it and recover the logged-in user's claims.

public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
{
    var dataProtectionProvider = new DpapiDataProtectionProvider();
    var secureDataFormat = new TicketDataFormat(dataProtectionProvider.Create());
    // authenticate by using bearer token in query string
    var token = request.QueryString.Get(WebApiConfig.AuthenticationType);
    var ticket = secureDataFormat.Unprotect(token);
    if (ticket != null && ticket.Identity != null && ticket.Identity.IsAuthenticated)
    {
        // set the authenticated user principal into environment so that it can be used in the future
        request.Environment["server.User"] = new ClaimsPrincipal(ticket.Identity);
        return true;
    }
    else
    {
        return false;
    }
}

In the code above I created a "TicketDataFormat" instance, which must be the same as the one used to generate the Bearer token when the user logged in. Then I retrieve the token from the request query string and unprotect it. If I get a valid ticket whose identity is authenticated, the token is valid, and I pass the user principal into the request's environment property so it can be used shortly afterwards.

Since my website is built on AngularJS, the SignalR client is pure JavaScript, and the SignalR JavaScript client does not support setting custom HTTP headers, I have to pass the Bearer token through the request query string.

This is not a restriction of SignalR, but of WebSocket. For security reasons, WebSocket doesn't allow the client to set custom HTTP headers from the browser.
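So the token has to travel in the URL. A tiny helper like the one below makes the idea concrete; it is my own illustration (SignalR's `connection.qs` property, shown later, achieves the same thing for you).

```javascript
// Illustration only: appending a bearer token to a connection URL as a
// query-string parameter, since WebSocket offers no custom headers.
function appendToken(url, token) {
    // Use '?' for the first parameter, '&' if a query string already exists.
    var separator = url.indexOf('?') === -1 ? '?' : '&';
    return url + separator + 'Bearer=' + encodeURIComponent(token);
}
```

Remember the token is now visible in URLs (and potentially server logs), which is one more reason to keep such connections on HTTPS.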

Next, I need to implement the authorization logic in the method "AuthorizeHubMethodInvocation", which is invoked whenever a SignalR hub method is invoked.

public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
{
    var connectionId = hubIncomingInvokerContext.Hub.Context.ConnectionId;
    // check the authenticated user principal from environment
    var environment = hubIncomingInvokerContext.Hub.Context.Request.Environment;
    var principal = environment["server.User"] as ClaimsPrincipal;
    if (principal != null && principal.Identity != null && principal.Identity.IsAuthenticated)
    {
        // create a new HubCallerContext instance with the principal generated from token
        // and replace the current context so that in hubs we can retrieve current user identity
        hubIncomingInvokerContext.Hub.Context = new HubCallerContext(new ServerRequest(environment), connectionId);
        return true;
    }
    else
    {
        return false;
    }
}

Since I passed the user principal into the request environment in the previous method, I can simply check whether it exists and is valid. If so, all I need is to pass the principal into the context so the SignalR hub can use it. Since the "User" property of "hubIncomingInvokerContext" is read-only, I have to create a new "ServerRequest" instance with the principal assigned and set it to "hubIncomingInvokerContext.Hub.Context". After that, we can retrieve the principal in my hubs through "Context.User", as below.

public class DefaultHub : Hub
{
    public object Initialize(string host, string service, JObject payload)
    {
        var connectionId = Context.ConnectionId;
        ... ...
        var domain = string.Empty;
        var identity = Context.User.Identity as ClaimsIdentity;
        if (identity != null)
        {
            var claim = identity.FindFirst("Domain");
            if (claim != null)
            {
                domain = claim.Value;
            }
        }
        ... ...
    }
}

Finally I just need to add my "QueryStringBearerAuthorizeAttribute" into the SignalR pipeline.

app.Map("/signalr", map =>
    {
        // Setup the CORS middleware to run before SignalR.
        // By default this will allow all origins. You can 
        // configure the set of origins and/or http verbs by
        // providing a cors options with a different policy.
        map.UseCors(CorsOptions.AllowAll);
        var hubConfiguration = new HubConfiguration
        {
            // You can enable JSONP by uncommenting line below.
            // JSONP requests are insecure but some older browsers (and some
            // versions of IE) require JSONP to work cross domain
            // EnableJSONP = true
            EnableJavaScriptProxies = false
        };
        // Require authentication for all hubs
        var authorizer = new QueryStringBearerAuthorizeAttribute();
        var module = new AuthorizeModule(authorizer, authorizer);
        GlobalHost.HubPipeline.AddModule(module);
        // Run the SignalR pipeline. We're not using MapSignalR
        // since this branch already runs under the "/signalr" path.
        map.RunSignalR(hubConfiguration);
    });

On the client side, I pass the Bearer token through the query string before starting the connection, as below.

self.connection = $.hubConnection(signalrEndpoint);
self.proxy = self.connection.createHubProxy(hubName);
self.proxy.on(notifyEventName, function (event, payload) {
    options.handler(event, payload);
});
// add the authentication token to query string
// we cannot use http headers since the web socket protocol doesn't support them
self.connection.qs = { Bearer: AuthService.getToken() };
// connect to hub
self.connection.start();

Hope this helps,




At TechEd North America, Microsoft announced another cache service in Azure: the Redis Cache Service. This is the 4th cache service Microsoft has introduced in Azure. The first was Shared Cache, which is going to be retired in September as it has very serious performance issues. The second was In-Role Cache, built on top of the AppFabric engine, which offers high performance and is dedicated to the role instances in the same cloud service. The third was Managed Cache, also based on AppFabric, but usable broadly by cloud service roles, virtual machines and web sites. And now we have another choice.


Create Redis Cache Service

Currently the Redis Cache can only be created from the new portal. Click the "New" button and select the "Redis Cache (Preview)" item.


Then I need to specify the endpoint and select a pricing tier. Currently there are 2 tiers available, basic and standard, each with 250MB and 1GB size sub-tiers. The difference between them is that basic does not support replication or an SLA, while standard does.


Next, select a resource group and the location where my Redis will be provisioned. As you can see, currently there are only four regions to select from.


Finally, click the "Create" button and Microsoft Azure will start to provision the new Redis service for me. This took about 5 minutes, which I'm not sure is normal, since other provisioning operations in Microsoft Azure are much faster.


Once the Redis cache is created I can view its status from the "Browse", "Caches" menu item. We can see the Redis endpoint and port by clicking the "Properties" button; we will need them when connecting to Redis from our application later.


Clicking the "Keys" button shows the two security keys of our Redis cache. We also need one of them to connect from our application, so we can copy the endpoint, port and primary key somewhere for later use.


Now we have our Redis Cache ready, so let's create an application to use it.


Use Redis Cache from C# (ASP.NET)

There are many client libraries for Redis, which you can find here. I'd like to use the first of the C# clients recommended by Redis, ServiceStack.Redis. I used this library before and wrote another blog post about it in April 2012. Now let's use it again to build an ASP.NET web application in a Microsoft Azure Web Site.

I created a new ASP.NET WebForms project in Visual Studio. ServiceStack.Redis is available on NuGet, so I can install it easily from the NuGet dialog as below; just search for "Redis" and it's the first result.

In the official documentation of Redis Cache, Microsoft uses another library to connect to Redis named "StackExchange.Redis", which can be found here. It's available on NuGet as well, but still in prerelease, so if you want to use it make sure to select "Include Prerelease" in the NuGet dialog.


Next I changed the default page layout as below, so that I can specify a key and value and press the "Set" button to store them in Redis, or press the "Get" button to retrieve a value from Redis.

<%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.Master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="ShaunAzureRedisDemo1._Default" %>

<asp:Content ID="BodyContent" ContentPlaceHolderID="MainContent" runat="server">

    <div class="jumbotron">
        <h1>ASP.NET</h1>
        <p class="lead">ASP.NET is a free web framework for building great Web sites and Web applications using HTML, CSS, and JavaScript.</p>
        <p><a href="" class="btn btn-primary btn-lg">Learn more &raquo;</a></p>
    </div>

    <div class="row">
        <div class="col-md-4">
            <h2>Set Value</h2>
            <p>
                Key: <asp:TextBox ID="txtSetKey" runat="server"></asp:TextBox>
            </p>
            <p>
                Value: <asp:TextBox ID="txtSetValue" runat="server"></asp:TextBox>
            </p>
            <p>
                <asp:Button ID="btnSet" runat="server" Text="Set" OnClick="btnSet_Click" />
            </p>
        </div>
        <div class="col-md-4">
            <h2>Get Value</h2>
            <p>
                Key: <asp:TextBox ID="txtGetKey" runat="server"></asp:TextBox>
            </p>
            <p>
                Value: <asp:TextBox ID="txtGetValue" runat="server" ReadOnly="true"></asp:TextBox>
            </p>
            <p>
                <asp:Button ID="btnGet" runat="server" Text="Get" OnClick="btnGet_Click" />
            </p>
        </div>
        <div class="col-md-4">
            <h2>Trace</h2>
            <p>
                <asp:TextBox ID="txtTrace" runat="server" ReadOnly="true" TextMode="MultiLine" Height="200" Width="100%"></asp:TextBox>
            </p>
        </div>
    </div>

</asp:Content>

In the backend code I initialize an instance of ServiceStack.Redis.RedisClient with the endpoint, port and password specified, copied from the portal.

Yes the password is the real one. Please be nice, thank you.

using ServiceStack.Redis;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace ShaunAzureRedisDemo1
{
    public partial class _Default : Page
    {
        private static RedisClient _client = new RedisClient(
            "", 
            6379, 
            "Kl7UaxeZiqA1QbclSsI02mDndOccxwD6AluliF1axmA=");

        protected void Page_Load(object sender, EventArgs e)
        {
        }
    }
}

Then I implemented the two button click event handlers to set and get items from Redis. The code is very simple, as below.

using ServiceStack.Redis;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace ShaunAzureRedisDemo1
{
    public partial class _Default : Page
    {
        private static RedisClient _client = new RedisClient(
            "", 
            6379, 
            "Kl7UaxeZiqA1QbclSsI02mDndOccxwD6AluliF1axmA=");

        protected void Page_Load(object sender, EventArgs e)
        {
        }

        protected void btnSet_Click(object sender, EventArgs e)
        {
            var key = txtSetKey.Text;
            var value = txtSetValue.Text;

            try
            {
                _client.SetEntry(key, value);
            }
            catch (Exception ex)
            {
                txtTrace.Text = ex.ToString();
            }
        }

        protected void btnGet_Click(object sender, EventArgs e)
        {
            var key = txtGetKey.Text;

            try
            {
                var value = _client.GetEntry(key);
                txtGetValue.Text = value;
            }
            catch (Exception ex)
            {
                txtTrace.Text = ex.ToString();
            }
        }
    }
}

Next, I created a new Azure Web Site, making sure to select the same location as the Redis cache so that network traffic between them is free and performance is best. Then I deployed my web application to Azure and we can test the Redis cache. As you can see, I set an item and retrieved it later.


If I specify a key that does not exist, the library will simply return NULL.



Connect from Other Services and Subscriptions

A Redis Cache can be connected to from other Azure services and other subscriptions, through various types of clients. As the screenshot below shows, I created another ASP.NET application and deployed it in a Cloud Service Web Role. It connects to the same Redis Cache by specifying the same endpoint, port and password, so I can retrieve the item here that was saved from the Web Site previously.


I can also use this Redis Cache from a Node.js application in another subscription. In this case I used another client library named "node_redis". In the code below I created a simple web service through which a user can set and get items in Redis.

To make it easy to test, I used the HTTP GET method both to set and to get items in Redis. This is NOT a good approach; in a production environment you should use HTTP POST to set items in Redis.

(function () {
    'use strict';

    var express = require('express');
    var bodyParser = require('body-parser');
    var redis = require('redis');

    var app = express();
    app.use(bodyParser());

    var client = redis.createClient(6379, '');
    client.auth('Kl7UaxeZiqA1QbclSsI02mDndOccxwD6AluliF1axmA=');

    app.get('/get/:key', function (req, res) {
        var key = req.params.key;
        client.get(key, function (error, reply) {
            if (error) {
                res.send(500, error);
            }
            else {
                res.send(200, reply);
            }
        });
    });

    app.get('/set', function (req, res) {
        var key = req.param('key');
        var value = req.param('value');
        client.set(key, value, function (error, reply) {
            if (error) {
                res.send(500, error);
            }
            else {
                res.send(200, reply);
            }
        });
    });

    app.get('/ping', function (req, res) {
        res.send(200, 'PONG!');
    });

    var server = app.listen(process.env.port || 3000, function () {
        console.log('Listening on port %d', server.address().port);
    });
})();

Then I deployed it to an Azure Web Site belonging to another Azure subscription, and as you can see I can retrieve the item successfully.


I can also set a new item in Redis from this web service.


And retrieve it from the web application in the Cloud Service in another subscription.


And I can retrieve it from the Web Site as well.



Use Pub/Sub Mode

Redis can be used as a distributed key-value cache, as I demonstrated above. It also supports lists and hashes, so we can save entities in a list or hash and retrieve them together. Besides that, Redis supports a pub/sub mode, which lets it act as a message queue. Now let's change my application to use the pub/sub mode.
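Before moving on, the list and hash semantics mentioned above can be illustrated with a toy in-memory model. This is just my sketch of what commands like RPUSH/LRANGE and HSET/HGETALL do, not a Redis client.

```javascript
// A toy in-memory model of Redis list and hash commands, purely to
// illustrate their semantics. Not a real Redis client.
var store = {};

function rpush(key, value) {          // append to the end of a list
    (store[key] = store[key] || []).push(value);
    return store[key].length;         // Redis returns the new length
}

function lrange(key, start, stop) {   // read a slice; stop of -1 means "to the end"
    var list = store[key] || [];
    return list.slice(start, stop === -1 ? list.length : stop + 1);
}

function hset(key, field, value) {    // set one field of a hash
    (store[key] = store[key] || {})[field] = value;
}

function hgetall(key) {               // read the whole hash
    return store[key] || {};
}
```

With node_redis the equivalent calls would go through the client asynchronously, but the shape of the data is the same.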

First I need to modify the Web Site's ASP.NET application so that the user can publish messages. I used the "About" page for this. The layout is changed as below so that I can specify the channel name and message and publish them to Redis.

<%@ Page Title="About" Language="C#" MasterPageFile="~/Site.Master" AutoEventWireup="true" CodeBehind="About.aspx.cs" Inherits="ShaunAzureRedisDemo1.About" %>

<asp:Content ID="BodyContent" ContentPlaceHolderID="MainContent" runat="server">
    <h2><%: Title %></h2>
    <h3>Publish</h3>
    <p>
        Channel: <asp:TextBox ID="txtChannel" runat="server" Text="shaun_channel"></asp:TextBox>
    </p>
    <p>
        Message: <asp:TextBox ID="txtMessage" runat="server" Text=""></asp:TextBox>
    </p>
    <p>
        <asp:Button ID="btnPublish" runat="server" Text="Publish" OnClick="btnPublish_Click" />
    </p>
    <p>
        <asp:TextBox ID="txtTrace" runat="server" ReadOnly="true" TextMode="MultiLine" Height="200" Width="100%"></asp:TextBox>
    </p>
</asp:Content>

The backend code is shown below.

using ServiceStack.Redis;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace ShaunAzureRedisDemo1
{
    public partial class About : Page
    {
        private static RedisClient _client = new RedisClient(
            "", 
            6379, 
            "Kl7UaxeZiqA1QbclSsI02mDndOccxwD6AluliF1axmA=");

        protected void Page_Load(object sender, EventArgs e)
        {
        }

        protected void btnPublish_Click(object sender, EventArgs e)
        {
            var channel = txtChannel.Text;
            var message = txtMessage.Text;
            try
            {
                var id = _client.PublishMessage(channel, message);
                txtTrace.Text = string.Format("Sent! ({0})", id);
            }
            catch (Exception ex)
            {
                txtTrace.Text = ex.ToString();
            }
        }
    }
}

Next I created another Node.js application and deployed it to Azure; it subscribes to the channel and prints the message content whenever anything arrives. Below is the new Node.js file "app.js".

(function () {
    'use strict';

    var redis = require('redis');

    var client = redis.createClient(6379, '');
    client.auth('Kl7UaxeZiqA1QbclSsI02mDndOccxwD6AluliF1axmA=');

    client.on('subscribe', function (channel, count) {
        console.log('subscribed to channel "' + channel + '"');
    });

    client.on('message', function (channel, message) {
        console.log('[' + channel + ']: ' + message);
    });

    client.on('ready', function () {
        client.incr('did something');

        client.subscribe("shaun_channel");
    });
})();

Once both were deployed, I opened the Kudu Console of the Node.js Web Site and started "app.js" from the console page.

The Kudu Console is an administration console available for each Azure Web Site application.

If your website address is "https://&lt;sitename&gt;.azurewebsites.net", then the Kudu Console address would be "https://&lt;sitename&gt;.scm.azurewebsites.net".

Then I published a message from the web site as below.


Back in the Kudu console, you can see that the Node.js application received the message from my Redis cache.


And I can send more messages; the Node.js application keeps receiving and printing them.




In this post I demonstrated how to create a new Azure Redis Cache, how to use it from C# and Node.js, and how to use it as a message queue.

Redis is very powerful and popular in the open source community. People use it in various ways, such as a distributed cache, a NoSQL database and a message queue. Previously we could install a Redis server in our own virtual machine, or use a virtual machine image pre-configured with Redis installed. But in both cases we needed to deal with configuration and maintenance ourselves. Now we can use Redis by creating a new Redis Cache service and scaling it up and down as we want, without any effort in installation, configuration, etc.


Sample code.


Hope this helps,


All documents and related graphics, codes are provided "AS IS" without warranty of any kind.
Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


While playing with some cutting-edge technologies such as Node.js and AngularJS, I found that I needed to review some basic knowledge of programming languages and skills. So these days I have been reading some books about ANSI C, reviewing basic concepts such as data types, structures and pointers. And I spent days understanding function pointers, which were not very clear to me back in college, with the help of the skills I've gained from some modern languages.


Function Pointer is Simple

Many people say pointers are very hard to understand, and function pointers the hardest of all. But I don't think so. In fact, we use a function pointer every time we write a function. For example, in the code below I declared a function and invoked it.

    #include <stdio.h>

    int add (int, int);

    int main () 
    {
        int x = 4;
        int y = 7;
        int result;

        result = add(x, y);
        printf("%d + %d = %d\n", x, y, result);

        return 0;
    }

    int add (int x, int y)
    {
        return x + y;
    }

In fact, when I declared the function "add" at the top of my code, I introduced a name "add" with the type "a function that takes two integer arguments and returns an integer". And when I defined the function at the bottom, I in fact attached a function body to that name. The name "add" behaves like a pointer to that function.

When I invoked the "add" function, ANSI C found the function's address in memory and executed it, just as it does with any pointer variable. Hence we can also invoke this function as below.

    // result = add(x, y);
    result = (*add)(x, y);

Since "add" behaves like a variable, we can assign it to another variable and call through that. In the code below I defined a function pointer variable named "op", assigned "add" to it and executed it. Please note that when declaring a function pointer variable, the name goes between the function's return type and its argument list, wrapped in parentheses with a "*".

    int (*op)(int, int);
    op = add;
    result = op(x, y);

Also, we can pass a function pointer into another function as a parameter, or write a function that returns a function pointer. This makes our code look like functional programming. For example, the function below invokes the function pointer 10 times and returns the sum of the results.

    int run10times (int x, int y, int (*op)(int, int))
    {
        int i;
        int result = 0;
        for (i = 0; i < 10; i++)
        {
            result += op(x, y);
        }
        return result;
    }

And we can invoke it by passing "add" as the last parameter. In fact, we passed a function pointer named "add" to it.

    int main () 
    {
        int x = 4;
        int y = 7;
        int result;

        result = run10times(x, y, add);
        printf("10 times of \'%d + %d\' = %d\n", x, y, result);

        return 0;
    }

If we have another function named "multiple" that multiplies its two parameters and returns the result, we can pass that function instead, something like functional programming.

    #include <stdio.h>

    int add (int, int);
    int multiple (int, int);
    int run10times (int x, int y, int (*op)(int, int));

    int main () 
    {
        int x = 4;
        int y = 7;
        int result;

        result = run10times(x, y, multiple);
        printf("10 times of \'%d * %d\' = %d\n", x, y, result);

        return 0;
    }

    int add (int x, int y)
    {
        return x + y;
    }

    int multiple (int x, int y)
    {
        return x * y;
    }

    int run10times (int x, int y, int (*op)(int, int))
    {
        int i;
        int result = 0;
        for (i = 0; i < 10; i++)
        {
            result += op(x, y);
        }
        return result;
    }


"void *" is Magical

Function pointers in ANSI C are strongly typed. As we can see in the code above, a function pointer type is determined by the return type and the argument types. But there's a special type in ANSI C we can use to make function pointers more flexible, though trickier as well, which is "void *".

"void *" means a pointer that can point to any type, which means we can convert any pointer type to "void *". Let's create a new function named "intaddto" which adds the second parameter to the first one. In order to pass the result back we need to use integer pointers.

    #include <stdio.h>

    void intaddto (int *, int *);

    int main () 
    {
        int x = 4;
        int y = 7;

        intaddto(&x, &y);
        printf("%d\n", x);

        return 0;
    }

    void intaddto (int *x, int *y)
    {
        *x += *y;
    }

Since "void *" can be converted from any pointer type, in the code below I defined another function pointer named "op" whose argument types are "void *", assigned our "intaddto" to it, and invoked "op", which returned the same result.

    void (*op)(void *, void *);
    op = (void (*)(void *, void *))intaddto;
    op(&x, &y);

With this feature we can change our "run10times" function a bit so that it can handle any type, making it look like a generic function.

It just looks like a generic function, but in fact it's NOT. In C++ we can use the template feature to implement a truly generic function.

    void run10times (void *x, void *y, void (*op)(void *, void *))
    {
        int i;
        for (i = 0; i < 10; i++)
        {
            op(x, y);
        }
    }

Then we can pass the "intaddto" function pointer to it as below.

    void (*op)(void *, void *);
    op = (void (*)(void *, void *))intaddto;
    run10times(&x, &y, op);

Now let's add another function named "dbladdto" that adds the second double parameter to the first one.

    void dbladdto (double *x, double *y)
    {
        *x += *y;
    }

Then we can invoke the "run10times" function passing "dbladdto" as its last parameter, so that it handles the double data type.

    double dx = 3.14159;
    double dy = 2.71828;
    run10times(&dx, &dy, (void (*)(void *, void *))dbladdto);
    printf("%g\n", dx);



In this post I tried to describe function pointers in ANSI C. A function pointer is basically just another kind of pointer variable. It feels hard to understand only because in ANSI C we define a function pointer along with the definition of the function, and use it through the function name. If we go back to basics, invoking a function by name is the same as invoking it through the underlying pointer variable.

"void *" is a special pointer type that can be assigned from any type of pointer. With this feature we can build a function that accepts parameters of any type. (Strictly speaking, calling a function through a pointer of an incompatible type, as the casts above do, is undefined behavior per the C standard, though it works on common platforms.)

The full sample code is listed below.

    #include <stdio.h>

    void intaddto (int *, int *);
    void dbladdto (double *, double *);
    void run10times (void *, void *, void (*)(void *, void *));

    int main () 
    {
        int x = 4;
        int y = 7;

        void (*op)(void *, void *);
        op = (void (*)(void *, void *))intaddto;
        run10times(&x, &y, op);
        printf("%d\n", x);

        double dx = 3.14159;
        double dy = 2.71828;
        run10times(&dx, &dy, (void (*)(void *, void *))dbladdto);
        printf("%g\n", dx);

        return 0;
    }

    void intaddto (int *x, int *y)
    {
        *x += *y;
    }

    void dbladdto (double *x, double *y)
    {
        *x += *y;
    }

    void run10times (void *x, void *y, void (*op)(void *, void *))
    {
        int i;
        for (i = 0; i < 10; i++)
        {
            op(x, y);
        }
    }


Hope this helps,


All documents and related graphics, codes are provided "AS IS" without warranty of any kind.
Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.