Exporting Azure Table Into Excel and SQL Server using Enzo Cloud Backup

Do you have data stored in Azure Tables but can’t find a way to easily export it to Excel, or even SQL Server, for analysis? With Enzo Cloud Backup (a free tool, by the way) you can easily do that. Although Enzo Cloud Backup is first and foremost a backup tool, you can also use it to export Azure Table data. Backing up an Azure Table allows you to restore the data at a later time, while exporting an Azure Table allows you to inspect its content and run reports if desired. Let me show you how.

Install and Configure Enzo Cloud Backup

First and foremost, you will need to download Enzo Cloud Backup. When you start Enzo Cloud Backup, you will need to log in by providing a Storage Account and a Key; this is the Storage Account the tool uses to save its internal configuration settings, so it is usually best to create a separate Storage Account for Enzo Cloud Backup to use. Once the information is provided, click Connect. To learn more about how to create a Storage Account in Microsoft Azure, see this post.


Once logged in, let’s register the Storage Account we want to explore, in which the Azure Table you want to export resides. From the menu, click Connection –> Data Store –> Register –> Azure Storage. This will bring up a window that allows you to register a Storage Account you want to back up or explore. In this example, I am registering a storage account called bsctest1.


Once registered, the account will show up on the left menu of the main screen, under Azure Storage. I am showing you the bsctest1 Storage Account below. As you can see, there are a few Azure Tables showing up, and a few buttons on top: Backup, Open in Excel and Explore.


Let’s also register a database server to export to, so that we can quickly select it later (note: this step is optional). To do this, go to Connection –> Data Store –> Register –> Database Server, and enter the name of the database server along with credentials. Click OK to save the database connection information.


Exploring Azure Table Data

Let’s first explore the content of the enzodebug Azure Table. Click on the enzodebug table, then click on Explore. This will open up a browser showing up to 1,000 entities at a time. You can enter a list of column names separated by a comma, and a filter for the data, which will be applied once you click on Refresh. Clicking Next allows you to browse the next set of records. The browser allows you to quickly inspect your data, but you cannot export from here. To learn more about filters, visit this page on MSDN and search for the Filter Expressions section.
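As a point of reference, filters use the Azure Table OData filter expression syntax. A filter like the following (the partition value and date are illustrative) returns entities in a given partition created after a given point in time:

```text
PartitionKey eq 'Sales' and Timestamp gt datetime'2015-01-01T00:00:00Z'
```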


Exporting to Excel

Another available feature is exporting Azure Table data to Excel directly from Enzo Cloud Backup. Back on the main screen, click on the Open in Excel button. This will open a screen giving you a few options: you can provide a list of properties to return (all by default) and a filter for the data (none by default). You can also choose a Table Hint, which provides much faster data retrieval times when the PartitionKey contains a GUID or random number. Select Optimize Search for GUID Values when your PartitionKey has random values.


When you are ready, click on Export. A progress indicator will tell you how far you are in the load process in Excel.


Once the download of the data is complete and Excel has all the data, you will see your data in Excel as expected. Depending on how many records you are exporting, and the PartitionKey strategy selected, this operation may take some time.


Exporting to SQL Server or Azure SQL Database

You can also export this data to SQL Server, or Azure SQL Database just as easily. Because of data type constraints and other considerations, a few more options are available to you.

From the main window, still having the table selected, click on Backup. Note that unlike the two previous options, this approach allows you to export multiple tables at once. The Backup screen will show up as follows:


If you would like to use the GUID strategy as discussed previously, you can do so under the Default Strategy tab:


From the General tab, click on Add Backup Device. A panel will be added to the list of destinations. Choose SQL Server or SQL Database from the destination dropdown list, and provide the connection credentials. In this example, I am also creating a new database (TestExport) with a default size of 1GB (this is important: if your data needs more than 1GB of space, you need to change the database size accordingly or the export will fail). (Note: if you did not register a database server previously, you can type the server name and the user id/password by hand here.)
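If you prefer to pre-create the target database yourself with an explicit size cap, a minimal sketch for Azure SQL Database might look like this (run against the master database; the database name matches the example above):

```sql
-- Pre-create the target database with a 1GB size cap (Azure SQL Database syntax)
CREATE DATABASE TestExport (MAXSIZE = 1 GB);
```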


In the Data Import Options you can change a few settings that dictate the behavior of the export process when unexpected data conditions occur. I chose to create missing tables, and to add Error Columns if one or more columns cannot be loaded in SQL Server (this allows you to load the data even if some columns fail to load).


After you click the Start button, you can see the progress of the export by looking at the Tasks.


Once completed, we can view our data. Using SQL Server Management Studio (SSMS), let’s connect to the database where the export has occurred. Make sure to pre-select the database name on the Connection Properties tab if you are connecting to an Azure SQL Database.



Once logged in, simply select from the table:
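Assuming the exported table kept the name of the Azure Table (an assumption; check the table list in SSMS), this is a plain SELECT:

```sql
-- Inspect the first rows of the exported table (table name assumed)
SELECT TOP 100 * FROM dbo.enzodebug;
```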


Note that three fields were added automatically in my export (these fields are only created if there are data errors during the export, and if you selected the Add Error Columns option earlier): __ENZORAWXML__, __ENZOERRORID__, and __ENZOERRORDESC__. The error is telling me that one of the columns could not be exported because of a name clash: the TimeStamp column (date/time) already exists. That’s because in XML (the underlying storage format of Azure Tables), property names are case sensitive: in my case, each entity has both a Timestamp and a TimeStamp property (note the case difference). However, by default, SQL Server column names are not case sensitive, and as a result it is not possible to create both fields in a SQL Server table. While the extra TimeStamp column was not created, the __ENZORAWXML__ field contains the actual value of the field, in XML, so you can still inspect it there.
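To isolate just the rows that ran into conversion problems, you can filter on the error columns; a sketch, assuming the table name from this example:

```sql
-- List only the rows that failed to load cleanly, with the raw XML for inspection
SELECT __ENZOERRORID__, __ENZOERRORDESC__, __ENZORAWXML__
FROM dbo.enzodebug
WHERE __ENZOERRORID__ IS NOT NULL;
```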



As shown in this blog post, Enzo Cloud Backup is a tool that allows you to not only backup Azure Storage content, but also easily browse and export Azure Tables for further analysis in Excel and SQL Server / Azure SQL Database. 

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Blue Syntax Consulting (http://www.bluesyntaxconsulting.com). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

Sharing Tables Between Two or More SQL Servers using Couchbase

In this blog post I will show you how you can allow SQL Server to tap into Couchbase directly using regular SQL commands, and use Couchbase Buckets to share data between two or more SQL Server databases.

SQL Server is a powerful database server when it comes to relational data structures for cloud and enterprise applications, and Couchbase is becoming a leading platform for NoSQL distributed databases offering extreme scale and simple geo replication. Many application developers want the benefits of relational data storage and the scale and ease of use of a NoSQL data store. But without the right tools, it is very difficult to combine both technologies seamlessly without creating data silos. In addition, while many DBAs will manage SQL Server effectively, they are usually unable to assist with data integration projects that need to bring both databases together.

In some organizations, it is important to be able to share data between multiple SQL Server databases. This is usually done with built-in features such as Replication or Log Shipping, for example. However, these options can be cumbersome to implement and have specific limitations, such as constraints on the database versions used by the two SQL Server instances. More importantly, these features are not as effective when the two databases are geographically separated, when both databases are considered read/write, or when the two databases cannot connect to each other directly.

How It Works

In order to understand how Couchbase and SQL Server can work together, we must first explore the essential differences between the two systems. While both Couchbase and SQL Server are database servers, Couchbase is traditionally used as a document-centric platform storing data as JSON documents accessed using application code, while SQL Server stores data in a proprietary format that is accessed using the SQL language from applications or by analysts running reports.

One of the major differences between the two systems is related to the format of the data being stored. In Couchbase, data is stored as a JSON document structure, which can have properties, arrays and objects. This means that some JSON documents have depth in their data structure that is not easily represented using rows and columns. SQL Server on the other hand stores and services data in rows and columns, and as a result is not a friendly data store for JSON documents. As a result, in order to exchange data between the two database servers, a data transformation needs to take place, between JSON documents and rows and columns.
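For example, a single JSON document can nest an array of objects, something a single row cannot represent directly (the document below is purely illustrative):

```json
{
  "id": "hroggero@enzounified.com",
  "type": "user",
  "addresses": [
    { "city": "Miami", "country": "USA" },
    { "city": "Paris", "country": "France" }
  ]
}
```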

In addition to the storage format, Couchbase uses the REST protocol to service data; REST commands are sent as HTTP requests. This differs from SQL Server, which services data using a proprietary protocol called TDS (Tabular Data Stream). This difference in protocols means that SQL Server cannot natively connect directly to a Couchbase server.

Enzo™ Unified is designed to solve the data structure and protocol differences outlined above, providing SQL Server a bridge to communicate natively with Couchbase in real-time, using Linked Server (a feature of SQL Server allowing remote querying of other database systems). With Enzo Unified, SQL Server can connect to Couchbase, fetch data, and modify data inside Couchbase using the SQL language, as if Couchbase understood the TDS protocol and the SQL language of SQL Server.

From an installation standpoint, Enzo Unified is a Windows Service that understands the TDS protocol natively; as such it can be co-hosted with SQL Server on the same machine, or installed on a separate server altogether.


The ability to communicate to Couchbase using the SQL language directly from SQL Server creates unique opportunities to build real-time solutions with less complexity. Let’s take a look at an example on how useful this can be.


Let’s assume we have two SQL Server databases installed at two physical locations that cannot connect directly over the network: one in the USA and one in Europe; these two SQL Server databases hold website users that have previously registered, and we want SQL Server to keep its own copy of locally registered users: the US database will hold users from North America, and the European database will hold users from the old continent. This design provides good performance because each website has its dedicated database in the same geographic region.

However, when a new user registers, we want to make sure that the email address provided is unique across all users worldwide. Indeed, a new user should not be allowed to use an existing email address during the registration process. For example, if a user from Europe tries to register on the US website with an email address that already exists, the website should prevent the creation of a new account.

In this example we will be using Couchbase to store a copy of the web user records centrally. Because Couchbase is a distributed database, it can be installed both in the USA and in Europe; its buckets can easily be replicated across geographic regions automatically with simple configuration settings (replication in Couchbase is suggested here to improve the performance of requests, but it is not technically necessary). To simplify our diagram, we will show Couchbase as a single unit; however, Couchbase is best installed with multiple instances for performance and availability. An instance of Enzo Unified is installed in each geographic location; Enzo Unified will serve as a real-time bridge between SQL Server and Couchbase, and present the Couchbase bucket as a table to SQL Server. Finally, let’s assume that Couchbase is installed in the cloud so that it can be accessed from both locations (see my blog post on how to install Couchbase in Microsoft Azure).


Accessing Couchbase From SQL Server

In order to share user data across multiple SQL Server databases, SQL Server must first be able to read and write to Couchbase. This is done through Enzo Unified, either by connecting directly to it using ADO.NET or SQL Server Management Studio (SSMS), or through SQL Server by registering Enzo Unified as a Linked Server. The default bucket from Couchbase is used to store web users: the id will contain the user email, and additional properties are saved (first name, last name, and age). As you can see below, Couchbase contains a single document for user hroggero@enzounified.com.
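The document stored in the default bucket might look like the following (the property values are illustrative; the id is the document key, not a property of the document body):

```json
{
  "type": "user",
  "firstname": "Herve",
  "lastname": "Roggero",
  "age": 45
}
```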


Let’s fetch this record from SSMS.

First, we need to register the schema of this document so that Enzo Unified can represent it as a database table (in Enzo, tables are not used to store data; they provide a bridge to underlying services, so they are called virtual tables). The following command (‘createclass’) creates a new virtual table called ‘webusers’; notice the list of columns we define for this virtual table (id, type, firstname, lastname and age). We will be using the column ‘id’ as the email address of the user, and the type property will be hard-coded as ‘user’.

exec couchbase.createclass 'webusers', 'default', 'string id,string type,string firstname,string lastname,int age'

Once the virtual table has been created, it can be accessed as a regular SQL table (called ‘webusers’), and through stored procedures (insertwebusers, deletewebusers, updatewebusers and selectwebusers).

Next, we will register Enzo Unified as a Linked Server to allow triggers, stored procedures, and views to access Couchbase directly. Generally speaking, it is not required to create a Linked Server against Enzo Unified; a developer can connect directly to Enzo Unified using ADO.NET, for example. However, a Linked Server is necessary for SQL Server to access Couchbase natively (from triggers, for example). Once the Linked Server has been registered, we can access the Couchbase documents with a simple SQL syntax; for example, the following SQL command could be part of a trigger that checks whether the email address provided already exists:
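As a sketch, here is what the registration and the lookup might look like; the linked server name, port, and four-part naming below are illustrative assumptions, not taken from the Enzo Unified documentation:

```sql
-- Register Enzo Unified as a Linked Server (server name and port are assumptions)
EXEC sp_addlinkedserver @server = N'ENZO', @srvproduct = N'',
     @provider = N'SQLNCLI', @datasrc = N'localhost,9550';

-- Inside a trigger, reject the registration if the email already exists in Couchbase
IF EXISTS (SELECT 1 FROM [ENZO].[bsc].[couchbase].[webusers]
           WHERE id = 'hroggero@enzounified.com')
    RAISERROR('This email address is already registered.', 16, 1);
```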


This record is accessible from both locations (USA and Europe). The performance of this call is primarily tied to the network bandwidth; very little information is actually transmitted over the network. This command ran in less than one second in the US, with Couchbase installed in Windows Azure and SSMS running from my office.

When a new web user registers from either website, a trigger could simply call a stored procedure on Enzo Unified to insert a record in Couchbase. This stored procedure was created during the ‘createclass’ call made previously. An INSERT command on the ‘webusers’ table is also possible.
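A hedged sketch of such an insert, using the stored procedure generated by the ‘createclass’ call (the parameter order is assumed to mirror the column list we defined):

```sql
-- Insert a new web user into Couchbase through Enzo Unified (parameter order assumed)
exec couchbase.insertwebusers 'jdoe@example.com', 'user', 'John', 'Doe', 30
```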


At this point, we have two users registered. Note that SQL Server can read and write to Couchbase, so it is possible to delete and update records just as easily.

Retrieving Data Using Couchbase Views

An interesting feature of Couchbase is the ability to create Views, which are a projection of a subset of documents that can be accessed for reading. Enzo Unified supports Couchbase views as well.

Let’s create a new view in Couchbase, which returns all email addresses registered; the Couchbase view itself is a JavaScript function that checks for the type of document (a ‘user’ document), and returns the ‘id’ of the document (which is the email address). In our current test, we only have one kind of document (a user); however it is good practice to include a field that describes the kind of document you are dealing with.
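The map function for such a view might look like the following sketch (Couchbase supplies the emit() function at index time; in the Couchbase console you would paste just the anonymous function):

```javascript
// Couchbase view map function: emit the document id (the email address)
// for every document whose 'type' property is 'user'.
var mapUsers = function (doc, meta) {
  if (doc.type == "user") {
    emit(meta.id, null); // emit() is provided by Couchbase at index time
  }
};
```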


Then in SSMS, we simply declare the view as being available in Enzo Unified. After connecting directly to Enzo Unified, you declare the view as such:

exec couchbase.defineview 'emails', 'default', 'emails', 'emails'

The syntax of the ‘defineview’ command in Enzo is beyond the scope of this blog; this command essentially creates a new virtual table called ‘emails’ in Enzo, which we can now call directly. This virtual table is read-only (the associated insert, update and delete commands are not created). You may notice that we are not using a Linked Server in the following SQL call; this is a direct, native call to Enzo Unified from SSMS, so we do not need to use the registered Linked Server because the call is not made by SQL Server.
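Querying the new virtual table is then a plain SELECT against Enzo Unified (the ‘couchbase’ schema prefix mirrors the commands shown earlier):

```sql
-- Read the email addresses exposed by the Couchbase view through Enzo Unified
SELECT * FROM couchbase.emails;
```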



This blog introduces you to the fundamental differences between SQL Server and Couchbase, and how Enzo Unified for Couchbase can enhance applications by leveraging the two database platforms. Enzo Unified enables SQL Server to tap into the power of in-memory tables provided by Couchbase, and Couchbase buckets can be updated directly through database events to keep data synchronized. In this example, I showed you how data can be shared by two geographically separated SQL Server databases through native SQL commands to enable data validation without complex replication or synchronization jobs.


About Enzo Unified

Enzo Unified is a data platform that helps companies reduce development and data integration project timelines, improve data quality, and increase operational efficiency by solving some of the most complex real-time data consumption challenges. For more information, contact info@enzounified.com, or visit http://www.enzounified.com.

Configuring and Using the Redis Azure Cache

In this blog post I will show you how to configure and use a cloud caching service called the Azure Redis Cache. This caching service makes it easy to store information in the cloud to save temporary data, or share data with other services using nothing more than a shared Internet address.

Creating the Azure Redis Cache Service

The first step is to create the Redis cache service itself. For this, you will need an Azure account, and login to the Azure management portal (https://portal.azure.com). To create a new Redis cache, simply browse from the available list of services (click on Browse), and find the Redis Cache service. Provide the name for your cache, which will be used as part of the Internet address of your service (the DNS name). In the example below, I am creating the cache as “bsctest”, and as a result the DNS address for my service will be “bsctest.redis.cache.windows.net”. Also note that you will need to select a pricing tier; I picked the C0 Basic pricing tier. WARNING: Make sure to click on the “Select” button at the bottom of the screen for the pricing tier change to take effect. Then click the “Create” button to create the service; this may take a few minutes.


You will soon need the Access Key for the sample code below; to find your access key, go back to the Redis Cache (you can either go back to Browse All, or use the shortcut that was created for you on the main dashboard). From there, click on the newly created cache, and in the Settings window, click on Access Keys. You will only need one of those keys; click on the icon next to the access key and save it somewhere handy.


Using the Azure Redis Cache Service

Once the cache service has been created, using it is a breeze. All we have to do is create a .NET application, reference the Redis client libraries, and interact with the cache data.

First, let’s create a sample Console application using Visual Studio 2013 using C#.


Then right-click on the project name itself (from the Solution Explorer window), and click Manage NuGet Packages. From there, click on the Online link on the left, and type Redis in the search box. We will use the StackExchange.Redis package; select it and click on Install. You can now close the NuGet window and return to your project.

Note that you may get an exception if the cache hasn’t been created just yet, so make sure the cache service is running before proceeding.

Let’s connect to the cache, and store a simple string for our test. First, you need to connect to the Redis Cache using the Connect method on the ConnectionMultiplexer object; then, use the StringSet and StringGet methods to write and read data.

Here is the complete code for this simple test:

using System;
using StackExchange.Redis;

class Program
{
    static void Main(string[] args)
    {
        string name = "bsctest";        // the name of your Redis cache
        string key = "YOUR_ACCESS_KEY"; // the access key you saved earlier

        var connection = ConnectionMultiplexer.Connect(
            string.Format("{0}.redis.cache.windows.net,connectTimeout=5000,abortConnect=false,ssl=true,password={1}", name, key));

        var db = connection.GetDatabase();
        Console.WriteLine("Setting a cache value");
        db.StringSet("firstname", "James");

        Console.WriteLine("Reading from the cache now...");
        var val = db.StringGet("firstname");

        Console.WriteLine("  value: " + val);
    }
}


When you run the above code, you should see something like this.


Et voila! The Redis cache offers many features; the example above is only a test, so I invite you to investigate the caching service more in depth before jumping into production with it.


How To Install Couchbase In Microsoft Azure For Test Purposes

In this blog, I will show you how to install and configure an instance of Couchbase (http://www.couchbase.com/), a powerful NoSQL database, on a Microsoft Azure A0/A1 server configuration, with the smallest installation of Couchbase possible, for limited development and test purposes. Note that such an installation is not recommended for production loads; it is only intended to provide the cheapest instance possible in Azure that can be used to explore the features of Couchbase.

Step 1: Create a Virtual Machine in Microsoft Azure


Let’s create a new Virtual Machine (Windows Server 2012 R2 Datacenter) in Microsoft Azure. First, you need to login to Microsoft Azure (http://portal.azure.com). Click on the Virtual Machine tab on the left, and make sure to select a basic A2 size for the Virtual Machine as this will speed up installation of Couchbase. Selecting an A0/A1 now will likely not work since one of the steps is highly CPU intensive. We will change the size of the VM back to an A0/A1 after the installation of Couchbase is finalized.


Once the Virtual machine is running and ready to be configured with Couchbase, click on Connect inside the menu option located to the right of the Virtual Machine in Azure. This will open up a Remote Desktop Connection screen which will allow you to log into your machine with the administrative credentials you just provided.


Once logged in, turn off Enhanced Security Configuration (ESC) for administrators so that you can more easily browse to the Couchbase website and download the installation package. To turn off ESC you need to start Server Manager, and click on the Local Server tab on the left; find Enhanced Security Configuration in the list of properties, and click on the On link, which will prompt you for additional choices. Select Off for Administrators, and click OK.



Step 2: Install Couchbase

Open your browser and navigate to http://www.couchbase.com to download the Couchbase installation program. Click on the Download button, and choose Couchbase Server (which expands the page with further download options). Make sure to click on the Community Edition button since we are testing Couchbase, then click on the Download link. A download screen will open; click Save As, and choose your D: drive to save the installation package.



Once downloaded, open up Windows Explorer, and select your D drive. You should see the installation package; on my machine, the file name is couchbase-server-community_4.0.0-rc0-windows_amd64.exe, but yours might be different depending on the release you downloaded. Right-click on the Couchbase installation package and select Properties, and click on the Unblock button (so that you can run this EXE in the next step) and click OK.


Double-click on the Couchbase installation program you just unblocked. When the installation Wizard starts, click Next, accept all default configuration settings and click Install. The installation process is rather quick, and within a few minutes Couchbase is configured.

A warning about the local windows firewall will come up; click on Yes to disable the Windows Firewall.


Once the installation is complete, you will see this screen showing a successful installation, then click on Finish.


Once you click on Finish, the default browser will open, pointing to the Couchbase Console. The console is used to manage your Couchbase configuration; a shortcut is also added on the Windows Desktop for convenience. Once the browser has opened the Couchbase Console, click on the Setup button.

Step 3: Configure Couchbase



Because we are creating a simple configuration for Couchbase, you can leave most of the default settings as provided. Because we want to run Couchbase on an A0 machine, with only 768MB of total RAM, change the Data RAM Quota to 256MB (the minimum allowed) if you plan to downsize to an A0 later, or 300MB (or higher) if you plan to use an A1 size or higher, and set the Index Quota to 256MB. The new Couchbase Cluster will be configured on the local machine.

NOTE: Do not change the path of the local directories to point to the D drive; the D drive is considered volatile and only used for temporary data, which could render Couchbase unusable upon restart of the machine.

Clicking Next will create the configuration of the Couchbase server.


Next, choose zero or one of the sample buckets if you plan to downsize to an A0 later, or any number of sample buckets if you plan to use an A1 or higher, and click Next. A bucket is equivalent to a database and will take no less than 100MB in memory. If you plan to use an A0 Virtual Machine, you should limit your selection to a single bucket because your VM will be low on memory.

Once you are ready to proceed, click Next.


In this screen we configure the default bucket. As mentioned before, the minimum size for a bucket is 100MB, so let’s allocate 100MB per node for the RAM Quota. Leave the other options to their default values.


Uncheck Software Update Notifications for this test installation, and click Next.


In the last step of the configuration wizard, you specify a user name and password for the Server itself. Leave Administrator as the user name, and choose a password. We will need this information later. Click Next to complete the configuration of Couchbase.


Step 4: Explore Couchbase Locally


Once completed, you will see the administration console of Couchbase, including the RAM allocation and disk usage overview. The sample buckets begin to load immediately but could take a few minutes to finalize.


Click on the blue arrow next to the Data Buckets tab to inspect your buckets’ summary data. This gives you a quick overview of the RAM allocation for this bucket, and the storage size on disk. Clicking on the bucket name itself provides detailed utilization metrics.


Clicking on the Documents button allows you to inspect your data. Note that you may need to wait a little while, and possibly refresh your browser before you can inspect the data. The browser for the data is simple, and gives you a short list of JSON documents at a time; you can search for specific documents and browse the list page by page.


Step 5: Finalize Configuration In Azure


We installed Couchbase on an A2 Virtual Machine so that the installation process could complete. Once the data has completely loaded (this will take a few minutes), you can resize the Virtual Machine to an A1, or an A0 if you so choose; note that an A0 is barely able to run Couchbase, so you should select this option only if you want to minimize your monthly bill and you are willing to wait a while for your requests to complete. To do this, go back to the Azure portal, find your Virtual Machine, and change the size accordingly by clicking on Basic A0 and then clicking on the Select button at the bottom. Resizing the Virtual Machine will take a few minutes.


While you are on the configuration screen of your virtual machine, note down your VM’s IP Address; we will need this later to test the cluster.

Last but not least, we need to open up certain ports in Azure for the Virtual Machine to be accessible to external clients; this is not a required step if your test client will be inside another Virtual Machine in Azure, but since my test machine is at home, I need to open up those ports.

Normally we need to open up two sets of ports: those on the operating system (the Windows Firewall), and those in Azure. Since we turned off the Windows Firewall earlier, we only need to open up the ports in Azure. The list of ports to open can be found in this article: http://docs.couchbase.com/couchbase-manual-2.0/#network-ports. So we will open TCP ports 8091, 8092, 11210, and 11211. To do so, from your Virtual Machine dashboard in the Azure Management Console, click on the Endpoints setting, and add the above ports.

NOTE: Port 8091 is used for the web configuration console; you should only need to open up this port if you want to configure Couchbase remotely.


Once all the TCP rules are added you should see a configuration similar to this:


Now, let’s try accessing Couchbase from a remote browser. Since I opened port 8091 for web administration, I will be able to manage Couchbase remotely from my workstation. The URL for the Couchbase admin console is your IP Address followed by the port number and the Index page, as such: HTTP://YOUR_SERVER_IP:8091/Index. This will prompt you for a login screen, where you enter Administrator for the user name (it’s case sensitive) and the password you specified earlier in the configuration wizard.


You should also note that we did not secure the server with an SSL certificate; as a result all traffic is unencrypted and could be inspected on your network in clear text, including the password.

Once logged in, as expected, I can see my Couchbase web management interface and manage my Couchbase installation from my workstation.


And that’s it! You have successfully installed and configured Couchbase on a single node, on a small Virtual Machine in Microsoft Azure for testing purposes.


Developers, Get Ready for Windows 10

The wave of Windows 10 development has already started. With many cool features, and the Universal Windows Platform (with APIs that are guaranteed to be present on all devices running Windows 10), you can start your engines and kick the tires! Here are a few resources you may find useful:

Visual Studio 2015 Download:   https://www.visualstudio.com/?Wt.mc_id=DX_MVP4030577

Here is an overview of what’s new with Windows 10 for developers:  https://dev.windows.com/en-us/getstarted/whats-new-windows-10/?Wt.mc_ic=dx_MVP4030577

Here is how to get started with Windows 10 app development:  https://dev.windows.com/en-us/getstarted/?Wt.mc_ic=dx_MVP4030577

Last but not least, here are a few videos on the Microsoft Virtual Academy you should check out:  https://www.microsoftvirtualacademy.com/en-US/training-courses/getting-started-with-windows-10-for-it-professionals-10629/?Wt.mc_ic=dx_MVP4030577

Let’s get started!

Sample Pricing Comparison (2): Amazon AWS and Microsoft Azure

A couple of years ago, I wrote a blog about Amazon and Microsoft Azure pricing (here) because I was curious about this topic. However, pricing being such a volatile and complex topic, this blog is a refresher on current 2015 pricing, using the same assumptions, in an attempt to measure the evolution of the pricing models in both Amazon AWS and Microsoft Azure.

Scenario and Assumptions

In this blog, I will use similar requirements as stated two years ago, with two exceptions: the outbound Internet traffic (from 1TB to 1/2TB of egress data tx), and the database level (from Enterprise to Standard edition). The change of the Internet traffic is due to the fact that the pricing calculator for Microsoft Azure limits the outbound traffic to 1/2TB. And the change of the database level is to keep the monthly expenditure from two years ago roughly similar in Amazon; note that in 2015, the Amazon pricing for SQL Server Enterprise Edition appears to be significantly greater than it was in 2013 according to my analysis. If we were to keep the Enterprise Edition of SQL Server as a requirement, Amazon would become significantly more expensive than Microsoft Azure (by multiple factors).

The updated requirements are:

  • SQL Server database, Standard Edition, 10GB of storage, 1CPU, 1 million requests, 10GB per month of data tx
  • 10 websites running ASP.NET, 1CPU, 1/2 TB of data tx out to the Internet per month
  • 2 Middle-tier Servers running .NET, 2CPUs
  • Reporting Services - 10 reports run daily, 1GB of data out to Internet per month

The general guidelines for pricing comparison remain the same as well:

  • Use License-free model as much as possible
  • Use equivalent service configuration as much as possible
  • Ignore temporary/promotional offers
  • Using North America pricing
  • SQL Server database can run in Microsoft Azure SQL Database (SQL Database in short) for comparison purposes

The above assumptions and guidelines ensure that the comparison is as close as possible between Amazon AWS and Microsoft Azure.

Amazon AWS Pricing

Amazon’s pricing has dropped considerably (48%) from two years ago, although the level of the database service has been reduced to the Standard Edition. Downgrading the database to the Standard Edition, however, is not too significant for this analysis, since most of the features of the Standard Edition are similar to the Azure SQL Database offering; nevertheless, it is an important change in requirements, and the downgrade could impact some customers. I am also keeping the EC2 offering for the web hosting component and the middle-tier servers. The operating cost of the selected configuration is $954 per month, down from $1,832 in 2013.
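As a quick sanity check on the numbers above, the drop from $1,832 to $954 per month does work out to roughly 48%; the small helper below (an illustration, not part of either vendor's calculator) shows the arithmetic:

```python
def percent_reduction(old_monthly, new_monthly):
    """Percentage drop from an old monthly cost to a new one."""
    return (old_monthly - new_monthly) / old_monthly * 100

aws_2013, aws_2015 = 1832, 954
print(f"AWS reduction: {percent_reduction(aws_2013, aws_2015):.0f}%")  # prints "AWS reduction: 48%"
```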



Microsoft Azure Pricing

Generally speaking, the total hosting cost for this solution on Microsoft Azure has also been reduced significantly (by 38%), which is great news. Competition between the vendors is driving costs down and helping refine their offerings. A significant driver for the Microsoft pricing was, and remains, Azure SQL Database. The database offering has changed significantly since 2013 because there are now performance guarantees. Microsoft expresses its performance levels in terms of DTUs (Database Throughput Units), an overall performance level based on a mix of I/O, memory, and CPU consumption. As a result, it is not possible to establish a clear link between Amazon’s expected performance levels and Microsoft’s, since the database performance requirements of an application can vary greatly, and Microsoft’s performance levels blend several resources. I therefore selected a P1 level for Azure SQL Database, which should be close to the equivalent Amazon offering; it provides up to 500GB of database storage (there is no way to request a P1 database with only 10GB of storage). Database availability is also important, and thanks to its automatic failover capabilities, the Azure SQL Database service appears to offer greater built-in recoverability than Amazon RDS’s standard offering. Note that the VMs used in Azure are slightly underpowered compared to Amazon’s since they offer less RAM, but they are the closest configurations I could find at this time.




As we can see from the above results, the offerings of both vendors, while similar, are continuing to diverge. In some cases, Amazon seems to offer more granular offerings by allowing customers to choose specific IOPS levels on SQL Server for example, while Microsoft focuses on more built-in capabilities, such as automated failover of the database server. In some cases, the Amazon offering provides better configuration (such as more RAM on the web servers) and in other cases Microsoft provides superior service (such as no additional cost for load balancing and enterprise features for the database layer). This means that the feature, performance, and availability surface offered by both cloud vendors for a comparable offering can vary greatly. However, given the differences outlined, and given the requirements stated above, both vendors provide a roughly similar level of service at a similar price point.

Compared to the 2013 pricing levels, it seems that both vendors were able to cut costs and reduce their price; in this specific configuration, Amazon reduced its pricing by 48% and Microsoft by 38%.

As a final note, it is important to realize that this analysis is purely theoretical and is only meant to provide general guidance on Amazon and Microsoft pricing; it is not meant to make any general pricing statement about either vendor, and it is limited to the requirements set previously. It should also be noted that if an application requires a higher database or server service level, the monthly cost could differ greatly from what is outlined in this blog.


Monitoring Alerts For Azure Virtual Machine

When hosting a service in the cloud, you may need to monitor and send alerts when specific conditions take place, such as when your service is not running. In this blog post, I will show you how to create a simple alert in Microsoft Azure that sends an email when no activity is taking place on a virtual machine. As a pre-requisite, you will need a Microsoft Azure account, and a Virtual Machine up and running.

To create an alert, select Management Services from the left bar (using the current portal). 


This will bring up a list of alerts currently defined, if any. In the sample screenshot below, you will see an alert defined for a Virtual Machine; the alert itself has not been triggered (the status is Not Activated).


Let’s create a new Alert that will activate when 0 bytes have been sent out by the Virtual Machine within a 15-minute period. In other words, we want to be alerted when no outbound network traffic has been detected for 15 minutes, which likely represents a severe condition on the machine itself (either the machine is stopped, or no services are running since no traffic is detected).

Click on the image icon to add an alert, select a name for the alert, a subscription, and Virtual Machine as the Source Type. Make sure you select the correct virtual machine in the Service Name list, then click the arrow to move to the second page.


Select Network Out as the metric to monitor, a ‘less than or equal’ condition, and 0 for Bytes. Select 15 minutes for the Evaluation Window. At this point, you are almost done; you just need to indicate which actions you want to take when the condition has been met. For our purposes, simply check ‘Send email to the service administrator and co-administrators’.  Ensure the ‘Enable rule’ is checked, and click OK to save this alert.
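The rule configured above boils down to a tiny predicate: fire when the total Network Out over the evaluation window is 0 bytes. The Python sketch below only illustrates the alert logic (the function name and sample data are mine; this is not how Azure evaluates metrics internally):

```python
def alert_should_fire(bytes_out_samples):
    """Alert rule from above: Network Out less than or equal to 0 bytes
    over the whole 15-minute evaluation window (no outbound traffic)."""
    return sum(bytes_out_samples) <= 0

# One sample per minute over the 15-minute evaluation window.
quiet = [0] * 15                      # machine silent: alert activates
busy = [0, 512, 0, 2048] + [0] * 11   # some traffic: alert stays Not Activated
```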


If your service experiences an issue, you will see that the alert has been ‘activated’, with a warning sign. The alert called ‘Custom Monitoring’ below shows an active alert.


You will also receive an email from Microsoft Azure when:

  • The condition has been met (and the alert is active)
  • The condition is no longer met (and the alert has been resolved)

You can monitor other services in Microsoft Azure the same way and become aware when serious issues are affecting your services. Although this alerting mechanism does not help you understand the root cause of the problem, you can use it as a mechanism to proactively resolve service issues.


What is the Service Bus for Windows Server?

As programmers, we can build bridges between various systems in two ways: point to point, or loosely coupled. Within each design paradigm, additional options are available to us, such as which tools and platforms we can use, and how we actually perform the integration, such as whether or not we need to split large workloads into smaller chunks. Generally speaking, the methods we choose to use to perform the work (also known as integration patterns) can be implemented using most technologies; however some technologies make it easier to perform certain tasks than others. To this end, let’s take a look at the Service Bus for Windows Server (that I will call simply Service Bus going forward), a platform allowing loosely coupled integrations, and some of its major features.

About the Service Bus

The Service Bus is a messaging technology that you can install and configure on your own virtual machines. You can think of the Service Bus as a messaging technology that sits between MSMQ and BizTalk. It is similar to MSMQ in the sense that it deals with messages, and it is similar to BizTalk in the sense that it is a publication/subscription framework for distributing workloads and processes. Of course, BizTalk is a full-blown integration platform that allows you to design integration workflows, which is not the case with the Service Bus by itself. However, there was a technology gap between MSMQ and BizTalk, and the Service Bus fills it. Its first incarnation was in Microsoft Azure, as the Azure Service Bus. The on-premises version is called the Service Bus for Windows Server, and it can be configured using PowerShell scripts or through a user interface very similar to the Microsoft Azure portal by installing the Windows Azure Pack. To install and manage your Service Bus with the Windows Azure Pack, I recommend you first install the Azure Pack, create a Tenant and a Subscription, and then install the Service Bus using the Configuration Wizard (where you will be able to specify the tenant account you created earlier). This will make it easier to configure your Service Bus later.

In the world of the Service Bus, a Topic is a receiving channel for messages, and Subscriptions are outgoing channels that programs read messages from. When you configure a Topic, you usually configure at least one Subscription.  A program sends messages into a Topic, which are moved into the Subscriptions defined, and other programs read the messages from the Subscription.  In other words, a Subscription is also a queue.
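The Topic/Subscription fan-out described above can be sketched in a few lines. This is a toy in-memory model with invented class and subscription names, not the Service Bus API:

```python
from collections import deque

class Topic:
    """Toy model: a Topic fans each message out to every Subscription,
    and each Subscription is itself a queue that programs read from."""
    def __init__(self):
        self.subscriptions = {}

    def add_subscription(self, name):
        self.subscriptions[name] = deque()
        return self.subscriptions[name]

    def send(self, message):
        # A single send places a copy of the message in every subscription.
        for queue in self.subscriptions.values():
            queue.append(message)

topic = Topic()
app1 = topic.add_subscription("app1")
app2 = topic.add_subscription("app2")
topic.send("enrollment-completed")  # both app1 and app2 receive a copy
```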

Major Features

The Service Bus comes with very interesting capabilities, some of which are detailed below. At a high level, you can use the Service Bus in two ways: either as a messaging platform (much like MSMQ), or as a Pub/Sub implementation in which one queue can forward messages to other queues (the key foundation of a service bus). The first approach is called Queues, and the second Topics. You can think of a Topic as a more flexible queue, because you can perform certain key tasks without coding. Let’s review the following capabilities that Topics provide: Routing, Filtering, and the Dead-Letter Queue.  For a more in-depth overview of Queues, Topics, and Subscriptions, take a look at this MSDN article.


Routing is the ability to forward a message for consumption into one or more queues. The key concept is that there could be multiple destination queues (Subscriptions); if a Topic has 5 subscriptions, the message will be routed to all 5 subscriptions, for 5 applications to read from. Let’s say you are dealing with a system that enrolls students for classes; the system responsible for registering classes will send a single message into the Enrollment Topic to indicate that an enrollment has completed. Assuming 2 systems are interested in the Enrollment topic, such as the Financial Aid and the Student Housing systems, you would create two Subscriptions (one for each). This achieves loose coupling because the Registration module has no idea how many systems, if any, will be interested in receiving enrollment completion events.


Filtering is the ability to select which Subscription will receive the message sent to a Topic. To follow our previous example, we could say that the Student Housing system wants all messages (in which case no filter is defined), but the Financial Aid system is only interested in messages for which at least 2 classes were selected. You could create a filter on the queue by specifying that a custom property on the message, which contains the count of selected classes, must have a value of 2 or higher. This allows you to reduce the workload on the Financial system by filtering out messages that do not meet specific criteria.
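A filter can be modeled as an optional predicate attached to each subscription. Below is a toy Python model of the enrollment example (the class, property, and subscription names are mine, not Service Bus identifiers): Student Housing takes every message, while Financial Aid only receives messages whose class count is 2 or more.

```python
from collections import deque

class FilteredTopic:
    """Toy model of a Topic whose subscriptions may carry a filter
    predicate evaluated against message properties."""
    def __init__(self):
        self.subscriptions = []

    def subscribe(self, name, rule=None):
        queue = deque()
        self.subscriptions.append((name, rule, queue))
        return queue

    def send(self, message):
        # A subscription with no rule receives everything.
        for name, rule, queue in self.subscriptions:
            if rule is None or rule(message):
                queue.append(message)

topic = FilteredTopic()
housing = topic.subscribe("StudentHousing")                # no filter
fin_aid = topic.subscribe("FinancialAid",
                          rule=lambda m: m["classCount"] >= 2)

topic.send({"student": "A", "classCount": 1})  # housing only
topic.send({"student": "B", "classCount": 3})  # both subscriptions
```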

Dead-Letter Queue

A dead-letter queue is a system queue in which problem messages are stored. For example, if a message creates downstream issues, such as a system crash, it may never be removed from the queue properly by the consuming application. An internal counter tracks how many times a message has been delivered to consumers; if it has been delivered too many times, the Service Bus assumes that the message has a problem and moves it into the dead-letter queue automatically. This allows the Service Bus to remove poison messages that affect the stability of the bus.
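The delivery-count mechanism can be sketched as follows. This is a toy model: the class name and `MAX_DELIVERY_COUNT` are my own (the real threshold is a configurable Service Bus property), and it simulates a consumer that always abandons the message without completing it:

```python
from collections import deque

MAX_DELIVERY_COUNT = 10  # assumed limit; the real threshold is configurable

class SubscriptionWithDLQ:
    """Toy model: every delivery increments a counter, and a message
    delivered too many times without being completed is moved to the
    dead-letter queue instead of being put back on the main queue."""
    def __init__(self):
        self.queue = deque()
        self.dead_letter = deque()
        self.delivery_counts = {}

    def send(self, message_id):
        self.queue.append(message_id)

    def receive(self):
        """Simulates a consumer that crashes and never completes the
        message, so the message is abandoned and redelivered."""
        message_id = self.queue.popleft()
        count = self.delivery_counts.get(message_id, 0) + 1
        self.delivery_counts[message_id] = count
        if count > MAX_DELIVERY_COUNT:
            self.dead_letter.append(message_id)  # poison message
            return None
        self.queue.append(message_id)  # abandoned: back on the queue
        return message_id
```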


Implementing loosely coupled systems can have significant benefits for organizations that need to orchestrate multiple systems efficiently, without hard-coding their integration needs. The Service Bus has many more features, but these are the ones that caught my attention. I encourage you to learn more about the Service Bus for Windows Server here.


SQL Server IaaS and Retry Logic

Recently I had an interesting discussion with a customer and a question came up: should we still worry about Retry Handling in our application code when our SQL Server runs in virtual machines in an Infrastructure as a Service (IaaS) implementation?

More about the question

First, let’s review this question in more detail.  Let’s assume you currently run an application on virtual machines (or even physical machines) hosted at your favorite hosting provider, and you are interested in moving this application into the Microsoft cloud (Microsoft Azure). Your decision is to keep as much of your current architecture in place as possible, and move your application “as-is” into virtual machines in Azure; depending on who you talk to, you probably heard that moving into IaaS (in Azure or not) is a simple forklift. Except that your application depends on SQL Server, and now you have a choice to make: will your database server run on a virtual machine (IaaS), or on the platform-as-a-service SQL Database (PaaS)?  For the remainder of this discussion I will assume that you “can” go either way; that’s not always true, because some applications use features that are only available in SQL Server IaaS (although the gap is now relatively small between SQL Server and SQL Database).

What could go wrong

SQL Database (PaaS) is an environment that is highly load balanced and can potentially fail over to other server nodes automatically and frequently (more frequently than with your current hosting provider). As a result, your application could experience more frequent disconnections. More often than not, applications are not designed to automatically retry their database requests when such disconnections occur. That’s because most of the time, when a disconnection happens it is usually a bad thing, and there are bigger problems to solve (such as a hard drive failure). However, in the cloud (and specifically with PaaS databases), disconnections happen for a variety of reasons and are not necessarily an issue; they just happen quickly. As a result, implementing retry logic in your application code makes your application significantly more robust in the cloud, and more resilient to transient connection issues.

However, applications that use SQL Server in VMs (IaaS) in Microsoft Azure may also experience random disconnections. Although there are no published metrics comparing the availability of VMs to PaaS implementations, VMs are bound to restart at some point (due to host O/S upgrades, for example) or to suffer rack failures, causing downtime of your SQL Server instance (or a failover event if you run in a cluster). While VMs in Microsoft Azure that run in a load-balanced mode can have a service uptime that exceeds 99.95%, VMs running SQL Server are never load balanced; they can be clustered at best (but even in clustered configurations there are no uptime guarantees, since the VMs are not load balanced). VMs also depend on underlying storage that is prone to “throttling” (read this blog post about Azure Storage Throttling for more information), which can also induce temporary slowdowns or timeouts. So, for a variety of reasons, an application that runs against SQL Server in VMs can experience sporadic, temporary disconnections that warrant a retry at the application layer.

Retry Handling

As a result, regardless of your implementation decision (SQL Server IaaS or SQL Database PaaS), it is prudent (if not highly recommended) to modify your application code to include some form of retry logic; retry logic hides the actual connection failure, so a transient disconnection is perceived as a brief slowdown rather than an error. There are a few implementation models, but the most popular for the Microsoft Azure platform is the Transient Fault Handling Application Block (mostly used with ADO.NET code). This application block helps you implement two kinds of retries: connection and transaction retries. Connection retries are performed if your code is unable to connect to the database for a short period of time, and transaction retries attempt to resubmit a database request when the previous request failed for transient reasons. The framework is extensible and gives you the flexibility to decide whether you want to retry in a linear manner or through a form of exponential backoff.
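The retry pattern itself is straightforward. The sketch below illustrates the idea in Python with exponential backoff; it is not the Transient Fault Handling Application Block (which is a .NET library), and the function names and exception choices are my own:

```python
import time
import random

def execute_with_retry(operation, max_attempts=4, base_delay=0.5,
                       transient=(ConnectionError, TimeoutError)):
    """Run operation(), retrying on transient errors with exponential
    backoff; re-raise once the attempts are exhausted."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except transient:
            if attempt == max_attempts:
                raise  # give up: the error no longer looks transient
            # Exponential backoff with a little jitter between attempts.
            time.sleep(base_delay * (2 ** (attempt - 1)) * random.uniform(0.8, 1.2))
```

A call site simply wraps the database request, e.g. `execute_with_retry(lambda: run_query(connection))`, so a connection dropped during a failover is retried instead of surfacing as an error.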

Note that the Entity Framework version 6 and higher include automatic retry policies; see this article for more information.


Monitoring Flights and Sending SMS with Taskmatics Scheduler and Enzo Unified

Software developers need to build solutions quickly so that businesses remain competitive and agile. This blog post shows you how Taskmatics Scheduler (http://www.taskmatics.com) and Enzo Unified (http://www.enzounified.com) can help developers build and deploy solutions very quickly by removing two significant pain points: the learning curve of new APIs, and orchestrating Internet services.

Sample Solution

Let’s build a solution that checks incoming flights in Miami, Florida, and sends a text message (SMS) to one or more phone numbers when new flights arrive. To track flight arrivals, we will be using FlightAware’s (http://www.flightaware.com) service, which provides a REST API to retrieve flight information. To send SMS messages, we will be using Twilio’s (http://www.twilio.com) service, which provides an API for sending messages as well.

To remove the learning curve of these APIs, we used Enzo Unified, a Backend as a Service (BaaS) platform that enables the consumption of services through native SQL statements. Enzo Unified abstracts communication and simplifies development against a large number of internal systems and Internet services. In this example, Enzo Unified is hosted on the Microsoft Azure platform for scalability and operational efficiency.

To orchestrate and schedule the solution, we used the Taskmatics Scheduler platform. Taskmatics calls into your custom code written in .NET on a schedule that you specify, which is configured to connect to Enzo Unified in the cloud. The call to Enzo Unified is made using ADO.NET by sending native SQL statements to pull information from FlightAware, and send an SMS message through Twilio. At a high level, the solution looks like this:


Figure 1 – High Level call sequence between Taskmatics Scheduler and Enzo Unified

How To Call FlightAware and Twilio with Enzo Unified

Developers can call Enzo Unified using a REST interface, or a native SQL interface. In this example, the developer uses the SQL interface, leveraging ADO.NET. The following code connects to Enzo Unified as a database endpoint using the SqlConnection class, and sends a command to fetch flights from a specific airport code using an SqlCommand object. Fetching FlightAware data is as simple as calling the “Arrived” stored procedure against the “flightaware” database schema.

var results = new List<ArrivedFlightInfo>();

// Connect to Enzo Unified using SqlConnection
using (var connection = new SqlConnection(parameters.EnzoConnectionString))
// Prepare the call to FlightAware’s Arrived procedure
using (var command = new SqlCommand("flightaware.arrived", connection))
{
    command.CommandType = System.Data.CommandType.StoredProcedure;
    command.Parameters.Add(new SqlParameter("airport", airportCode));
    command.Parameters.Add(new SqlParameter("count", 10));
    command.Parameters.Add(new SqlParameter("type", "airline"));

    // Open the connection before executing the command
    connection.Open();

    // Call FlightAware’s Arrived procedure
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            results.Add(new ArrivedFlightInfo
            {
                Ident = (String)reader["ident"],
                AircraftType = (String)reader["aircrafttype"],
                OriginICAO = (String)reader["origin"],
                OriginName = (String)reader["originName"],
                DestinationName = (String)reader["destinationName"],
                DestinationCity = (String)reader["destinationCity"]
                // ... additional code removed for clarity...
            });
        }
    }
}
Calling Twilio is just as easy. A simple ADO.NET call to the SendSMS stored procedure in the “Twilio” schema is all that’s needed (the code is simplified to show the relevant part of the call).

// Establish a connection to Enzo Unified
using (var connection = new SqlConnection(parameters.EnzoConnectionString))
using (var command = new SqlCommand("twilio.sendsms", connection))
{
    command.CommandType = System.Data.CommandType.StoredProcedure;
    command.Parameters.Add(new SqlParameter("phones", phoneNumbers));
    command.Parameters.Add(new SqlParameter("message", smsMessage));

    // Open the connection before executing the command
    connection.Open();

    // Call Twilio’s SendSMS method
    command.ExecuteNonQuery();
}
If you inspect the above code carefully, you will notice that it does not reference the APIs of FlightAware or Twilio. Indeed, calling both FlightAware and Twilio was done using ADO.NET calls against Enzo Unified; because Enzo Unified behaves like a native database server (without the need to install special ODBC drivers), authenticating, making the actual API calls, and interpreting the REST results was entirely abstracted away from the developer, and replaced by an SQL interface, which dramatically increases developer productivity. Database developers can call Enzo Unified directly to test FlightAware and Twilio using SQL Server Management Studio (SSMS). The following picture shows the results of calling Enzo Unified from SSMS to retrieve arrived flights from FlightAware.


Figure 2 – Calling the FlightAware service using simple SQL syntax in SQL Server Management Studio

Sending an SMS text message using Twilio is just as simple using SSMS:


Figure 3 – Calling the Twilio service using simple SQL syntax in SQL Server Management Studio

How To Schedule The Call With Taskmatics Scheduler

In order to run and schedule this code, we are using Taskmatics Scheduler, which provides an enterprise grade scheduling and monitoring platform. When a class written in .NET inherits from the Taskmatics.Scheduler.Core.TaskBase class, it becomes automatically available as a custom task inside the Taskmatics Scheduler user interface. This means that a .NET library can easily be scheduled without writing additional code. Furthermore, marking the custom class with the InputParameters attribute provides a simple way to specify input parameters (such as the airport code to monitor, and the phone numbers to call) for your task through the Taskmatics user interface.

The following simplified code shows how a custom task class is created so that it can be hosted inside the Taskmatics Scheduler platform. Calling Context.Logger.Log gives developers the ability to log information directly to Taskmatics Scheduler for troubleshooting purposes.

namespace Taskmatics.EnzoUnified.FlightTracker
{
    // Mark this class so it is visible in the Taskmatics interface
    [InputParameters(typeof(FlightNotificationParameters))]
    public class FlightNotificationTask : TaskBase
    {
        // Override the Execute method called by Taskmatics on a schedule
        protected override void Execute()
        {
            // Retrieve parameters as specified inside Taskmatics
            var parameters = (FlightNotificationParameters)Context.Parameters;

            // Invoke method that calls FlightAware through Enzo Unified
            var arrivedFlights = GetArrivedFlights(parameters);

            // Do more work here… such as identify new arrivals
            var newFlights = FlightCache.FilterNewArrivals(arrivedFlights);

            // Do we have new arrivals since the last call?
            if (newFlights.Count > 0)
            {
                // Invoke method that calls Twilio through Enzo Unified
                var results = SendArrivedFlightsViaSMS(newFlights, parameters);

                // Update cache so these flights won’t be sent through SMS again
            }
            else
            {
                Context.Logger.Log("SMS phase skipped due to no new arrivals.");
            }

            Context.Logger.Log("Job execution complete.");
        }
    }
}

Installing the task into the Taskmatics Scheduler platform is very straightforward. Log into the user interface and create a definition for the flight tracker task. This step allows you to import your library into the system to serve as a template for the new scheduled task that we will create next.


Figure 4 - Import your custom task as a definition

Once you have created your definition, go to the “Scheduled Tasks” section of the user interface, and create the task by selecting the definition that you just created from the Task dropdown. This is also where you will schedule the time and frequency that the task will run as well as configure the input parameters for the task.


Figure 5 - Schedule your custom task to run on the days and times you specify.


Figure 6 - Configure the parameters for the scheduled task.

Finally, from the Dashboard screen, you can run your task manually and watch the output live, or look at a past execution of the task to see the outcome and logs from that run. In the image below, you can see the execution of the Flight Tracking task where we monitored recent arrivals into the Miami International Airport (KMIA).


Figure 7 - Review and analyze previous task executions or watch your tasks live as they run.


This blog post shows how developers can easily build integrated solutions without having to learn complex APIs using simple SQL statements, thanks to Enzo Unified’s BaaS platform. In addition, developers can easily orchestrate and schedule their libraries using the Taskmatics Scheduler platform. Combining the strengths of Enzo Unified and Taskmatics, organizations can reap the following benefits:

  • Rapid application development by removing the learning curve associated with APIs
  • Reduced testing and simple deployment by leveraging already tested services
  • Service orchestration spanning Internet services and on-premises systems
  • Enterprise grade scheduling and monitoring

You can download the full sample project on GitHub here: https://github.com/taskmatics-45/EnzoUnified-FlightTracking

About Blue Syntax Consulting

Our mission is to make your business successful through the technologies we build, to create innovative solutions that are relevant to the technical community, and to help your company adopt cloud computing where it makes sense. We are now making APIs irrelevant with Enzo® Unified. For more information about Enzo Unified and how developers can access services easily using SQL statements or a simple REST interface, visit http://www.enzounified.com or contact Blue Syntax Consulting at info@bluesyntaxconsulting.com.

About Taskmatics

Taskmatics was founded by a group of developers looking to improve the productivity of their peers. Their flagship application, Taskmatics Scheduler, aims to boost developer productivity and reduce the effort involved in creating consistent and scalable tasks while providing a centralized user interface to manage all aspects of your task automation. For more information and a free 90-day trial, visit http://taskmatics.com or email us at info@taskmatics.com.