Developers, Get Ready for Windows 10

The wave of Windows 10 development has already started. With many cool features, and the Universal Windows Platform (with APIs that are guaranteed to be present on all devices running Windows 10), you can start your engines and kick the tires! Here are a few resources you may find useful:

Visual Studio 2015 Download:   https://www.visualstudio.com/?Wt.mc_id=DX_MVP4030577

Here is an overview of what’s new with Windows 10 for developers:  https://dev.windows.com/en-us/getstarted/whats-new-windows-10/?Wt.mc_ic=dx_MVP4030577

Here is how to get started with Windows 10 app development:  https://dev.windows.com/en-us/getstarted/?Wt.mc_ic=dx_MVP4030577

Last but not least, here are a few videos on the Microsoft Virtual Academy you should check out:  https://www.microsoftvirtualacademy.com/en-US/training-courses/getting-started-with-windows-10-for-it-professionals-10629/?Wt.mc_ic=dx_MVP4030577

Let’s get started!

Sample Pricing Comparison (2): Amazon AWS and Microsoft Azure

A couple of years ago, I wrote a blog post about Amazon and Microsoft Azure pricing (here) because I was curious about the topic. Pricing being such a volatile and complex topic, this post is a refresher based on current 2015 pricing, using the same assumptions, in an attempt to measure how the pricing models of both Amazon AWS and Microsoft Azure have evolved.

Scenario and Assumptions

In this blog post, I will use requirements similar to those stated two years ago, with two exceptions: the outbound Internet traffic (from 1TB to 1/2TB of egress data transfer) and the database edition (from Enterprise to Standard). The Internet traffic changed because the Microsoft Azure pricing calculator caps outbound traffic at 1/2TB. The database edition changed to keep the Amazon monthly expenditure roughly in line with the figure from two years ago; note that in 2015, Amazon’s pricing for SQL Server Enterprise Edition appears to be significantly higher than it was in 2013, according to my analysis. If we kept the Enterprise Edition of SQL Server as a requirement, Amazon would become several times more expensive than Microsoft Azure.

The updated requirements are:

  • SQL Server database, Standard Edition, 10GB of storage, 1CPU, 1 million requests, 10GB per month of data tx
  • 10 websites running ASP.NET, 1CPU, 1/2 TB of data tx out to the Internet per month
  • 2 Middle-tier Servers running .NET, 2CPUs
  • Reporting Services - 10 reports run daily, 1GB of data out to Internet per month

The general guidelines for pricing comparison remain the same as well:

  • Use License-free model as much as possible
  • Use equivalent service configuration as much as possible
  • Ignore temporary/promotional offers
  • Use North America pricing
  • The SQL Server database can run in Microsoft Azure SQL Database (SQL Database for short) for comparison purposes

The above assumptions and guidelines ensure that the comparison is as close as possible between Amazon AWS and Microsoft Azure.

Amazon AWS Pricing

Amazon’s pricing has dropped considerably (by 48%) from two years ago, although the database service has been reduced to the Standard Edition. Downgrading the database to the Standard Edition, however, is not very significant for this analysis, since most of the features of the Standard Edition are similar to the Azure SQL Database offering; nevertheless, it is an important change in requirements, and the database downgrade could impact some customers. I am also keeping the EC2 offering for the web hosting component and the middle-tier servers. The operating cost of the selected configuration is $954 per month, down from $1,832 in 2013.

Microsoft Azure Pricing

Generally speaking, the total hosting cost for this solution on Microsoft Azure has also been reduced significantly (by 38%), which is great news. Competition between the vendors is driving costs down and helping refine their offerings.

A significant driver of the Microsoft pricing was, and remains, Azure SQL Database. The database offering has changed significantly since 2013 because there are now performance guarantees. Microsoft expresses its performance levels in terms of DTUs (Database Throughput Units), an overall performance level based on a blend of I/O, memory, and CPU consumption. As a result, it is not possible to establish a clear link between the expected performance of Amazon’s offering and Microsoft’s, since the database performance requirements of an application can vary greatly and Microsoft’s performance levels are based on a mix of resource consumption. I selected a P1 level for Azure SQL Database, which should be close to the equivalent Amazon offering; this level provides up to 500GB of database storage (there is no way to request a P1 database with only 10GB of storage). Database availability is also important, and thanks to its automatic failover capabilities, the Azure SQL Database service appears to offer greater built-in recoverability than Amazon RDS’s standard offering. Note that the VMs used in Azure are slightly underpowered compared to Amazon’s, since they offer less RAM, but they are the closest configurations I could find at this time.

Conclusion

As we can see from the above results, the two vendors’ offerings, while similar, continue to diverge. In some cases Amazon is more granular, for example by allowing customers to choose specific IOPS levels for SQL Server, while Microsoft focuses on built-in capabilities, such as automated failover of the database server. In some cases the Amazon offering provides a better configuration (such as more RAM on the web servers), and in other cases Microsoft provides superior service (such as no additional cost for load balancing, and enterprise features in the database layer). This means that the feature, performance, and availability surface offered by the two cloud vendors for a comparable configuration can vary greatly. However, given the differences outlined and the requirements stated above, both vendors provide a roughly similar level of service at a similar price point.

Compared to the 2013 pricing levels, it seems that both vendors were able to cut costs and reduce their price; in this specific configuration, Amazon reduced its pricing by 48% and Microsoft by 38%.

As a final note, it is important to realize that this analysis is purely theoretical and is only meant to provide general guidance on Amazon and Microsoft pricing; it is not meant to make any general pricing statement about either vendor, and it is limited to the requirements set previously. It should also be noted that if an application requires a higher database or server service level, the monthly cost could differ greatly from what is outlined in this post.

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Blue Syntax Consulting (http://www.bluesyntaxconsulting.com). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

Monitoring Alerts For Azure Virtual Machine

When hosting a service in the cloud, you may need to monitor and send alerts when specific conditions take place, such as when your service is not running. In this blog post, I will show you how to create a simple alert in Microsoft Azure that sends an email when no activity is taking place on a virtual machine. As a pre-requisite, you will need a Microsoft Azure account, and a Virtual Machine up and running.

To create an alert, select Management Services from the left bar (using the current portal). 

This will bring up a list of the alerts currently defined, if any. For example, you might see an alert defined for a Virtual Machine that has not yet been triggered (its status shows as Not Activated).

Let’s create a new alert that will activate when 0 bytes have been sent out by the Virtual Machine within a 15 minute period. In other words, we want to be alerted when no outbound network traffic has been detected for 15 minutes, which likely indicates a severe condition on the machine itself (either the machine is stopped, or its services are not running at all, since no traffic is detected).

Click the add icon to create an alert, select a name for the alert, a subscription, and Virtual Machine as the Source Type. Make sure you select the correct virtual machine in the Service Name list, then click the arrow to move to the second page.

Select Network Out as the metric to monitor, a ‘less than or equal’ condition, and 0 for Bytes. Select 15 minutes for the Evaluation Window. At this point you are almost done; you just need to indicate which actions to take when the condition is met. For our purposes, simply check ‘Send email to the service administrator and co-administrators’. Ensure ‘Enable rule’ is checked, and click OK to save the alert.

If your service experiences an issue, you will see that the alert has been ‘activated’ and flagged with a warning sign; in my case, an alert called ‘Custom Monitoring’ showed as an active alert.

You will also receive an email from Microsoft Azure when:

  • The condition has been met (and the alert is active)
  • The condition is no longer met (and the alert has been resolved)

You can monitor other services in Microsoft Azure the same way and become aware when serious issues are affecting your services. Although this alerting mechanism does not help you understand the root cause of the problem, you can use it as a mechanism to proactively resolve service issues.

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Blue Syntax Consulting (http://www.bluesyntaxconsulting.com). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

What is the Service Bus for Windows Server?

As programmers, we can build bridges between systems in two ways: point-to-point, or loosely coupled. Within each design paradigm, additional options are available to us, such as which tools and platforms we use, and how we actually perform the integration, for example whether or not we need to split large workloads into smaller chunks. Generally speaking, the methods we choose to perform the work (also known as integration patterns) can be implemented with most technologies; however, some technologies make certain tasks easier than others. To this end, let’s take a look at the Service Bus for Windows Server (which I will simply call the Service Bus going forward), a platform enabling loosely coupled integrations, and some of its major features.

About the Service Bus

The Service Bus is a messaging technology that you can install and configure on your own virtual machines. You can think of it as a messaging technology that sits between MSMQ and BizTalk: it is similar to MSMQ in the sense that it deals with messages, and it is similar to BizTalk in the sense that it is a publish/subscribe framework for distributing workloads and processes. Of course, BizTalk is a full-blown integration platform that allows you to design integration workflows, which is not the case with the Service Bus by itself. However, there was a technology gap between MSMQ and BizTalk, and the Service Bus fills it. Its first incarnation was in Microsoft Azure, as the Azure Service Bus. The on-premises version is called the Service Bus for Windows Server, and it can be configured using PowerShell scripts, or through a user interface very similar to the Microsoft Azure portal by installing the Windows Azure Pack. If you want to install and manage your Service Bus with the Windows Azure Pack, I recommend you first install the Azure Pack, create a Tenant and a Subscription, and then install the Service Bus using the Configuration Wizard (where you will be able to specify the tenant account you created earlier). This will make it easier to configure your Service Bus later.

In the world of the Service Bus, a Topic is a receiving channel for messages, and Subscriptions are outgoing channels that programs read messages from. When you configure a Topic, you usually configure at least one Subscription.  A program sends messages into a Topic, which are moved into the Subscriptions defined, and other programs read the messages from the Subscription.  In other words, a Subscription is also a queue.
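
To make this concrete, here is a minimal C# sketch of the flow just described, using the Microsoft.ServiceBus client library (the WindowsAzure.ServiceBus NuGet package), which works against both the Azure Service Bus and the Service Bus for Windows Server; the connection string and the Topic and Subscription names are placeholders.

using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

// Placeholder connection string; for the Service Bus for Windows Server it points to your farm endpoint
string connectionString = "YOUR_SERVICE_BUS_CONNECTION_STRING";

// Create the Topic (the receiving channel) and one Subscription (an outgoing channel) if they do not exist
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
if (!namespaceManager.TopicExists("SampleTopic"))
    namespaceManager.CreateTopic("SampleTopic");
if (!namespaceManager.SubscriptionExists("SampleTopic", "SampleSubscription"))
    namespaceManager.CreateSubscription("SampleTopic", "SampleSubscription");

// Send a message into the Topic
var topicClient = TopicClient.CreateFromConnectionString(connectionString, "SampleTopic");
topicClient.Send(new BrokeredMessage("Hello from the Service Bus"));

// Read the message back from the Subscription, which behaves like a queue
var subscriptionClient = SubscriptionClient.CreateFromConnectionString(connectionString, "SampleTopic", "SampleSubscription");
BrokeredMessage message = subscriptionClient.Receive(TimeSpan.FromSeconds(10));
if (message != null)
{
    Console.WriteLine(message.GetBody<string>());
    message.Complete();   // removes the message from the Subscription
}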

Major Features

The Service Bus comes with very interesting capabilities, some of which are detailed below. At a high level, you can use the Service Bus in two ways: as a plain messaging platform (much like MSMQ), or as a Pub/Sub implementation in which one receiving channel can forward messages to multiple queues (the key foundation of a service bus). The first mode uses Queues, and the second uses Topics. You can think of a Topic as a more flexible queue, because you can perform certain key tasks without coding. Let’s review the following capabilities of Topics: Routing, Filtering, and the Dead-Letter Queue. For a more in-depth overview of Queues, Topics, and Subscriptions, take a look at this MSDN article.

Routing

Routing is the ability to forward a message for consumption into one or more queues. The key concept is that there could be multiple destination queues (Subscriptions); if a Topic has 5 subscriptions, the message will be routed to all 5 subscriptions, for 5 applications to read from. Let’s say you are dealing with a system that enrolls students for classes; the system responsible for registering classes will send a single message into the Enrollment Topic to indicate that an enrollment has completed. Assuming 2 systems are interested in the Enrollment topic, such as the Financial Aid and the Student Housing systems, you would create two Subscriptions (one for each). This achieves loose coupling because the Registration module has no idea how many systems, if any, will be interested in receiving enrollment completion events.

Filtering

Filtering is the ability to select which Subscription will receive a message sent to a Topic. To follow our previous example, we could say that the Student Housing system wants all messages (in which case no filter is defined), but the Financial Aid system is only interested in messages for which at least 2 classes were selected. You could create a filter on the Financial Aid Subscription specifying that a custom property on the message, which contains the number of selected classes, must have a value of 2 or higher. This reduces the workload on the Financial Aid system by filtering out messages that do not meet its criteria.
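
As an illustration, here is a minimal sketch of such a filter using the same client library as in the earlier Service Bus snippet (namespaceManager and connectionString are reused from it); the ClassCount property name is an assumption, and the sender must set that property on the message for the filter to evaluate it.

// Make sure the Enrollment Topic exists
if (!namespaceManager.TopicExists("Enrollment"))
    namespaceManager.CreateTopic("Enrollment");

// The Student Housing Subscription uses the default filter and receives every message
namespaceManager.CreateSubscription("Enrollment", "StudentHousing");

// The Financial Aid Subscription only receives messages whose custom
// ClassCount property is 2 or higher
namespaceManager.CreateSubscription("Enrollment", "FinancialAid",
    new SqlFilter("ClassCount >= 2"));

// On the sending side, the custom property is set before the message is sent to the Topic
var enrollmentClient = TopicClient.CreateFromConnectionString(connectionString, "Enrollment");
var enrollment = new BrokeredMessage("Enrollment completed for student 12345");
enrollment.Properties["ClassCount"] = 3;
enrollmentClient.Send(enrollment);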

Dead-Letter Queue

A dead-letter queue is a system queue in which problem messages are stored. For example, if a message creates downstream issues, such as a system crash, it may never be removed from the queue properly by the consuming application. An internal counter tracks how many times a message has been delivered to consumers; if it has been delivered too many times, the Service Bus assumes that the message has a problem and moves it into the dead-letter queue automatically. This allows the Service Bus to remove poison messages that affect the stability of the bus.
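
Here is a minimal sketch of inspecting a Subscription’s dead-letter queue with the same client library (connectionString is reused from the earlier sketch); the delivery threshold itself is controlled by the MaxDeliveryCount property of the Subscription.

// Once a message has been delivered MaxDeliveryCount times without being
// completed, the Service Bus dead-letters it automatically.
var factory = MessagingFactory.CreateFromConnectionString(connectionString);
var deadLetterPath = SubscriptionClient.FormatDeadLetterPath("Enrollment", "FinancialAid");
var deadLetterReceiver = factory.CreateMessageReceiver(deadLetterPath);

BrokeredMessage poison = deadLetterReceiver.Receive(TimeSpan.FromSeconds(10));
if (poison != null)
{
    Console.WriteLine("Dead-lettered message: " + poison.MessageId);
    poison.Complete();   // removes the message from the dead-letter queue
}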

Conclusion

Implementing loosely coupled systems can have significant benefits for organizations that need to orchestrate multiple systems efficiently, without hard-coding their integration needs. The Service Bus has many more features, but these are the ones that caught my attention. I encourage you to learn more about the Service Bus for Windows Server here.

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Blue Syntax Consulting (http://www.bluesyntaxconsulting.com). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

SQL Server IaaS and Retry Logic

Recently I had an interesting discussion with a customer and a question came up: should we still worry about Retry Handling in our application code when our SQL Server runs in virtual machines in an Infrastructure as a Service (IaaS) implementation?

More about the question

First, let’s review this question in more detail. Let’s assume you currently run an application on virtual machines (or even physical machines) hosted at your favorite hosting provider, and you are interested in moving this application into the Microsoft cloud (Microsoft Azure). Your decision is to keep as much of your current architecture in place as possible and move your application “as-is” to virtual machines in Azure; depending on who you talk to, you have probably heard that moving to IaaS (in Azure or elsewhere) is a simple forklift. Except that your application depends on SQL Server, and now you have a choice to make: will your database server run on a virtual machine (IaaS), or on the platform as a service offering, Azure SQL Database (PaaS)? For the remainder of this discussion I will assume that you can go either way; that’s not always true, because some applications use features that are only available with SQL Server in IaaS (although the gap between SQL Server and SQL Database is now relatively small).

What could go wrong

SQL Database (PaaS) is a highly load-balanced environment that can fail over to other server nodes automatically and frequently (more frequently than with your current hosting provider). As a result, your application could experience more frequent disconnections. More often than not, applications are not designed to automatically retry their database requests when such disconnections occur. That’s because most of the time, when a disconnection happens, it is usually a bad thing and there are bigger problems to solve (such as a hard drive failure). In the cloud, however (and specifically with PaaS databases), disconnections happen for a variety of reasons and are not necessarily an issue; they just happen quickly. As a result, implementing retry logic in your application code makes your application significantly more robust in the cloud and more resilient to transient connection issues (more on this in the Retry Handling section below).

However, applications that use SQL Server in VMs (IaaS) in Microsoft Azure may also experience random disconnections. Although there are no published metrics comparing the availability of VMs to that of PaaS implementations, VMs are bound to restart at some point (due to host OS upgrades or rack failures, for example), causing downtime of your SQL Server instance (or a failover event if you run in a cluster). While VMs in Microsoft Azure that run in a load-balanced configuration can have a service uptime that exceeds 99.95%, VMs running SQL Server are never load-balanced; they can be clustered at best (and even in clustered configurations there are no uptime guarantees, since the VMs are not load-balanced). VMs also depend on underlying storage that is prone to throttling (read this blog post about Azure Storage Throttling for more information), which can also induce temporary slowdowns or timeouts. So for a variety of reasons, an application that uses SQL Server in VMs can experience sporadic, temporary disconnections that warrant a retry at the application layer.

Retry Handling

As a result, regardless of your implementation decision (SQL Server IaaS or SQL Database PaaS), it is prudent (if not highly recommended) to modify your application code to include some form of retry logic; with retry logic in place, a transient connection failure surfaces as a brief slowdown rather than an error. There are a few implementation models, but the most popular for the Microsoft Azure platform is the Transient Fault Handling Application Block (mostly used with ADO.NET code). This application block helps you implement two kinds of retries: connection retries and transaction retries. Connection retries are performed if your code is unable to connect to the database for a short period of time, and transaction retries resubmit a database request when the previous request failed for transient reasons. The framework is extensible and gives you the flexibility to decide whether you want to retry in a linear manner or through a form of exponential back-off.
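
As an illustration, here is a minimal sketch using the Transient Fault Handling Application Block (the EnterpriseLibrary.TransientFaultHandling and EnterpriseLibrary.TransientFaultHandling.Data NuGet packages); the connection string, query, and retry settings below are placeholders to adjust for your own application.

using System;
using System.Data.SqlClient;
using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;

// Exponential back-off: up to 5 attempts, starting around 1 second and capped at 30 seconds
var strategy = new ExponentialBackoff("SqlRetries", 5,
    TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(2));

// The detection strategy decides which SQL Server errors are transient and worth retrying
var retryPolicy = new RetryPolicy<SqlDatabaseTransientErrorDetectionStrategy>(strategy);
retryPolicy.Retrying += (sender, args) =>
    Console.WriteLine("Retry {0}: {1}", args.CurrentRetryCount, args.LastException.Message);

// Both the connection attempt and the command execution are wrapped in the retry policy
retryPolicy.ExecuteAction(() =>
{
    using (var connection = new SqlConnection("YOUR_CONNECTION_STRING"))
    using (var command = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", connection))
    {
        connection.Open();
        Console.WriteLine(command.ExecuteScalar());
    }
});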

Note that Entity Framework version 6 and higher includes automatic retry policies; see this article for more information.
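
For completeness, here is a minimal sketch of the Entity Framework 6 approach: a DbConfiguration class placed in the same assembly as your DbContext is picked up automatically; the retry count and maximum delay below are placeholder values.

using System;
using System.Data.Entity;
using System.Data.Entity.SqlServer;

// EF6 discovers this class automatically and retries transient SQL failures
public class AppDbConfiguration : DbConfiguration
{
    public AppDbConfiguration()
    {
        // Retry up to 5 times, waiting at most 10 seconds between attempts
        SetExecutionStrategy("System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy(5, TimeSpan.FromSeconds(10)));
    }
}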

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Blue Syntax Consulting (http://www.bluesyntaxconsulting.com). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

Monitoring Flights and Sending SMS with Taskmatics Scheduler and Enzo Unified

Software developers need to build solutions quickly so that businesses remain competitive and agile. This blog post shows you how Taskmatics Scheduler (http://www.taskmatics.com) and Enzo Unified (http://www.enzounified.com) can help developers build and deploy solutions very quickly by removing two significant pain points: the learning curve of new APIs, and orchestrating Internet services.

Sample Solution

Let’s build a solution that checks incoming flights in Miami, Florida, and sends a text message (SMS) to one or more phone numbers when new flights arrive. To track flight arrivals, we will be using FlightAware’s (http://www.flightaware.com) service, which provides a REST API to retrieve flight information. To send SMS messages, we will be using Twilio’s (http://www.twilio.com) service, which provides an API for sending messages.

To remove the learning curve of these APIs, we used Enzo Unified, a Backend as a Service (BaaS) platform that enables the consumption of services through native SQL statements. Enzo Unified abstracts communication and simplifies development against a large number of internal systems and Internet services. In this example, Enzo Unified is hosted on the Microsoft Azure platform for scalability and operational efficiency.

To orchestrate and schedule the solution, we used the Taskmatics Scheduler platform. Taskmatics calls into your custom .NET code on a schedule that you specify; that code is configured to connect to Enzo Unified in the cloud. The call to Enzo Unified is made using ADO.NET, sending native SQL statements to pull information from FlightAware and to send an SMS message through Twilio. At a high level, the solution looks like this:

Figure 1 – High Level call sequence between Taskmatics Scheduler and Enzo Unified

How To Call FlightAware and Twilio with Enzo Unified

Developers can call Enzo Unified using a REST interface, or a native SQL interface. In this example, the developer uses the SQL interface, leveraging ADO.NET. The following code connects to Enzo Unified as a database endpoint using the SqlConnection class, and sends a command to fetch flights from a specific airport code using an SqlCommand object. Fetching FlightAware data is as simple as calling the “Arrived” stored procedure against the “flightaware” database schema.

var results = new List<ArrivedFlightInfo>();

// Connect to Enzo Unified using SqlConnection
using (var connection = new SqlConnection(parameters.EnzoConnectionString))
  // Prepare call to FlightAware’s Arrived procedure
  using (var command = new SqlCommand("flightaware.arrived", connection))
  {
    connection.Open();
    command.CommandType = System.Data.CommandType.StoredProcedure;
    command.Parameters.Add(new SqlParameter("airport", airportCode));
    command.Parameters.Add(new SqlParameter("count", 10));
    command.Parameters.Add(new SqlParameter("type", "airline"));

    // Call FlightAware’s Arrived procedure
    using (var reader = command.ExecuteReader())
      while (reader.Read())
        results.Add(new ArrivedFlightInfo
        {
          Ident = (String)reader["ident"],
          AircraftType = (String)reader["aircrafttype"],
          OriginICAO = (String)reader["origin"],
          OriginName = (String)reader["originName"],
          DestinationName = (String)reader["destinationName"],
          DestinationCity = (String)reader["destinationCity"]
          // ... additional code removed for clarity...
        });
  }

Calling Twilio is just as easy. A simple ADO.NET call to the SendSMS stored procedure in the “Twilio” schema is all that’s needed (the code is simplified to show the relevant part of the call).

// Establish a connection to Enzo Unified
using (var connection = new SqlConnection(parameters.EnzoConnectionString))
  using (var command = new SqlCommand("twilio.sendsms", connection))
  {
    connection.Open();
    command.CommandType = System.Data.CommandType.StoredProcedure;
    command.Parameters.Add(new SqlParameter("phones", phoneNumbers));
    command.Parameters.Add(new SqlParameter("message", smsMessage));

    // Call Twilio’s SendSMS method
    command.ExecuteReader();
  }

If you inspect the above code carefully, you will notice that it does not reference the APIs of FlightAware or Twilio. Indeed, both FlightAware and Twilio are called using ADO.NET against Enzo Unified; because Enzo Unified behaves like a native database server (without the need to install special ODBC drivers), authentication, the actual API calls, and the interpretation of the REST results are entirely abstracted away from the developer and replaced by a SQL interface, which dramatically increases developer productivity. Database developers can call Enzo Unified directly to test FlightAware and Twilio using SQL Server Management Studio (SSMS). The following picture shows the results of calling Enzo Unified from SSMS to retrieve arrived flights from FlightAware.

Figure 2 – Calling the FlightAware service using simple SQL syntax in SQL Server Management Studio

Sending an SMS text message using Twilio is just as simple from SSMS:

Figure 3 – Calling the Twilio service using simple SQL syntax in SQL Server Management Studio

How To Schedule The Call With Taskmatics Scheduler

In order to run and schedule this code, we are using Taskmatics Scheduler, which provides an enterprise grade scheduling and monitoring platform. When a class written in .NET inherits from the Taskmatics.Scheduler.Core.TaskBase class, it becomes automatically available as a custom task inside the Taskmatics Scheduler user interface. This means that a .NET library can easily be scheduled without writing additional code. Furthermore, marking the custom class with the InputParameters attribute provides a simple way to specify input parameters (such as the airport code to monitor, and the phone numbers to call) for your task through the Taskmatics user interface.

The following simplified code shows how a custom task class is created so that it can be hosted inside the Taskmatics Scheduler platform. Calling Context.Logger.Log gives developers the ability to log information directly to Taskmatics Scheduler for troubleshooting purposes.

namespace Taskmatics.EnzoUnified.FlightTracker
{
    // Mark this class so it is visible in the Taskmatics interface
    [InputParameters(typeof(FlightNotificationParameters))]
    public class FlightNotificationTask : TaskBase
    {
        // Override the Execute method called by Taskmatics on a schedule
        protected override void Execute()
        {
            // Retrieve parameters as specified inside Taskmatics
            var parameters = (FlightNotificationParameters)Context.Parameters;

            // Invoke method that calls FlightAware through Enzo Unified
            var arrivedFlights = GetArrivedFlights(parameters);

            // do more work here… such as identify new arrivals
            var newFlights = FlightCache.FilterNewArrivals(arrivedFlights);

            // Do we have new arrivals since last call?
            if (newFlights.Count > 0)
            {
                // Invoke method that calls Twilio through Enzo Unified
                var results = SendArrivedFlightsViaSMS(newFlights, parameters);

                // Update cache so these flights won’t be sent through SMS again
                FlightCache.SaveFlightsToCache(newFlights);
            }
            else
                Context.Logger.Log("SMS phase skipped due to no new arrivals.");

            Context.Logger.Log("Job execution complete.");
        }
    }
}

Installing the task into the Taskmatics Scheduler platform is very straightforward. Log into the user interface and create a definition for the flight tracker task. This step allows you to import your library into the system to serve as a template for the new scheduled task that we will create next.

Figure 4 - Import your custom task as a definition

Once you have created your definition, go to the “Scheduled Tasks” section of the user interface, and create the task by selecting the definition that you just created from the Task dropdown. This is also where you will schedule the time and frequency that the task will run as well as configure the input parameters for the task.

Figure 5 - Schedule your custom task to run on the days and times you specify.

Figure 6 - Configure the parameters for the scheduled task.

Finally, from the Dashboard screen, you can run your task manually and watch the output live, or look at a past execution of the task to see the outcome and logs from that run. In the image below, you can see the execution of the Flight Tracking task where we monitored recent arrivals into the Miami International Airport (KMIA).

Figure 7 - Review and analyze previous task executions or watch your tasks live as they run.

Conclusion

This blog post shows how developers can easily build integrated solutions using simple SQL statements, without having to learn complex APIs, thanks to Enzo Unified’s BaaS platform. In addition, developers can easily orchestrate and schedule their libraries using the Taskmatics Scheduler platform. By combining the strengths of Enzo Unified and Taskmatics, organizations can reap the following benefits:

  • Rapid application development by removing the learning curve associated with APIs
  • Reduced testing and simple deployment by leveraging already tested services
  • Service orchestration spanning Internet services and on-premises systems
  • Enterprise grade scheduling and monitoring

You can download the full sample project on GitHub here: https://github.com/taskmatics-45/EnzoUnified-FlightTracking

About Blue Syntax Consulting

Our mission is to make your business successful through the technologies we build, create innovative solutions that are relevant to the technical community, and help your company adopt cloud computing where it makes sense. We are now making APIs irrelevant with Enzo® Unified. For more information about Enzo Unified and how developers can access services easily using SQL statements or a simple REST interface, visit http://www.enzounified.com or contact Blue Syntax Consulting at info@bluesyntaxconsulting.com.

About Taskmatics

Taskmatics was founded by a group of developers looking to improve the productivity of their peers. Their flagship application, Taskmatics Scheduler, aims to boost developer productivity and reduce the effort involved in creating consistent and scalable tasks while providing a centralized user interface to manage all aspects of your task automation. For more information and a free 90-day trial, visit http://taskmatics.com or email us at info@taskmatics.com.

Copy Files From Your Local Computer To An Azure VM and Back

Do you need to copy files from your local workstation to an Azure Virtual Machine? There is no need to use FTP or to push files through cloud drives or blobs. All you need to do is access your local drives from your cloud VM and pull the files over. You can also use the same approach to pull files from your cloud VM to your local machine. Here is how it’s done.

First, log on to Microsoft Azure and browse to your Virtual Machine from the portal. Select your Virtual Machine and click the Connect button at the bottom (make sure the VM is actually selected, for example by clicking on its status field, so that the row shows with a darker blue background).

After clicking on Connect, you will be prompted to Open or Save the RDP file for the remote session to your VM. Click on Save, and click on Open folder once the save operation is complete.  This will open the directory where the RDP file was saved and automatically select the file for you.

Right-click on your RDP file, and choose Edit from the dropdown menu. You will see the Remote Desktop Connection configuration window. Click on the Local Resources tab, and in the Local devices and resources section, click on the More… button.

Expand Drives, check the local drives you want to make available from within your Virtual Machine (I selected my C drive), then click OK.

Finally, click the Connect button to log on to your Virtual Machine. Once logged on, you will see your local drive available from within Explorer as a mapped drive. You can now easily copy and move files between your Virtual Machine and your workstation. Because any RDP file can be edited in the same way, you can do the same with virtual machines hosted on other cloud platforms or on your own network.
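
As a side note, the Local Resources dialog simply writes a drive redirection setting into the .rdp file, which is a plain text file; if you prefer editing the file by hand, the relevant line looks similar to the one below (this example redirects the C drive; a value of * redirects all drives).

drivestoredirect:s:C:\;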


About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Blue Syntax Consulting (http://www.bluesyntaxconsulting.com). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

How To Create A PowerShell Script To Back Up Your Azure Blobs

In this post I will show you how easy it is to create a PowerShell script with Visual Studio that can back up your Azure Blobs. The same concept can be applied to backing up Azure Tables and SQL Database. To successfully follow this example, you will need a few things. First, we are coding in C# with Visual Studio 2012. The backup operation itself is rather complex, so we will be using the free API provided by Blue Syntax (http://www.bluesyntaxconsulting.com) to perform it; the PowerShell script will simply call the API.

Install And Configure Enzo Cloud Backup

As a pre-requisite, you will need to install the Enzo Cloud Backup tool and the Enzo Cloud Backup API Version 3. Both can be found on the Blue Syntax website (http://www.bluesyntaxconsulting.com/backup30.aspx). Click the Download button found on the product description page, and install both Enzo Cloud Backup and the Enzo Cloud Backup API.

Once installed, start Enzo Cloud Backup. You will see a Login page, where you will need to specify an Azure Storage Account in which Enzo will store its configuration information. It is recommended to use a Storage Account dedicated to Enzo. Once you have created the Storage Account in your Azure subscription, enter the Account Name and an Account Key on this screen to proceed. For detailed information on how to create an Azure Storage Account and access the Account Key, read this blog post: http://azure.microsoft.com/en-us/documentation/articles/storage-create-storage-account/.

Once logged in, you will need to register your product in order to obtain license keys; this is a very simple step and ensures that you get the keys needed to run this example (to register, start Enzo Cloud Backup and go to Help –> Request Permanent License Keys). There is no charge for the Community Edition of Enzo Cloud Backup, and the API comes at no charge as well. Once registered, you will receive an email with your license keys, so make sure you enter a valid email address when registering.

Create a Class Library Project

Now that the Enzo Cloud Backup tool is installed along with the API, let’s create a new project in Visual Studio 2012. The project type is a class library. I named this project PSEnzoBackup. Make sure you select a Visual C# project; the code provided further down is written in C#.

Configure Your Project

Once the project has been created, rename the Class1.cs file to BackupBlob.cs and make sure the class itself is also renamed to BackupBlob, so that your Solution Explorer shows BackupBlob.cs under the PSEnzoBackup project.

Once the class has been renamed, add the following references to your project:

  • System.Management.Automation
  • CloudBackupAPI
  • EnzoBackupAPI

You can find the Automation DLL on your development machine (usually under C:\Program Files (x86)\Reference Assemblies\Microsoft\WindowsPowerShell). The other two libraries can be found in the directory where you installed the Enzo Cloud Backup API (they are usually found in a subdirectory of C:\Program Files (x86)\BlueSyntax\).

Add A Backup Method

At this point, the project is ready for development. Paste the following code into your BackupBlob.cs file as-is.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Management.Automation;

namespace PSEnzoBackup
{
    [Cmdlet(VerbsCommon.New, "BlobBackup")]
    public class BackupBlob : PSCmdlet
    {
        private string[] _blobNames;

        [Parameter(
            Mandatory=true,
            ValueFromPipelineByPropertyName = true,
            ValueFromPipeline = true,
            Position = 0,
            HelpMessage = "The list of blobs to backup (container/blob) separated by a comma. Can use * as a wildcard."
            )
        ]
        [Alias("ListBlob")]
        public string[] BlobNames
        {
            get { return _blobNames; }
            set { _blobNames = value; }
        }

        protected override void ProcessRecord()
        {
            WriteObject("Starting the backup operation...");

            BSC.Backup.Helper.AzureBlobBackupHelper backup = new BSC.Backup.Helper.AzureBlobBackupHelper(
                "ENZO_STORAGE_ACCOUNT",
                "ENZO_STORAGE_KEY",
                false,
                "YOUR_API_LICENSE_KEY");
            backup.Location = BSC.Backup.Helper.DeviceLocation.LocalFile;
            backup.DeviceURI = @"c:\tmp\backupfile.bkp";
            backup.Override = true;
            backup.SpecificBlobs = _blobNames.ToList();
            backup.UseCloudAgent = false;

            string operationId = backup.Backup();

        }

    }
}

The ProcessRecord() method is a protected override of the method that is called when the PowerShell command runs. In it, we simply create an instance of the AzureBlobBackupHelper class; the constructor requires the account name and key of the Azure Storage Account used by Enzo Cloud Backup, and the API license key. Additional properties are available on the backup object; here we specify that a local backup file will be created and that any existing file will be overridden. The SpecificBlobs property is set to the list of blob names provided as parameters to the PowerShell command. The Backup() method returns an operation ID, which can be used with other Helper classes to check on the progress of the backup operation. We are hard-coding the account credentials and the name of the backup file, but it would be easy to expose them as parameters instead, as we did with the BlobNames parameter; see the sketch below.
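
For example, a hypothetical DeviceURI parameter (the name is mine, not part of the API) could be declared on the class the same way BlobNames is, and then assigned to backup.DeviceURI inside ProcessRecord() instead of the hard-coded path.

// Hypothetical additional parameter so the caller can choose the backup file location,
// e.g.: New-BlobBackup -BlobNames */* -DeviceURI "D:\backups\blobs.bkp"
[Parameter(Mandatory = false, HelpMessage = "Full path of the backup file to create.")]
public string DeviceURI { get; set; }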

Before compiling this code, you will need to replace a few things: ENZO_STORAGE_ACCOUNT and ENZO_STORAGE_KEY are the values used by the backup application when you log in (see the pre-requisites above). As indicated earlier, you must run the Enzo Backup application at least once to create the necessary configuration tables that the API uses; the Account Name and Account Key you use to log in to Enzo Backup are the ones you need to specify here. YOUR_API_LICENSE_KEY is the API license key you received by email when registering your product with Blue Syntax. Once you have specified those values, you can compile the code. Note the path where the DLL is created when you compile, as you will need it soon.

Test the PowerShell Script

Open a PowerShell command prompt and type the following command to load the PowerShell library we just compiled.

Import-Module "C:\YOUR_FULL_PATH_TO_PROJECT_DIRECTORY\PSEnzoBackup\bin\Debug\PSEnzoBackup.dll"

And finally, run the following command to back up all your blobs. The BlobNames parameter is a list of blob names (specified as ContainerName/BlobName) separated by commas. The list of blobs supports wildcards (*/* means all containers and all blobs). For more information about this parameter, and other properties available in this API, visit the online help here: http://www.bluesyntaxconsulting.com/EnzoCloudBackup30/APIBackupAB.aspx.

New-BlobBackup -BlobNames */*

You can easily wrap the other commands provided by the Enzo Cloud Backup API in a PowerShell module, including backing up Azure Tables and SQL Databases, and even restoring them. This example shows how to back up to disk, but you can also back up to blobs. To learn more about the API, visit http://www.bluesyntaxconsulting.com/Backup30API.aspx.

Download The Sample Code

You can download the sample code provided above. In order to run this project, you will need to update the project references and have PowerShell 3.0, the Microsoft Azure SDK, and the Enzo Cloud Backup API installed. Click here to download the sample project.

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Blue Syntax Consulting (http://www.bluesyntaxconsulting.com). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

Backing up Azure Blobs

As customers adopt the Microsoft Azure platform, the need to backup Azure Blobs is becoming increasingly important. That’s because Azure Blobs are used by both the Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) components of Microsoft Azure. In fact, Azure Blobs are also becoming more popular with local software applications that need a central location to store configuration information and application logs. Since Blobs are a form of data, there is a need to back them up.

Until now, the way to back up Azure Blobs was to copy the files to local hard drives, which many third-party tools currently provide. However, Azure Blobs carry additional information that files can’t, such as custom metadata, HTTP properties, and Access Control List (ACL) details providing security information on those blobs. And Azure Blobs are not the only components that need to be backed up: Azure Blob Containers are also important. They are the equivalent of directory structures, and they also carry metadata and ACL information. Existing third-party tools do not provide a mechanism to save this additional information.

I created a free utility called Enzo Cloud Backup, version 3, now available for download (http://www.bluesyntaxconsulting.com/). This tool allows you to perform backup and restore operations on Azure Blobs (in addition to Azure Tables and Azure SQL Database). It is designed to handle a very large number of blobs and containers. In addition, an API is also available at no charge so that you can programmatically initiate backup and restore operations.

The free edition provides all the features of the advanced edition, but it is limited in the number of backups that can be performed monthly. Give it a try; you will be surprised by how easy it is to use. And if you have any feedback, please send it to me!

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Blue Syntax Consulting (http://www.bluesyntaxconsulting.com). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

The Business Case for a Data Server

As a developer you are familiar with web servers and database servers. Both serve data, in different ways, and this creates an interesting challenge. Let’s face it: accessing data is hard. Data lives in databases, in XML documents, in PDF documents, in flat files, on FTP servers in proprietary formats, in cubes, in NoSQL stores… you name it. In addition to those storage formats, data is made available in a large variety of file formats, through a multitude of protocols, and serviced by an ever-increasing set of providers, each with its own authentication and authorization implementation. But the storage of data and the way to access it isn’t the only challenge. Who wants to access all this data? Developers, DBAs, report writers, systems integrators, consultants, managers, business users, office workers, applications…

What’s the real problem?

Accessing data is hard because of those three forces: storage formats, protocols, and consumers. Different consumers need the same data in different ways, because they consume data in different ways. When the data is needed by an application (such as a mobile app), it is best accessed using REST requests returning JSON documents, which is usually best accomplished by serving the data through a web server. But if the data is needed by a report (such as Excel, or a cube for analytical purposes), it is usually best accessed using SQL commands, which is usually best accomplished with a database server.

This creates an interesting challenge: who are the consumers for a given set of data, and through which protocol should it be accessed? This simple question is hard to answer because, for a given set of data, consumers change over time, but the data stores and the protocols do not. The challenge is usually resolved by hiring consultants (or spending a lot of time with internal resources) to build bridges that move or copy data from one storage format to the next, so it can be used by the consumers that need the data in a specific format; that’s why integration tools are so popular: let’s copy the data from point A to points B and C, so it can be used by consumers (business partners, reports, executives…). All this takes time and money. Copying NoSQL data into a relational database or a cube so it can be reported on takes effort. Extracting flat files from an FTP site daily and loading them into a database so they can be used by a web application requires building a complex set of programs to orchestrate this work.

As a result, the difficulty of making data access ubiquitous inhibits some companies from making timely decisions, because the necessary data is not immediately available in the right format at the right time. And as mentioned previously, the need for data in various formats is clear. How many deployments of SSIS (SQL Server Integration Services), Joomla, BizTalk, Informatica, scheduled jobs, and ETL processes that move files around are you aware of? Some are needed because complex transformations are necessary, but a vast number of those implementations are in place simply because the source data is in the wrong format. [Note: I am not saying these tools are unnecessary; I am naming them to outline the need for data movement in general.]

Introducing the Data Server

With this challenge continuing to grow with every new platform, API, consumer, and storage type, I suggest that a new technology be built, that I will simply call a Data Server, so that data can be serviced in real-time through virtually any protocol regardless of where the data comes from. In other words, it shouldn’t matter who needs the data and through which protocol; a mobile application needs data through REST/JSON? No problem. A report needs the exact same data through SQL? No problem. The same data, coming from the same source, should be accessible in real-time regardless of consumer preferences. If data were available in a ubiquitous manner, regardless of the protocol being used, a large number of integration routines would become obsolete, or would be largely simplified.

So what are the attributes of a Data Server?  It should be able to hide the complexities of the underlying data sources, and present data in a uniform way, through multiple protocols. For example, a Data Server would present Tweets through a REST/JSON interface and through an SQL interface (the same data in real-time; not a copy of the data). SharePoint lists could be made available through the same REST/JSON interface, or SQL as well. An FTP server could be accessed through REST/JSON and SQL too, and so would WMI, no-sql, CICS screens, flat files, SOAP endpoints… Regardless of the origin of the data, it would be serviced through a uniform response in the desired protocol.
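
To make the idea concrete, here is a purely hypothetical C# sketch (the server name, schema, and URL below are invented for illustration and do not refer to any existing product): two consumers read the same SharePoint list through a Data Server, one over SQL and one over REST/JSON, and neither one knows or cares where the data actually lives.

using System;
using System.Data.SqlClient;
using System.Net.Http;

// 1) A report or database developer reads the data through the SQL face of the Data Server
using (var connection = new SqlConnection("Data Source=MY_DATA_SERVER;User ID=USER;Password=PASSWORD"))
using (var command = new SqlCommand("SELECT Title, DueDate FROM sharepoint.tasks", connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
        while (reader.Read())
            Console.WriteLine(reader["Title"]);
}

// 2) A mobile or web application reads the exact same data through the REST/JSON face
using (var client = new HttpClient())
{
    string json = client.GetStringAsync("https://MY_DATA_SERVER/api/sharepoint/tasks").Result;
    Console.WriteLine(json);
}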

The Data Server could also abstract security by shielding the authentication and authorization details of the underlying source of data. This makes a lot of sense because most sources of data use different security protocols, or variations of published standards. For example, authenticating to Microsoft Bing, Azure Blobs, Google Maps, FTP, SharePoint, or Twilio is difficult because they all use different implementations. So abstracting authentication through a single REST layer or SQL interface, and adding a layer of authorization on top of these data endpoints, makes things much easier and more secure. It also becomes possible to monitor data consumption across private and public sources of data, which can be important in certain organizations.

Data Cross Concerns

A Data Server would also help in implementing data cross concerns that are not usually easy to configure (or use), such as caching, asynchronous processing, scheduling, logging and more. For example, caching becomes much more interesting through a data server because it doesn’t matter which interface is used to access the data; cached data sets can be made available to both REST/JSON and SQL interfaces at the same time, which means that the data needs to be cached only once and remains consistent no matter which consumer reads the data. 

Asynchronous processing is also an interesting cross concern; consumers can start a request without waiting for its completion, through REST or SQL equally. For example, a REST command could initiate an asynchronous request, and a SQL command could check the completion of the request and fetch the results. Since the protocol used by consumers becomes an implementation choice, the data takes center place in a Data Server. Accessing, managing, recycling and updating data through a vast array of data sources becomes protocol agnostic.

Conclusion

To accelerate data-intensive projects and to help organizations consume the data they need efficiently, data should be made available in a uniform way regardless of the client protocol, no matter where it comes from, so that it doesn’t matter anymore who needs to consume that data. By creating this level of abstraction, the authentication and authorization mechanism can be streamlined too, so that there is only one way to secure the information. And since a Data Server channels requests to a multitude of data endpoints, it becomes a hub where common data related concerns can be implemented, including caching, scheduling and asynchronous requests.

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Blue Syntax Consulting (http://www.bluesyntaxconsulting.com). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.