Microsoft Azure and Threat Modeling Your Apps

In this blog post I will introduce you to the Microsoft Threat Modeling Tool (TMT) and show how it can be used with the Microsoft Azure environment for application development and deployment. To do this, I will take a specific example, using the Enzo Cloud Backup tool my company is building, and show you how to perform a high-level threat model for a specific operation: a backup request.

Introduction

First things first… what is the Threat Modeling Tool (TMT)? It's a pretty cool piece of software that lets you lay down your application architecture, with its various dependencies and protocols; it becomes a canvas for understanding and identifying potential threats. The tool itself can provide recommendations and a list of checkpoints, but even if you decide to use it simply as an architecture diagramming tool, it is very effective at that. You can use TMT to create simple, system-level diagrams, or to dive into specific operations and document the data flows of your system. I refer to system-level diagrams as Level 0, and to data flow diagrams as Level 1. The deeper you go, the deeper the analysis. Interestingly enough, I am starting to use TMT as a troubleshooting guide as well, since it does a good job of representing inter-system connections and data flows.

As you draw your diagram, you will need to choose from a list of processes, external components, data stores, data flows and trust boundaries. Selecting the correct item in this list is usually simple; you also have Generic items that allow you to specify every security property manually.

[Image: TMT stencil selection list]

Enzo Cloud Backup – Level 0 Diagram

Let's start with a Level 0 system diagram of the Enzo Cloud Backup system. As you might expect, this is a very high-level overview that provides context for the more detailed diagrams. The Level 0 diagram shows two primary trust domains: the corporate network, where Enzo Cloud Backup is installed, and the Microsoft Azure trust domain, where the backup agent is installed. This diagram also introduces the fact that the agent and the client communicate both directly and through an Azure Storage account. As a result, two communication paths need to be secured, and both happen to take place over an HTTPS connection. The cloud agent also communicates with an Azure Storage account over HTTPS within the Microsoft Azure trust boundary, and with a data source (the one being backed up or restored). A more complex diagram would also show that multiple Virtual Machines could be deployed in the cloud, all communicating with the same storage account, but for simplicity I am only showing one Virtual Machine. Also note that in my Level 0 diagram I chose not to document the sequence of events; in fact, my Level 0 diagrams are not specific enough to dive into specific events or data flows.

Although the diagram below shows boxes and circles, selecting the right stencil is important for future analysis. There are no visual cues on the diagram itself indicating whether you are dealing with a Managed or Unmanaged process, for example, but the tool shows this information in the Properties window (shown later) and uses it to provide a set of recommendations. In addition, each item in the diagram comes with a set of properties that you can set, such as whether a data flow is secured, and whether it requires authentication and/or authorization.

[Image: Enzo Cloud Backup Level 0 diagram]

Enzo Cloud Backup – Level 1 Remote Backup Operation

Now let's dive into a Level 1 diagram. To do so, let's explore the systems and components used by a specific process: a remote backup operation. Level 1 diagrams are usually more complex; as a result they need to show as many data stores as possible and, optionally, identify the order of data flows. While TMT doesn't natively give you the option to document the sequence of your data flows, you can simply prefix each data flow name with a number to achieve a similar result. By convention, I use 0 for initialization tasks and 1 for the starting request, and sometimes I even use decimals to document what happens within a specific step. But feel free to use any mechanism that works for you if you need to document the sequence of your data flows.

My Level 1 diagram below shows that a human starts the backup request through Enzo Cloud Backup, which then sends a request through a Queue. The backup service picks up the work request and starts a backup thread that performs the actual work. The thread itself creates the entries in the history data store that the client reads to provide a status to the user. As we can see in this diagram, the cross-boundary communications are as expected from our Level 0 diagram. Also worth noting is that the backup agent reads from a configuration file; from a security standpoint, this represents a point of failure that could disrupt the backup process if it is tampered with. The client depends on the local machine registry, which is therefore another store that must be properly secured. Last but not least, the numbering in front of each data flow allows you to walk through the steps taken during the backup request.

[Image: Enzo Cloud Backup Level 1 diagram of the remote backup operation]

As mentioned previously, you can configure each item in the diagram with important properties that will ultimately help TMT provide recommendations. For example, I selected an HTTPS data flow for the Monitoring Status read from the History Data store. Since I selected an HTTPS data flow, the tool automatically set some of the security attributes; I also provided information about the payload itself (it is a REST payload, for example). Other properties, not shown below, are also available. And you can create your own properties if necessary, for documentation purposes.

[Image: properties of the HTTPS data flow]

Going Further with TMT

TMT is used by large corporations to produce more detailed application diagrams, going as far as internal memory access. The tool allows you to add notes and keep a checklist of important security verifications for your diagrams. TMT also comes with a built-in security report that lists the steps you should consider to make your system more secure. The following is a short sample of this report's output for the Registry Hive access. The more information you provide, including the properties on your data stores, data flows, processes and trust boundaries, the better.

[Image: sample security report output for the Registry Hive access]

Last but not least, the tool comes with an Analysis view of your diagram, which is the basis for generating the report shown above. For example, TMT identified that the History Data store could become unavailable; while this sounds like a simple problem to solve with a retry mechanism, it could also be exploited as a Denial of Service attack against the tool. Note that TMT highlights the component in question to make it easy to see the area at risk.

[Image: TMT Analysis view highlighting the component at risk]

Give it a try! TMT is easy to use and provides a solid framework for documenting and analyzing your systems; and since most corporations have a vested interest in securing their applications in the cloud, TMT offers the framework to do so. To learn more about TMT, check out this link: http://blogs.microsoft.com/cybertrust/2014/04/15/introducing-microsoft-threat-modeling-tool-2014/.

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Blue Syntax Consulting (http://www.bluesyntaxconsulting.com). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including MCDBA, MCSE and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and "PRO SQL Server 2012 Practices" from Apress, a PluralSight author, and runs the Azure Florida Association.

DocumentDB vs Azure SQL vs Azure Table

Microsoft Azure is now offering a third storage option for developers: DocumentDB. DocumentDB is a NoSQL storage service that stores JSON documents natively and provides indexing capabilities along with other interesting features. Microsoft Azure also offers Azure Tables, another NoSQL storage option, and Azure SQL, which is a relational database service. This blog post provides a very high-level comparison of the three storage options.

Storage

Purely from a storage perspective, the only relational offering is Azure SQL. This means that only Azure SQL provides a schema and a strongly typed data store, which gives it some interesting advantages related to search performance on complex indexes. Both Azure Table and DocumentDB are NoSQL storage types, which means they do not enforce a schema on their records (a record is called an Entity in Azure Table, and a Document in DocumentDB). However, DocumentDB stores its data as native JSON objects and allows attachments to a document, while Azure Table stores its data as XML (although you can retrieve Azure Table entities as JSON objects). The advantage of storing native JSON objects is that the data maps directly into JSON objects in the client application code. It is also worth noting that Azure SQL comes with storage capacity limitations; it can only store up to 500GB of data in a single database.

Indexing

All three options provide a form of indexing. Azure SQL offers the most advanced indexing capability, allowing you to define indexes with multiple columns and other advanced options (such as filtered indexes that include a WHERE clause). DocumentDB offers automatically generated indexes that allow queries to retrieve information efficiently. Azure Tables offer only a single index, on the PartitionKey (and RowKey); this means that querying on other properties can take a significant amount of time if you have a lot of data. An example of an Azure SQL filtered index follows below.
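To illustrate that last point, here is a minimal sketch of creating a filtered, multi-column index on Azure SQL using ADO.NET; the server, database, table and column names are hypothetical placeholders, not part of any real system:

  using System.Data.SqlClient;

  class FilteredIndexSample
  {
      static void Main()
      {
          string connectionString =
              "Server=tcp:myserver.database.windows.net;Database=mydb;" +
              "User ID=myuser;Password=mypassword;Encrypt=True;";

          using (var conn = new SqlConnection(connectionString))
          {
              conn.Open();
              // A multi-column index restricted to 'Open' orders via a WHERE clause
              var cmd = new SqlCommand(
                  @"CREATE NONCLUSTERED INDEX IX_Orders_Open
                    ON dbo.Orders (CustomerId, OrderDate)
                    WHERE Status = 'Open';", conn);
              cmd.ExecuteNonQuery();
          }
      }
  }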

Programmability

Azure SQL provides advanced server-side programming capabilities through triggers, column-level integrity checks, views and user-defined functions (UDFs). The programming language for Azure SQL is T-SQL. DocumentDB offers a similar programmability surface using JavaScript, which provides a more uniform experience for developers. Azure Tables do not offer server-side programmability.
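As a sketch of what DocumentDB's JavaScript programmability looks like from .NET, here is how a stored procedure could be registered with the DocumentDB client library. Since the service is still in preview, treat the API surface shown (DocumentClient, CreateStoredProcedureAsync) and all account, database and collection names as assumptions:

  using System;
  using Microsoft.Azure.Documents;
  using Microsoft.Azure.Documents.Client;

  class SprocSample
  {
      static void Main()
      {
          // Hypothetical account endpoint and key
          var client = new DocumentClient(
              new Uri("https://myaccount.documents.azure.com"), "myAuthKey");

          var sproc = new StoredProcedure
          {
              Id = "helloWorld",
              // Server-side logic is written in JavaScript
              Body = @"function helloWorld() {
                           getContext().getResponse().setBody('Hello from the server!');
                       }"
          };

          // Register the stored procedure on a collection (link is illustrative)
          client.CreateStoredProcedureAsync("dbs/mydb/colls/mycoll", sproc).Wait();
      }
  }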

Referential Integrity and Transactions

All three options provide some form of data consistency, although it is limited for Azure Tables. Azure SQL offers full-blown RDBMS referential integrity and transactional support; with T-SQL, developers can nest transactions server-side and perform commit or rollback operations. Azure SQL also offers strong referential integrity through foreign keys, unique constraints, NOT NULL constraints and more. DocumentDB does not offer strong referential integrity (there is no concept of foreign keys, for example) but supports transactions through JavaScript. As a result, developers can use JavaScript to enforce server-side referential integrity programmatically. Azure Tables offer basic transaction support through the BATCH operation, although there are a few stringent requirements: up to 100 entities can be batched together, and all must share the same PartitionKey (see the sketch below).
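Here is a minimal sketch of an Azure Table BATCH operation using the Windows Azure Storage client library; the account credentials and table name are placeholders. Note that every entity in the batch shares the same PartitionKey:

  using Microsoft.WindowsAzure.Storage;
  using Microsoft.WindowsAzure.Storage.Table;

  class BatchSample
  {
      static void Main()
      {
          var account = CloudStorageAccount.Parse(
              "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");
          var table = account.CreateCloudTableClient().GetTableReference("Orders");
          table.CreateIfNotExists();

          var batch = new TableBatchOperation();
          for (int i = 0; i < 100; i++)   // up to 100 entities per batch
          {
              // Same PartitionKey for all entities in the batch
              var entity = new DynamicTableEntity("customer1", "order" + i);
              entity.Properties["Status"] = new EntityProperty("Open");
              batch.Insert(entity);
          }

          table.ExecuteBatch(batch);  // the batch succeeds or fails as a unit
      }
  }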

Service Cost

Azure SQL pricing recently changed and is now driven by the level of service you desire, which makes it difficult to compare with the other storage options. Assuming a Basic level of service and a relatively small database (up to 2GB), you can expect to pay roughly $5.00 per month starting Nov 1st, 2014. Pricing for Azure SQL goes up significantly with higher levels of service, and can reach over $3,000 per month for the most demanding databases (the price point determines the maximum size of the database and the throughput available). DocumentDB starts at $45.00 per month per Capacity Unit, which represents a storage quantity and specific throughput objectives (note: the price for DocumentDB is an estimate since the service is currently in preview). Azure Tables are priced on storage alone, which makes this option very affordable: a 1GB storage requirement with Azure Tables will cost about $0.12 per month with high availability and geo-redundancy.

All the storage options incur additional charges for transactions; these are not included in this analysis to keep the comparison simple.

Summary

The three storage options provided by Microsoft Azure (Azure SQL, DocumentDB and Azure Tables) offer an array of implementation options at various price points depending on the needs of your application. Azure SQL provides higher data integrity but is limited in storage capacity, DocumentDB offers a NoSQL implementation with configurable consistency levels and JavaScript server-side logic, and Azure Table offers simple NoSQL storage capacity.

The following table is my interpretation of the Microsoft Azure storage options used for storing objects or tables.

             | Storage               | Indexing                                 | Server-Side Programmability | Ref. Integrity and Tx Support        | Service Cost
Azure SQL    | Database, up to 500GB | Yes (multi-column)                       | Yes (T-SQL)                 | Yes for R.I.; yes for Tx support     | Starts at $5 / database / month
DocumentDB   | JSON, petabytes       | Yes (automatically created/maintained)   | Yes (JavaScript)            | R.I. through JavaScript; supports Tx | Starts at $45 / Capacity Unit / month
Azure Table  | XML, 200TB            | PartitionKey only (no secondary index)   | No                          | Limited (uses BATCH operation)       | $0.12 per GB / month

NOTE 1: This blog post was updated on 9/11/14 at 9:48pm ET to correct an inaccuracy on DocumentDB indexing.
NOTE 2: This blog post was updated on 9/14/14 at 6:37pm ET to correct a typo.


About the new Microsoft Innovation Center (MIC) in Miami

Last night I attended a meeting at the new MIC in Miami, run by Blain Barton (@blainbar), Sr IT Pro Evangelist at Microsoft. The meeting was well attended and is meant to be run in a user group format, in a casual setting. Many of the local Microsoft MVPs and group leaders were in attendance as well, which allows technical folks to connect with community leaders in the area.

If you live in South Florida, I highly recommend looking out for future meetings at the MIC; most meetings will be about the Microsoft Azure platform, covering either IT Pro or Dev topics.

For more information on the MIC, check out this announcement:  http://www.microsoft.com/en-us/news/press/2014/may14/05-02miamiinnovationpr.aspx.


Azure Table JSON vs. AtomPub

A few months ago Microsoft released an update to the Microsoft Azure platform allowing developers to access Azure Tables using JSON payloads. Until then, developers had no choice but to use the AtomPub model; since it was the only data representation model available, its use was implicit. With the introduction of JSON payloads, developers can now choose one or the other. Although the choice between AtomPub and JSON is largely transparent (there are at this time a couple of minor exceptions), meaning that there are no functionality differences, developers should use the JSON payload going forward because it is significantly smaller. Take a quick look at the payload examples in the MSDN documentation.

To use the JSON payload, developers can set the PayloadFormat property as follows (tableClient is an instance of CloudTableClient):

  tableClient.DefaultRequestOptions.PayloadFormat = TablePayloadFormat.JsonNoMetadata;

To see how much data was actually flowing through, Microsoft published a small test application that sends a few entities to an Azure Table. The source code provided by Microsoft can be found on that page, so it's pretty easy to try it out. However, I needed to see this in action, and wanted to use Fiddler to inspect the actual payload leaving my machine. So I took the general idea behind Microsoft's sample code and modified it to create my own test harness (a sketch of it follows below). To do this, I created a new Console application in Visual Studio 2012, added the Windows Azure Storage 3.0 library as a NuGet package, and wrote a simple application. You can download my test harness here. The application saves log data in a test table called LogTest. You can change the 'count' variable to add more records if you want.
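For reference, here is a minimal sketch of what such a harness can look like; the entity shape, account credentials and property names are my own placeholders, not necessarily what the downloadable harness uses:

  using Microsoft.WindowsAzure.Storage;
  using Microsoft.WindowsAzure.Storage.Table;

  class PayloadTest
  {
      static void Main()
      {
          var account = CloudStorageAccount.Parse(
              "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");
          var tableClient = account.CreateCloudTableClient();

          // Switch to TablePayloadFormat.AtomPub here to compare payloads in Fiddler
          tableClient.DefaultRequestOptions.PayloadFormat = TablePayloadFormat.JsonNoMetadata;

          var table = tableClient.GetTableReference("LogTest");
          table.CreateIfNotExists();

          int count = 100;  // change this to add more records
          for (int i = 0; i < count; i++)
          {
              var entity = new DynamicTableEntity("log", i.ToString("D5"));
              entity.Properties["Message"] = new EntityProperty("Test log entry " + i);
              table.Execute(TableOperation.Insert(entity));
          }
      }
  }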

I used Fiddler to capture the tests. The first test was to insert 100 entities in my Azure Table using JSON. The second test was to read those entities back using JSON. The third was to insert 100 entities using AtomPub and the fourth was to read them back using AtomPub. Here are the results of those tests:

                      | JSON Payload            | AtomPub Payload          | Payload Savings
  Insert 100 Entities | 97,700 bytes sent       | 155,870 bytes sent       | -37%
  Read 100 Entities   | 40,550 bytes received   | 122,578 bytes received   | -67%

In my tests, the JSON payload reduced network traffic (bytes sent) by 37% when inserting data, and about 67% when reading data (bytes received). Note that there was virtually no performance improvement in terms of execution time; the advantage of using JSON is limited to smaller network payloads. Still, this is a significant difference; considering that extracting data out of the Microsoft data centers costs money, companies that read lots of data from Azure Tables could realize cost savings as a result of using JSON payloads.


Discover API Management in Microsoft Azure

Microsoft recently published an API Management service within Microsoft Azure. If you haven't had a chance to review this new feature, still in preview at the time of this writing, you may be surprised… So what is this new feature? Who would use it, and why?

API Management offers interesting capabilities in at least two scenarios: transforming existing web services into more modern REST APIs, and leveraging an advanced API management platform.

If you want to move your older public APIs, such as web services, into the era of mobile computing, complete with monitoring, servicing and security management, you are in luck… Microsoft Azure's API Management provides a remarkably easy way to build a service proxy that hides older public APIs, lets you define individually the REST methods you want to expose, and maps them to your older Web Services. It provides a user interface to define the API verbs and the backend web service endpoint that serves them.

In addition to providing a mechanism to upgrade existing web services to REST services without writing a single line of code, API Management offers a strong management interface allowing you to specify caching options, access keys, access control and monitoring for your REST endpoints.

API Management is designed for companies that want to upgrade their traditional web services to REST services without changing their code base. It also gives small and large organizations a management platform that a simple web service upgrade would not provide out of the box, such as caching and monitoring.

Although API Management is still in a preview phase, I highly recommend you investigate the capabilities of this amazing new feature in Microsoft Azure. To learn more about it, please visit the Microsoft Azure website: http://azure.microsoft.com/en-us/documentation/articles/api-management-get-started/


Microsoft Azure News: Capturing VM Images

If you have a Virtual Machine (VM) in Microsoft Azure that has a specific configuration, it used to be difficult to clone that VM. You had to sysprep the VM, and clone the data disks. This was slow, prone to errors, and stopped you from being productive.

No more!

A new option, called Capture, allows you to easily capture a VM, whether it is running or not. The capture copies the OS disk and data disks and automatically creates a new image from them. This means you can now easily clone an entire VM without affecting productivity. To capture a VM, simply browse to your Virtual Machines in the Microsoft Azure management website, select the VM you want to clone, and click the Capture button at the bottom. A window will come up asking you to name your image. It took less than 1 minute for me to build a clone of my server.

And because it is stored as an image, I can easily create a new VM with it. So that's what I did… and that took about 5 minutes total. That's amazing… To create a new VM from your image, click the NEW icon (bottom left), select Compute/Virtual Machine/From Gallery, and select My Images from the left menu when choosing an image. You will find your newly created image there. Because this is a clone, you will not be prompted to create a new login; the user id/password is the same.


Awesome event: //publish/

Did you hear about //publish/? It's a pretty cool event organized by Microsoft and supported by the community to help you finalize your Windows apps, including Windows 8 and Windows Phone apps. Major prizes are at stake too! You can get free support, testing and even free consulting services with a Microsoft engineer! This event is designed to help you overcome the final blockers and publish your apps.

Hurry up! Visit https://publishwindows.com, find a location near you, and sign up. It’s that easy.

Here is a good post about this event with more details:  MVP Guest Post

In the cloud, and back.

Unfortunately, not all projects that try to adopt cloud computing are successful. Some of them are doomed from the start, while others manage to limp along and eventually succeed. Although there are many successful implementations of cloud projects, on which businesses depend daily, this blog is about an actual adoption failure, and attempts to present the facts that led to it. The company in question is very large, does business at a national level, and is highly subject to seasonal activity; so cloud computing was very attractive for its ability to scale (up and down) and use multiple data centers for failover and high availability. However, as much as the high-level alignment was right, many early warning signs were ignored, which ultimately led the company to almost entirely redo its technical architecture planning away from cloud computing. At least for now.

High Availability Requirements

While High Availability is a major attribute of cloud computing vendors, including Microsoft Windows Azure, it is slightly overrated, as some organizations are starting to find out. Indeed, if you read the fine print, high availability is usually expressed per month. So 99.9% availability on a database server in the cloud, measured monthly, is in fact relatively low when viewed over a year. While this isn't a significant problem for most organizations, it can spell trouble for a highly seasonal business. Indeed, if your business is highly seasonal and you make 80% of your income within 60 days of the year, the systems had better be up and running during those 60 days, with 99.99% or better availability on a yearly basis. You just can't afford downtime.
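To put numbers on this: 99.9% availability measured monthly allows roughly 43 minutes of downtime per month (0.1% of the 43,200 minutes in a 30-day month), which adds up to about 8.6 hours per year; if even part of that downtime lands inside a 60-day peak season, the business impact can be severe.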

Too Early

While cloud computing has matured significantly over the last year or so, this project relied on very early versions of the Microsoft Windows Azure platform, which at the time only offered Platform as a Service capabilities. While staying on the bleeding edge is important for companies to remain competitive, this customer couldn't be successful in this environment. Too many workarounds were implemented, and unproven cloud architecture patterns were selected; the lack of Virtual Machines and permanent storage disks was a significant burden for this project. This company simply tried to adopt cloud computing too quickly, without starting with smaller projects to build up its knowledge capital.

Bad Taste

Last but not least, the cloud adoption failure left a bad taste with parts of the management team, making it difficult to justify even now-valid cloud implementation patterns. This is a shame, because the company now has difficulty thinking beyond its current data center boundaries for fear of another failure. Indeed, who would go into a management meeting and propose cloud computing to this customer at this time? Timing, indeed, is of the essence.

While it could take time for this customer to look back and extract the lessons learned from this significant adoption failure, the experience could very well help in unexpected ways the next time around. With a clearer understanding of the benefits of cloud computing, and some of its adoption challenges, this company will undoubtedly build a more thoughtful approach to cloud adoption over the next few years and build better customer solutions in the end. And although I don't wish this kind of adoption pain on anyone, it could be a necessary evil for some corporations while cloud implementation patterns are better defined and more widely understood by management and technical teams alike.


Azure SQL Database = Long-Term Storage?

Here is an interesting concept that I would like to share. I always looked at Azure SQL Database (the Microsoft PaaS relational database engine) as a first-class database server; and of course it is. But when you compare SQL Server to Azure SQL Database, it quickly becomes evident that SQL Server has more features, performs better, and has fewer limitations. And it makes total sense: SQL Server is a full-blown, configurable database platform, while Azure SQL Database is a limited version of SQL Server running in a shared server environment.

Some Key Differences


Let’s first review a few key differences between SQL Server and Azure SQL Database. The following differences are not meant to be exhaustive, but they represent key variations that make sense in the context of this blog post.

Performance

SQL Server is a highly scalable database server that can process hundreds or thousands of requests per second, with very high concurrency levels and virtually unlimited throughput (at least as much as the server allows). For example, there are almost no limits on memory access or disk I/O, as long as the underlying hardware allows it. While SQL Server has internal limitations, they usually far exceed those of Azure SQL Database. Some of the performance limitations of Azure SQL Database are implemented in the form of throttling, with specific error codes, to prevent a single user of a database from impacting other customers on the same server.
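As a sketch of what this throttling means for application code, here is a simple retry loop around a throttled operation; error number 40501 ("the service is currently busy") is one of the documented throttling codes, though the exact codes you encounter and the back-off strategy you should use may differ:

  using System;
  using System.Data.SqlClient;
  using System.Threading;

  class ThrottleRetrySample
  {
      static void ExecuteWithRetry(string connectionString, string sql)
      {
          for (int attempt = 1; attempt <= 3; attempt++)
          {
              try
              {
                  using (var conn = new SqlConnection(connectionString))
                  {
                      conn.Open();
                      new SqlCommand(sql, conn).ExecuteNonQuery();
                      return;  // success
                  }
              }
              catch (SqlException ex)
              {
                  if (ex.Number != 40501) throw;                     // not a throttling error
                  Thread.Sleep(TimeSpan.FromSeconds(10 * attempt));  // back off, then retry
              }
          }
          throw new InvalidOperationException("Operation failed after 3 attempts.");
      }
  }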

Database Features

SQL Server also comes with many features beyond its relational engine, such as Linked Servers, Encryption, Full Text Indexing, SQL Agent and more. As a result, with a couple of exceptions, it is fair to think of Azure SQL Database as a subset of SQL Server (basically most of the relational engine). With Azure SQL Database you can create databases, tables and stored procedures, run most T-SQL statements, and build a complete database. However, some of the more advanced features are not available. On the other hand, one of Azure SQL Database's amazing features is the ability to create new databases quickly and easily, without having to worry about low-level configuration or to figure out which server the database will reside on.

Availability Features

Azure SQL Database has a significant advantage over SQL Server in the area of high availability: it offers up to 99.9% monthly uptime by default, without any specific configuration. While SQL Server offers configuration options that can exceed 99.9%, they require deliberate setup; with Azure SQL Database, high availability is built directly into the service and doesn't require specialized knowledge to install or maintain.

Cost

Another important aspect of Azure SQL Database is cost: you pay for what you use. The larger the database, the higher the cost; and the longer you keep the database, the more you pay over time. There are no licenses to worry about, and if you create a database for 24 hours, then drop it, you will pay for just 24 hours of uptime. In the US, a 1GB database costs about $9.99 per month for the entry-level editions, which is not very expensive.

A Parallel with Long-Term Storage Disks


Keeping the above information in mind, Azure SQL Database offers interesting capabilities that are difficult to achieve with SQL Server, at a reasonable price. Specifically, the ability to programmatically (and quickly) create databases, with high availability, is unparalleled. Let’s draw a parallel with long-term storage disks. Long-term storage is considered cheaper than hot disks, and is usually slower. The primary purpose of long-term storage is recoverability at a reasonable price. So if we assume that 99.9% availability monthly is acceptable for roughly $9.99 per month per 1GB of data, Azure SQL Database can be used to offload data that is not accessed very often and for which slower access time is reasonable.
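As a quick illustration of that programmatic creation, the sketch below runs a CREATE DATABASE statement against the logical server's master database; the server name, credentials, and the EDITION and MAXSIZE options shown are assumptions based on the editions available at the time:

  using System.Data.SqlClient;

  class CreateDatabaseSample
  {
      static void Main()
      {
          // Connect to the master database of a hypothetical logical server
          using (var conn = new SqlConnection(
              "Server=tcp:myserver.database.windows.net;Database=master;" +
              "User ID=myuser;Password=mypassword;Encrypt=True;"))
          {
              conn.Open();
              // Create a small archive database on the fly
              new SqlCommand(
                  "CREATE DATABASE ArchiveDb (EDITION = 'web', MAXSIZE = 1 GB)",
                  conn).ExecuteNonQuery();
          }
      }
  }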

This means that Azure SQL Database could be used to store temporary processing units of work (like batch processes), store historical data away from the primary database, create temporary tables for reporting purposes, and more. And because SQL Server can communicate directly with Azure SQL Database using a Linked Server, the abstraction can be total from the perspective of an end user. Stored procedures could read data directly from the cloud, or even merge it with local data, to provide a unified view of the information to end users. Using Azure SQL Database as a long-term backend store for SQL Server seems to make a lot of sense for many scenarios.


March Events in South Florida

Here are two events that I will participate in soon: one on SQL Server, and the other on Windows Azure.

March 19, 6PM – SELECT * FROM Twitter

On March 19, at Carnival Cruise Lines in Miami, I will be speaking about a new concept: no-API. Strap in and discover how SQL Server can take center stage when it comes to data movement and extraction. Data virtualization can empower the DBA to build real-time and batch solutions without developers, and gain more control over data quality. You will see how DBAs can easily tap directly into social media, documents, cloud computing, Internet services, and even internal systems like message queuing and legacy systems. You can register here: http://www.fladotnet.com/Reg.aspx?EventID=705. I hope to see you there!

March 29 – Windows Azure Global Bootcamp

On March 29, join us at NOVA University in Fort Lauderdale to talk Windows Azure, as part of a global event driven by the community. This all-day training will include Infrastructure as a Service, Platform as a Service, and hands-on labs! Three experts will join me: Adnan Cartwright (@jinnxey), Shanavas Thayyullathil (from Microsoft), and Mir Majeed (@mirmajeed). Register here: http://www.eventbrite.com/myevent?eid=9720104093.


For other great events in South Florida, check out http://www.fladotnet.com.
