In the cloud, and back.

Unfortunately, not all projects that try to adopt cloud computing are successful. Some are doomed from the start, while others limp along and eventually succeed. Although there are many successful cloud projects on which businesses depend daily, this blog is about an actual adoption failure, and it attempts to present the facts that led to that failure. The company in question is very large, does business at a national level, and is highly seasonal; cloud computing was therefore very attractive for its ability to scale (up and down) and to use multiple data centers for failover and high availability. However, as right as the high-level alignment was, many early warning signs were ignored, and the company ultimately redid its technical architecture planning almost entirely away from cloud computing. At least for now.

High Availability Requirements

While high availability is a major selling point of cloud computing vendors, including Microsoft Windows Azure, it is slightly overrated, as some organizations are starting to find out. If you read the fine print, availability is usually expressed per month. So a 99.9% monthly availability guarantee on a database server in the cloud can translate into a relatively low figure on a yearly basis. While this isn't a significant problem for most organizations, it can spell trouble for a highly seasonal business. If you make 80% of your income within 60 days of the year, your systems had better be up and running during those 60 days with 99.99% availability or better. You just can't afford downtime.
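
To put numbers behind this, a 99.9% monthly SLA allows roughly 43 minutes of downtime every month (0.1% of a 30-day month, or 43.2 of 43,200 minutes), which adds up to about 8.7 hours per year if the limit is reached each month. By contrast, 99.99% availability allows only about 4.3 minutes per month, or roughly 52 minutes per year. For a business that earns 80% of its revenue in a 60-day window, a single 43-minute outage at the wrong time can be very costly.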

Too Early

While cloud computing has matured significantly over the last year or so, this project relied on very early versions of the Microsoft Windows Azure platform, which at the time offered only Platform as a Service capabilities. While staying on the bleeding edge is important for companies that want to remain competitive, this customer couldn't be successful in that environment. Too many workarounds were implemented, and too many unproven cloud architecture patterns were selected; the lack of Virtual Machines and permanent storage disks was a significant burden for this project. This company simply tried to adopt cloud computing too quickly, without starting with smaller projects to build up its knowledge capital.

Bad Taste

Last but not least, the cloud adoption failure left a bad taste with parts of the management team, making it difficult to justify even cloud implementation patterns that are now perfectly valid. This is a shame, because the company now has difficulty thinking beyond its current data center boundaries for fear of another failure. Indeed, who would walk into a management meeting and propose cloud computing to this customer at this time? Timing, indeed, is of the essence.

While it could take time for this customer to look back and extract lessons learned from this significant adoption failure, the experience could very well help in unexpected ways the next time around. With a clearer understanding of the benefits of cloud computing, and of some of its adoption challenges, this company will undoubtedly build a more thoughtful approach to cloud adoption over the next few years and, in the end, build better customer solutions. And although I don't wish this kind of adoption pain on anyone, it may be a necessary evil for some corporations while cloud implementation patterns become better defined and more widely understood by management and technical teams alike.

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Blue Syntax Consulting (http://www.bluesyntaxconsulting.com). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

Azure SQL Database = Long-Term Storage?

Here is an interesting concept that I would like to share. I have always looked at Azure SQL Database (the Microsoft PaaS relational database engine) as a first-class database server; and of course it is. But when you compare SQL Server to Azure SQL Database, it quickly becomes evident that SQL Server has more features, performs better, and has fewer limitations. And it makes total sense: SQL Server is a full-blown, configurable database platform, while Azure SQL Database is a limited version of SQL Server running in a shared server environment.

Some Key Differences

Let's first review a few key differences between SQL Server and Azure SQL Database. The following list is not meant to be exhaustive, but it highlights the key variations that matter in the context of this blog post.

Performance

SQL Server is a highly scalable database server that can process hundreds or thousands of requests per second with very high concurrency levels; its throughput is limited mostly by the underlying hardware. For example, there are almost no limits on memory access or disk I/O as long as the hardware allows it. While SQL Server has internal limitations, they usually far exceed those of Azure SQL Database. Some of the performance limitations of Azure SQL Database are implemented in the form of throttling, with specific error codes, to prevent a single user of a database from impacting other customers on the same server.
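
In practice, retry logic is usually implemented on the client side, because a throttled connection may be closed by the service. Still, as a minimal T-SQL sketch of the idea (dbo.DoWork is a hypothetical unit of work; 40501, 40197 and 40613 are documented transient error codes):

DECLARE @retry INT = 0;
WHILE @retry < 3
BEGIN
    BEGIN TRY
        EXEC dbo.DoWork;  -- hypothetical unit of work
        BREAK;            -- success: exit the retry loop
    END TRY
    BEGIN CATCH
        -- 40501 = service busy, 40197 = service error, 40613 = database unavailable
        IF ERROR_NUMBER() IN (40501, 40197, 40613)
        BEGIN
            SET @retry += 1;
            WAITFOR DELAY '00:00:10';  -- back off before retrying
        END
        ELSE
            THROW;  -- not a transient error: re-raise it
    END CATCH
END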

Database Features

SQL Server also comes with many features beyond its relational engine, such as Linked Servers, Encryption, Full-Text Indexing, SQL Agent and more. As a result, with a couple of exceptions, it is fair to think of Azure SQL Database as a subset of SQL Server (essentially most of the relational engine). With Azure SQL Database you can create databases, tables and stored procedures, run most T-SQL statements, and build a complete database. However, some of the more advanced features are not available. On the other hand, one of Azure SQL Database's amazing features is the ability to create new databases quickly and easily, without having to worry about low-level configuration or to figure out which server the database will reside on.
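
As a quick illustration, the following T-SQL runs unchanged on Azure SQL Database (the object names are made up for this example); note that CREATE DATABASE provisions a fully managed, highly available database in seconds:

-- Run against the master database of your Azure SQL Database server
CREATE DATABASE SalesArchive (MAXSIZE = 1 GB);

-- Then, reconnected to SalesArchive (USE is not supported):
CREATE TABLE dbo.Orders
(
    OrderId INT NOT NULL PRIMARY KEY,   -- Azure SQL Database requires a clustered index on tables
    CustomerId INT NOT NULL,
    OrderDate DATETIME NOT NULL,
    Amount MONEY NOT NULL
);
GO
CREATE PROCEDURE dbo.GetOrdersByCustomer @CustomerId INT
AS
    SELECT OrderId, OrderDate, Amount
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;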

Availability Features

Azure SQL Database has a significant advantage over SQL Server in the area of high availability, with 99.9% monthly uptime. While SQL Server offers configuration options that can exceed 99.9%, Azure SQL Database's availability is provided by default, without any specific configuration. In other words, high availability is built directly into the service and doesn't require specialized knowledge to install or maintain.

Cost

Another important aspect of Azure SQL Database is cost: you pay for what you use. The larger the database, the higher the cost; and the longer you keep the database, the more you pay over time. There are no licenses to worry about, and if you create a database for 24 hours and then drop it, you pay for 24 hours of uptime. In the US, a 1GB database costs about $9.99 per month for the entry-level editions, which is not very expensive.
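
To put that in perspective, at $9.99 per month a 1GB database costs roughly $0.33 per day ($9.99 / 30 days), so the 24-hour database in the example above would cost about 33 cents before it is dropped.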

A Parallel with Long-Term Storage Disks

Keeping the above information in mind, Azure SQL Database offers interesting capabilities that are difficult to achieve with SQL Server at a reasonable price. Specifically, the ability to programmatically (and quickly) create databases with built-in high availability is unparalleled. Let's draw a parallel with long-term storage disks: long-term storage is cheaper than hot disks and usually slower, and its primary purpose is recoverability at a reasonable price. So if we assume that 99.9% monthly availability is acceptable for roughly $9.99 per month per 1GB of data, Azure SQL Database can be used to offload data that is not accessed very often and for which slower access times are acceptable.

This means that Azure SQL Database could be used to store temporary units of work (like batch processes), keep historical data away from the primary database, create temporary tables for reporting purposes, and more. And because SQL Server can communicate directly with Azure SQL Database through a Linked Server, the abstraction can be complete from the perspective of an end user: stored procedures could read data directly from the cloud, or even merge it with local data, to provide a unified view of the information to end users. Using Azure SQL Database as a long-term backend store for SQL Server seems to make a lot of sense for many scenarios.
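
As a sketch of what this could look like (server, login, database and table names below are placeholders), a Linked Server to Azure SQL Database can be created from SQL Server roughly as follows:

-- Create a Linked Server pointing to an Azure SQL Database
EXEC sp_addlinkedserver
    @server = 'AzureArchive',
    @srvproduct = '',
    @provider = 'SQLNCLI',
    @datasrc = 'myserver.database.windows.net',  -- your Azure SQL Database server
    @catalog = 'SalesArchive';                   -- the remote database

EXEC sp_addlinkedsrvlogin
    @rmtsrvname = 'AzureArchive',
    @useself = 'FALSE',
    @rmtuser = 'mylogin',        -- SQL login on the Azure server
    @rmtpassword = '**********'; -- placeholder password

-- Merge local, current data with archived data stored in the cloud
SELECT OrderId, OrderDate, Amount FROM dbo.Orders    -- local data
UNION ALL
SELECT OrderId, OrderDate, Amount
FROM AzureArchive.SalesArchive.dbo.Orders;           -- remote, historical data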

About Herve Roggero

Herve Roggero, Windows Azure MVP, @hroggero, is the founder of Blue Syntax Consulting (http://www.bluesyntaxconsulting.com). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

March Events in South Florida

Here are two events that I will participate in soon: one on SQL Server, and the other on Windows Azure.

March 19, 6PM – SELECT * FROM Twitter

On March 19, at Carnival Cruise Lines in Miami, I will be speaking about a new concept: no-API. Strap in and discover how SQL Server can take center stage when it comes to data movement and extraction. Data virtualization can empower the DBA to build real-time and batch solutions without developers, and to gain more control over data quality. You will see how DBAs can tap directly into social media, documents, cloud computing, Internet services, and even internal systems like message queuing and legacy systems. You can register here: http://www.fladotnet.com/Reg.aspx?EventID=705. I hope to see you there!

March 29 – Windows Azure Global Bootcamp

On March 29, join us at NOVA University in Fort Lauderdale to talk Windows Azure, as part of a global event driven by the community. This all-day training will include Infrastructure as a Service, Platform as a Service, and hands-on labs! Three experts will join me: Adnan Cartwright (@jinnxey), Shanavas Thayyullathil (from Microsoft), and Mir Majeed (@mirmajeed). Register here: http://www.eventbrite.com/myevent?eid=9720104093.

For other great events in South Florida, check out http://www.fladotnet.com.

The Magic of SQL: no-API

Once in a while, you get the unique opportunity to create something different. So different that it feels right, wrong, strange, wonderful and transformative at the same time. So I decided to blog about a concept that is turning into reality. It has to do with the increasing complexity of APIs, and the magic of SQL. APIs are for developers: Web APIs, Web Services, XML, JSON… that entire ecosystem of documents that represent data is for computers. And that's fine. But it's clearly not for human consumption; at least, not outside of the software engineering kind. Have you counted the number of Web APIs on the net? How many data protocols exist out there (such as WMI, FTP, SOAP…), and how many document types (such as Excel, PDF, flat files, zipped documents…)? It's actually mind-blowing, and even some developers have a hard time keeping up with technology trends. For example, how long will it take a developer with 5 years of experience to correctly fetch, and page through, Twitter feeds?

On the other hand, we have a wonderful technology available: SQL. It can be used to read, delete, update, and add data, among other things. And with a little more work, you can even join multiple data sets together to shed light on new data sets. SQL is easy to use, easy to learn, widespread, and relatively standard across database vendors. So, how easy would it be to communicate with all these data sources if only they understood SQL? Yeah… as in "SELECT * FROM Twitter.Timeline". Or for SharePoint: "SELECT * FROM SharePoint.mylist"… or even for Windows Azure Tables: "SELECT * FROM AzureStorage.MyTable1". Would it be useful? To whom? And why?
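
To make the vision concrete, here is the kind of hypothetical query such a no-API service could accept (none of these objects exist today; the names are purely illustrative):

-- Pull the latest tweets mentioning a competitor (hypothetical virtual table)
SELECT CreatedAt, UserName, Text
FROM Twitter.Timeline
WHERE Text LIKE '%contoso%';

-- Join a SharePoint list with an Azure Table, as if both were local tables
SELECT sp.Title, az.LastOrderDate
FROM SharePoint.mylist AS sp
INNER JOIN AzureStorage.MyTable1 AS az
    ON sp.CustomerId = az.CustomerId;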

Honestly, I am shocked that this problem hasn't been solved before. Sure, there are a few ODBC drivers out there that will "pretend" that you are talking SQL. But in reality, these solutions are not server-based, and as a result they have severe drawbacks. I am not going to dive into the specifics right now as to why, but suffice it to say that no one will use specialized ODBC drivers to access a central data service (like a SaaS platform, or an Enterprise-class Data as a Service); it's too cumbersome. So I am talking about the need for a true server-based, database-like service that understands SQL on one end, and hides the complexities of the APIs on the other.

In other words, a no-API solution.

Who cares?

Actually, there are many parties potentially interested in an SQL-like paradigm. Here are a few: managers who know SQL (there are actually quite a few in IT), junior developers, DBAs, data architects, business analysts, SharePoint administrators, report writers, ETL developers, business intelligence vendors, and probably some enterprise architects as well, looking for technologies that simplify their ecosystems. And why do they care? Because they usually depend on developers to build custom solutions to access their own data…

Would a report developer be interested in accessing SharePoint Lists using SQL to build a report? Probably. No need to learn the SharePoint API anymore.

Would a business analyst be interested in getting Tweets into Excel for analysis because the Marketing department needs some competitive intelligence?  Possibly. And no need to learn the Twitter API.

Would a DBA be interested in saving table records to an FTP site directly using SQL? Probably. No need to code batch programs to do this.

The list goes on and on… I can see a large community of users interested in using SQL for daily tasks. It's easier, it has fewer dependencies, and because the underlying APIs are hidden, they become almost irrelevant. One language to access them all. At least, that's the vision.

But Why?

Ah… but why? Why oh why would anyone care? Because it's simpler. In fact, it's the simplest possible way of accessing data. Imagine what it would take to build a simple report that pulls data from SharePoint and from another database. Simple, right? On paper, it's easy. But you need a large technology stack to build that report: ETL tools to fetch data from both SharePoint and from the other database. A web service to call the SharePoint API so that the ETL tool can consume it (and don't you dare tell me you can read the SharePoint database directly… you are not supposed to, and it's not even possible for SharePoint Online/Office 365). A temporary database to store it all. A job to refresh the data every 24 hours, perhaps. And finally you can build the report. Oh, and if you are a larger company, DEV, TEST and STAGE environments where all of that is duplicated… We are talking weeks or even months of work here…

But if you could do it all using SQL, why even bother with an ETL tool, a temporary database, a job, or even a web service? Just pull the data, join it on the report, and you are done! And on top of that, it's real time! Why? Because the API is virtualized as a data source, so reporting tools can consume it directly without the need to move data around.
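
Continuing with the hypothetical no-API syntax from earlier, the entire reporting stack could collapse into a single query (again, purely illustrative names):

-- One query replaces the ETL job, the web service and the staging database
SELECT sp.ProjectName, sp.Status, db.BudgetSpent
FROM SharePoint.ProjectList AS sp
INNER JOIN dbo.ProjectBudgets AS db
    ON sp.ProjectId = db.ProjectId
ORDER BY sp.ProjectName;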

Let's be careful: I am not saying that a no-API platform replaces the need for ETL, web services, or even temporary databases. It simply offers a much simpler alternative for the myriad of projects that don't need heavy infrastructure.

I am saying, however, that a large part of the corporate workforce would become more productive if they had easy access to the sea of information locked up behind APIs. In my humble opinion, developers are becoming a bottleneck for many parts of an organization, because they are in short supply. So let's remove the bottleneck! Let's empower an entire workforce by giving them the magic of no-API through SQL.

So…

So here it is. In my opinion, APIs are getting too complex for many non-developers, yet the data they expose is too valuable to corporations to remain locked behind them. And SQL is the natural choice for these users. So let's give it to them.

This is a call to action. To learn more, check out our new website: http://www.bluesyntaxconsulting.com and let us know what you think.

About Herve Roggero

Herve Roggero, Windows Azure MVP, @hroggero, is the founder of Blue Syntax Consulting (http://www.bluesyntaxconsulting.com). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

Backup Options for Windows Azure–Summary From Mike Martin

Mike Martin (@TechMike2KX, blog), Windows Azure MVP, did an awesome job of summarizing the various backup options for Windows Azure in this video. He starts with an overview of redundancy in Windows Azure that covers Microsoft's SLA, then dives into the need for recovery in Windows Azure in general, including both IaaS and PaaS services. Mike covers multiple built-in solutions, and even demos third-party tools, such as Red Gate's backup (Mike Wood, Windows Azure MVP, @mikewo, blog) and Zudio (Mark Rendle, Windows Azure MVP, @markrendle, blog). I highly recommend you take a look at this video to get a quick overview of Windows Azure redundancy and recovery options.

Great job Mike!

PaaS Adoption Blockers

Cloud computing has enabled many organizations, small and large, to build business solutions on a platform that enables growth, IT management flexibility, and agility, while optimizing operating costs. I refer to this objective of cloud computing as Adaptive Computing, which is achieved through the use of Platform as a Service (PaaS). As organizations begin to understand the value of cloud computing, many are realizing that certain key blockers prevent them from realizing all the benefits they would otherwise reap. Realizing these benefits requires the use of PaaS, because PaaS provides complete abstraction from underlying server and networking resources, and as a result allows organizations to build applications and services that adjust resource consumption dynamically based on actual demand. PaaS therefore provides an optimized IT resource consumption model that yields a lower total cost of ownership over time. However, PaaS suffers from key adoption challenges that tend to drive corporations toward Infrastructure as a Service (IaaS) instead. In this post I present the top 5 PaaS adoption blockers I have seen so far, specifically for organizations that have a choice between PaaS and IaaS as their technology foundation.

Blocker 1 – Vendor lock-in

I overheard some vendors discuss how they could convince customers to use services that would lock them in for a long period of time. While this may be good for the vendor, it is obviously undesirable for customers, because vendor lock-in prevents business agility in the long term. Some of the most advanced cloud services that offer the foundation for adaptive computing require the use of vendor-specific technologies, and as a result some customers are choosing to avoid these technologies in favor of simpler, less flexible architectures such as IaaS. Paradoxically, this vendor lock-in is doing a disservice to the computing industry and hurting PaaS adoption.

Blocker 2 – Low availability

Most cloud vendors advertise 99.9% availability or higher for certain services. However, you will need to check your cloud provider's Service Level Agreement for information on how that availability is computed. In some cases, the availability is calculated monthly; 99.9% monthly uptime still allows roughly 9 hours of downtime per year. While this may be acceptable to many organizations, businesses for which uptime is critical will find this level of availability too costly for their operations. As a result, organizations that need higher availability must spend significant time and effort building compensating components, which can be too difficult to design and test successfully, hence blocking adoption.

Blocker 3 – Unpredictable pricing

While most cloud providers offer pricing calculators, properly estimating cloud computing costs can be challenging, especially when you need to consider licensing and labor costs (such as migration and operational costs). However, a bigger challenge is the unpredictable nature of pricing, which can change (up or down) in any given month. This is especially true for PaaS applications that are built with adaptive mechanisms designed to automatically expand or reduce IT resources based on actual computing needs. On one hand, adaptive computing provides a more optimized cost curve, because only the necessary computing resources are used when needed; on the other hand, it becomes practically impossible to predict bills on a month-to-month basis. As a result, some companies prefer IaaS deployments with predictable monthly costs over more optimized PaaS implementations that typically yield unpredictable bills.

Blocker 4 – On-premises parity

The need to fall back to on-premises operations, or to use cloud computing as an extension of on-premises operations, means that applications should function regardless of their environment. Because of a lack of parity between cloud technologies and on-premises services, companies struggle with cloud adoption using PaaS. The parity gap mostly affects PaaS solutions; because IaaS is a form of hosting, it can be used to eliminate parity differences between on-premises and cloud components, and even between cloud providers. As a result, the parity gap tends to drive companies toward IaaS implementations instead of more optimized PaaS architectures. This leads to more rigid cloud implementations, which prevents organizations from achieving the economic promise of adaptive computing that only PaaS can deliver.

Blocker 5 – Rate of Change

A high rate of change in cloud computing is normal; cloud vendors compete in a young and evolving field, creating a significant foundation for technology innovation. This rapidly evolving computing model creates new technology platforms yearly, and at times retires services that are not economical or that experience low adoption. However, this rate of change can create significant challenges for organizations that seek stability and rely on multi-year IT investments, hence preventing larger organizations from adopting cloud computing. In some instances, PaaS software development kits see radical changes within just a few months, causing significant rework for certain applications.

Conclusion

Understanding cloud adoption gaps can help you be more successful on your cloud computing roadmap and help you leverage the lessons learned by other organizations. The PaaS adoption blockers identified in this post are not meant to discourage you from adopting the cloud; instead, they are meant to give you a high-level understanding of typical adoption challenges so your organization can prepare for the future of adaptive computing. Many organizations choose IaaS as their first step into cloud computing because it is the least disruptive. However, the highest levels of adaptive computing, which lead to optimum use of IT resources, can only be reached through PaaS, which suffers from the blockers above.

About Herve Roggero

Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting (http://www.bluesyntax.net). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

Cloud Adoption Challenges

While cloud computing makes sense for most organizations and countless projects, I have seen customers significantly struggle with cloud adoption challenges. This blog post is not an attempt to provide a generic assessment of cloud adoption; rather it is an account of personal experiences in the field, some of which may or may not apply to your organization.

Cloud First, Burst?

In the rush to cloud adoption, some companies have made the decision to redesign their core systems with a cloud-first approach. However, a cloud-first approach means that a system may no longer work on-premises after it has been redesigned, specifically if the system depends on Platform as a Service (PaaS) components (such as Azure Tables). While PaaS makes sense when your company is in a position to adopt the cloud exclusively, it can be difficult to leverage for systems that need to work in different clouds or on-premises. As a result, some companies are starting to rethink their cloud strategy by designing for on-premises first and modifying only the components that need to burst into the cloud when necessary. This generally means that the components need to work equally well in any environment, which requires leveraging Infrastructure as a Service (IaaS), additional investments for PaaS applications, or both.

What’s the Problem?

Although most companies can benefit from cloud computing, not all of them can clearly identify a business reason for doing so other than in very generic terms. I have heard many companies claim "it's cheaper" or "it allows us to scale" without any specific metric or clear strategy behind the adoption decision. Other companies have a very clear strategy behind cloud adoption and can precisely articulate business benefits, such as "we have a 500% increase in traffic twice a year, so we need to burst into the cloud to avoid doubling our network and server capacity". Understanding the problem being solved by adopting cloud computing can significantly help organizations determine the optimum path and timeline to adoption.

Performance or Scalability?

I stopped counting the number of times I heard "the cloud doesn't scale; our database runs faster on a laptop". While performance and scalability are related concepts, they are nonetheless different in nature. Performance is a measure of response time under a given load (meaning with a specific number of users), while scalability is the performance curve over various loads. For example, one system could perform well with 100 users but time out with 1,000 users, in which case the system doesn't scale. Another system could have average performance with 100 users but display the exact same performance with 1,000,000 users, in which case the system scales. Understanding that cloud computing does not usually provide high performance, but instead provides the tools necessary to build a scalable system (usually using PaaS services such as queuing and data federation), is fundamental to proper cloud adoption.

Uptime?

Last but not least, you may want to read the Service Level Agreement of your cloud provider in detail if you haven't done so. If you are expecting 99.99% uptime annually, you may be in for a surprise. Depending on the component being used, there may be no associated SLA at all! Other components may be restarted at any time, and services may experience failover conditions weekly (or more often) based on the overall condition of the cloud service provider, most of which is outside of your control. As a result, in PaaS cloud environments (and to a certain extent in some IaaS systems), applications need to assume failure and retry gracefully in order to provide service continuity to end users.

About Herve Roggero

Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting (http://www.bluesyntax.net). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

Is Cloud Computing just Hype?

So… is cloud computing really just hype? I thought I would touch on a topic that seems to come up frequently in the business community: business owners don't always understand the business implications of cloud computing. And for good reason… so many companies claim to "be in the cloud" that it can be hard to figure out what the cloud actually does to make things better.

What it’s not…

Let's start with what the cloud is not. The cloud isn't a magic potion that makes your business more profitable, nor is it always cheaper to use. The cloud doesn't inherently make your vendors' services better, faster, or cheaper either. It's not a programming paradigm that automatically fixes application problems, and it doesn't always make things easier.

So why bother?

What it is…

In my opinion, the cloud is a place where you have the opportunity to rethink how you use centralized resources for your business. It requires careful planning and purposeful action. Simply moving an existing workload, like a SharePoint infrastructure, into the cloud probably won't deliver anything of value by itself. And it probably won't be much cheaper either; in fact, in some cases running in the cloud can increase your long-term costs. However, if you have a set of integration needs, device and mobility requirements, a desire to move certain security components out of your network, or a desire to simplify managed services around your SharePoint infrastructure, hosting SharePoint in the cloud may make sense.

For Independent Software Vendors (ISVs), for example, the cloud can be an opportunity to transform an existing business application into a SaaS model and gain new customers, add new revenue streams, and simplify certain aspects of the application maintenance cycle. Speaking of ISVs, you may want to read a white paper I authored for AAJ Technologies: http://aajtech.com/SiteCollectionDocuments/Successful%20Path%20to%20SaaS%20Applications.pdf.

So while the cloud itself isn't hype, many companies are creating hype around it. Before you start adopting cloud computing or using a vendor's cloud computing offering, ask the right questions, map your business needs to the benefits the cloud vendor provides, and understand the costs involved; then you will be able to leverage the promises of cloud computing for your business.

About Herve Roggero

Herve Roggero, Windows Azure MVP in South Florida, is the founder of Blue Syntax Consulting (http://www.bluesyntax.net). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626).

TechEd 2013–Day 3 Summary

On day 3 I attended an interesting talk on Hekaton, the in-memory OLTP technology now part of SQL Server 2014, by Sunil Agarwal. Sunil showed us how Hekaton compiles stored procedures into C code and loads them as DLLs for SQL Server to use. He also showed us how in-memory tables are organized in memory, and explained that the reason in-memory tables are so fast is that they are organized not as a B-tree but as a hash table. Sunil mentioned that in one test, a statement against a standard table generated over 900,000 CPU instructions, while the same statement against an in-memory table generated about 40,000 instructions. Sunil's demo showed that inserting 500,000 records into an in-memory table took under a second... on his laptop! If you want to see the recorded presentation, go to: http://channel9.msdn.com/Events/TechEd/NorthAmerica/2013/DBI-B204#fbid=ZqUWDASLjhg
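
To give an idea of the syntax, here is a minimal sketch of a memory-optimized table and a natively compiled stored procedure using the SQL Server 2014 syntax discussed in the session (the table and procedure names are mine, and the database is assumed to already have a memory-optimized filegroup):

CREATE TABLE dbo.SessionState
(
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),  -- hash index, not a B-tree
    UserName NVARCHAR(100) NOT NULL,
    LastAccess DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

CREATE PROCEDURE dbo.UpdateLastAccess @SessionId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- this procedure is compiled to C and loaded as a DLL, as described above
    UPDATE dbo.SessionState
    SET LastAccess = SYSUTCDATETIME()
    WHERE SessionId = @SessionId;
END;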

Many other things took place on Day 3, including a side session with the product and licensing group and an amazing private evening event at Emeril's restaurant in New Orleans, where I had the pleasure of seeing many of the MVPs present at TechEd.

About Herve Roggero

Herve Roggero, Windows Azure MVP in South Florida, works for AAJ Technologies (http://www.aajtech.com) and is the founder of Blue Syntax Consulting (http://www.bluesyntax.net). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626).

TechEd 2013 - Day 2 Summary

Wow… what another amazing day. I attended two excellent sessions on Day 2 of TechEd: Predictive Analytics with Microsoft Big Data (Hadoop / HDInsight) [Val Fontama and Saptak Sen] and Best Practices for Building Your Strategy for a Private Cloud [Eduardo Kassner]. In the Big Data talk, Val and Saptak discussed important considerations for building a predictive analytics environment in general terms, then demonstrated how to build one using Hadoop. It was pretty fascinating. Val presented a sample analytics workflow going through 5 phases: Business Problem, Data Collection and Preparation, Model Development, Model Deployment, and Monitoring. Last but not least, Val also presented the common pitfalls in building a predictive analytics environment: sample bias, over-fitting the model, and poor interpretation.

In the Private Cloud presentation, Eduardo used both sharp remarks and genuinely funny (and welcome) jokes that drove the points home nicely. Eduardo had some really interesting discussion points that present private cloud computing in a different light than the vendor/hardware discussion you typically hear. In fact, his presentation was so well constructed, and the available online resources are so well thought out, that this might very well be the best presentation on cloud computing I have ever attended. Eduardo presented a Capability Maturity Model that maps a corporation's objectives and processes in a way that makes it possible to deliver incremental cloud computing services that make sense for the organization, in a completely cloud-vendor-agnostic model. If you are interested in learning more, check out the Optimization model from Microsoft here: http://www.microsoft.com/optimization/default.mspx.

About Herve Roggero

Herve Roggero, Windows Azure MVP in South Florida, works for AAJ Technologies (http://www.aajtech.com) and is the founder of Blue Syntax Consulting (http://www.bluesyntax.net). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626).
