New Microsoft Azure SQL performance metrics
For those of us who adopted Microsoft's SQL Azure database offering early, there were some significant obstacles, such as limited database size (when we started, Microsoft was still talking about raising the maximum to 50 GB) and limited backup and recovery options.

Over the years, Microsoft has steadily enhanced its database offering, and vendors such as RedGate have stepped in with services to cover the gaps.

Now Microsoft is rolling out a new generation of databases, addressing three key issues:
  1. Point-in-time recovery and geo-replication to overcome backup and recovery deficiencies.
  2. Consistent performance to overcome the unpredictability of shared-resource environments.
  3. Larger database sizes (up to 500 GB).
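
For reference, the tier and maximum size are settings on the database itself; creating or scaling one of the new databases looks roughly like the T-SQL below (the database name and service objectives are placeholders, not our actual configuration).

    -- Run against the master database on the logical server.
    -- Create a database on one of the new service tiers (Premium allows up to 500 GB).
    CREATE DATABASE ConsolidatedShard01
    ( EDITION = 'Premium', SERVICE_OBJECTIVE = 'P2', MAXSIZE = 500 GB );

    -- An existing database can be moved between tiers and sizes with ALTER DATABASE.
    ALTER DATABASE ConsolidatedShard01
    MODIFY ( EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2', MAXSIZE = 250 GB );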

These changes come with a significant price increase for anyone who sharded their data across many small databases. We are now trying to determine which performance level of the new offerings we will need to sustain our current performance. Fortunately, Microsoft is reporting current usage in terms of the new offerings' unit of measure, the Database Throughput Unit (DTU).
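
As a rough first pass at sizing, the usage history behind those DTU numbers can be summarized per database from the logical master database. A sketch along these lines (sys.resource_stats keeps roughly two weeks of five-minute samples, and the percentages are relative to each database's current tier limits):

    -- Run against the logical master database.
    -- Peak resource use per database over the last two weeks of retained history.
    SELECT   database_name,
             MAX(avg_cpu_percent)       AS peak_cpu_percent,
             MAX(avg_data_io_percent)   AS peak_data_io_percent,
             MAX(avg_log_write_percent) AS peak_log_write_percent
    FROM     sys.resource_stats
    WHERE    start_time >= DATEADD(day, -14, GETUTCDATE())
    GROUP BY database_name
    ORDER BY peak_cpu_percent DESC;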

If you are looking for these metrics, check out their new Real-Time Performance Metrics.
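
Inside an individual database, the same picture is available from the sys.dm_db_resource_stats view, which exposes 15-second samples covering roughly the last hour. The "approximate DTU percent" below is simply the most constrained resource in each sample, which is my simplification rather than an official formula:

    -- Run inside the user database; each row is a 15-second sample.
    SELECT TOP (20)
            end_time,
            avg_cpu_percent,
            avg_data_io_percent,
            avg_log_write_percent,
            -- Rough "DTU %": whichever resource is closest to its limit at that moment.
            (SELECT MAX(v)
             FROM (VALUES (avg_cpu_percent),
                          (avg_data_io_percent),
                          (avg_log_write_percent)) AS t(v)) AS approx_dtu_percent
    FROM    sys.dm_db_resource_stats
    ORDER BY end_time DESC;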

We were surprised by how heavy our usage is, which has led us to re-examine our indexing and optimize our costliest queries. Once we have determined our performance needs on the new databases, we will need to combine our existing small databases into fewer large ones to keep costs down. We hope to find some performance gains in the process as we eliminate some of the sharding overhead.
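
For the tuning side, the standard dynamic management views are enough to surface candidates; here is a sketch of the kind of query we start from (nothing in it is specific to the new tiers):

    -- Top cached statements by total CPU; a starting point for index and query tuning.
    SELECT TOP (10)
            qs.execution_count,
            qs.total_worker_time / 1000 AS total_cpu_ms,
            qs.total_logical_reads,
            SUBSTRING(st.text,
                      (qs.statement_start_offset / 2) + 1,
                      ((CASE qs.statement_end_offset
                            WHEN -1 THEN DATALENGTH(st.text)
                            ELSE qs.statement_end_offset
                        END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM    sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;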

Posted on Thursday, September 18, 2014 10:48 AM | Azure, Cloud, Database

