Creating a SharePoint DataLake with SQL Server using Enzo Unified

In this blog post I will show how you can easily copy a SharePoint list to a SQL Server table, and keep the data updated at a specific frequency, allowing you to easily create a DataLake for your SharePoint lists. This works with SharePoint 2013 and higher, and with SharePoint Online. While you could spend a large amount of time learning the SharePoint APIs and their many subtleties, it is far more efficient to configure simple replication jobs that work under most scenarios.

The information provided in this post will help you get started in setting up replication of SharePoint lists to a SQL Server database, so that you can query the local SQL Server database from Excel, reporting tools, or any other SQL client. You should also note that Enzo Unified provides direct real-time access to SharePoint lists through native SQL commands, so you can view, manage and update SharePoint list items.

image

 

Installing Enzo Unified

To try the steps provided in this lab, you will need the latest version of Enzo Unified (1.7 or higher) provided here:  http://www.enzounified.com/download. The download page also contains installation instructions.

Enzo Unified Configuration

Once Enzo Unified has been installed, start Enzo Manager (located in the Enzo Manager directory where Enzo was installed). Click on File –> Connect and enter the local Enzo connection information. 

NOTE:  Enzo Unified is a Windows Service that looks like SQL Server; you must connect Enzo Manager to Enzo Unified which by default is running on port 9550. The password should be the one you specified during the installation steps. The following screen shows typical connection settings against Enzo Unified:

image

Create Connection Strings

Next, you will need to create “Central Connection Strings” so that Enzo will know how to connect to the source system (SharePoint) and the destination database (SQL Server). You manage connection strings from the Configuration –> Manage Connection Strings menu. In the screen below, you can see that a few connection strings have been created. The first one is actually a connection string to Enzo Unified, which we will need later.

image

The next step is to configure the SharePoint adapter by specifying the credentials used by Enzo Unified. Configuring the SharePoint adapter is trivial; three parameters are needed: a SharePoint login name, the password for the login, and the URL of your SharePoint site. Make sure the login has sufficient rights to access SharePoint lists and fields.

image

Once the configuration to the SharePoint site is complete, you can execute commands against Enzo Unified using SQL Server Management Studio.

Fetch records from SharePoint using SQL Server Management Studio

To try the above configuration, open SQL Server Management Studio, and connect to Enzo Unified (not SQL Server). From the same machine where Enzo is running, a typical connection screen looks like this:

image

Once you are connected to Enzo Unified, and assuming your SharePoint site has a list called Enzo Test, you can run simple SQL commands like this:

SELECT * FROM SharePoint.[list@Enzo Test]

Create a Virtual Table

You will also need to create a Virtual Table in Enzo so that the SharePoint list looks like a table in Enzo Unified. A Virtual Table is made of columns that match the SharePoint list you want to replicate. To do this, open Enzo Manager, select the SharePoint adapter, and create a new Virtual Table by clicking on the NEW icon; provide a name for the Virtual Table, and select the columns to create through a picker. In the example below, I am creating a Virtual Table called vEnzoTest, which mirrors a SharePoint List called ‘Enzo Test’.

image

The picker allows you to execute the SQL command to validate it is working. Clicking OK will automatically add all the requested columns to the Virtual Table.

Make sure to pick the ID and Modified columns; this will be required later.

image

Once completed, I can run commands like this against the SharePoint adapter using SQL Server Management Studio:

SELECT * FROM SharePoint.vEnzoTest

SELECT * FROM SharePoint.vEnzoTest WHERE ID > 100

The difference from the previous SQL command is that the virtual table only returns the columns defined on the Virtual Table.

Configure Data Sync Jobs

Once the Virtual Table has been created, you can add new Jobs to copy the SharePoint data into SQL Server, and keep updates synchronized with the SQL Server table. A simple configuration screen allows you to set up the data sync jobs. You can choose which operations to replicate, the destination table, and a schedule for data updates. In the example below I am setting up three data sync jobs for the vPosts Virtual Table: initialization, insert/update, and delete, refreshed every 5 minutes.

image

You can also use Enzo Manager to monitor the data sync jobs, or run them manually.

Once the jobs have been created, you can connect to the SQL Server database (not Enzo Unified) and see the replicated data. For example, connect to SQL Server and run the following statement, assuming the destination table shown in the screenshot above has been created.

SELECT * FROM DataLake.SharePoint.vPosts

Conclusion

This post shows you how to easily configure Enzo Unified to replicate SharePoint lists to a local SQL Server database to enable reporting and other data integration projects such as a Data Lake.

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Enzo Unified (http://www.enzounified.com/). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

Writing a Voice Activated SharePoint Todo List - IoT App on RPi

Ever wanted to write a voice activated system on an IoT device to keep track of your “todo list”, hear your commands being played back, and have the system send you a text message with your todo list when it’s time to walk out the door?  Well, I did. In this blog post, I will provide a high level overview of the technologies I used, why I used them, a few things I learned along the way, and partial code to assist with your learning curve if you decide to jump on this.  I also had the pleasure of demonstrating this prototype at Microsoft’s Community Connections in Atlanta in front of my colleagues.

How It Works

I wanted to build a system using 2 Raspberry Pis (one running Windows 10 IoT Core, and another running Raspbian) that achieved the following objectives:

  • Have 2 RPis that communicate through the Azure Service Bus
    This was an objective of mine, not necessarily a requirement; the intent was to have two RPis running different operating systems communicate asynchronously without sharing the same network.
  • Learn about the Microsoft Speech Recognition SDK
    I didn’t want to send data to the cloud for speech recognition, so I needed an SDK on the RPi to perform this function; I chose the Microsoft Speech Recognition SDK for this purpose.
  • Communicate with multiple cloud services without any SDK so that I could program the same way on Windows and Raspbian (Twilio, Azure Service Bus, Azure Table, SharePoint Online)
    I also wanted to minimize the learning curve of finding which SDKs could run on both Windows 10 IoT Core and Raspbian (Linux), so I used Enzo Unified to abstract the APIs and instead send simple HTTPS commands, allowing me to have an SDK-less development environment (except for the Speech Recognition SDK). Seriously… go find an SDK for SharePoint Online that runs on both Raspbian and UWP (Windows 10 IoT Core).

The overall solution looks like this:

image

Technologies

In order to achieve the above objectives, I used the following bill of materials:

  • 2x Raspberry Pi 2 Model B: Note that one RPi runs Windows 10 IoT Core, and the other runs Raspbian (http://amzn.to/2qnM6w7)
  • Microphone: I tried a few, but the best one I found for this project was the Mini AKIRO USB Microphone (http://amzn.to/2pGbBtP)
  • Speaker: I also tried a few, and while this speaker has a known issue on the RPi with Windows, the Logitech Z50 was the best one (http://amzn.to/2qrNkop)
  • USB Keyboard: I needed a simple keyboard and mouse while traveling, so I picked up the iPazzPort Mini Keyboard; awesome… (http://amzn.to/2rm0FOh)
  • Monitor: You can use an existing monitor, but I also used the portable ATian 7 inch display. A bit small, but it does the job. (http://amzn.to/2pQ5She)
  • IoT Dashboard: Utility that allows you to manage your RPis running Windows; make absolutely sure you run the latest build; it should upgrade automatically, but mine didn’t. (http://bit.ly/2rmCWOU)
  • Windows 10 IoT Core: The Microsoft O/S used on one of the RPis; use the latest build (mine was 15063). If you are looking for instructions on how to install Windows from a command prompt, the link provided proved useful. (http://bit.ly/2pG9gik)
  • Raspbian: Your RPi may be delivered with an SD card preloaded with the utilities needed to install Raspbian; connecting to a wired network makes the installation a breeze. (http://bit.ly/2rbnp7u)
  • Visual Studio 2015: I used VS2015 and C# to build the prototype for the Windows 10 IoT Core RPi (http://bit.ly/2e6ZGj5)
  • Python 3: On the Raspbian RPi, I coded in Python 3 (http://bit.ly/1L2Ubdb)
  • Enzo Unified: I installed and configured an Enzo Unified instance (version 1.7) in the Azure cloud; for Enzo to talk to SharePoint Online, Twilio, Azure Service Bus and Azure Storage, I also needed accounts with these providers. You can try Enzo Unified free for 30 days. (http://bit.ly/2rm4ymt)

 

Things to Know

Creating a prototype involving the above technologies will inevitably lead you to collect a few nuggets along the way. Here are a few of mine.

Disable Windows 10 IoT Core Updates

While disabling updates is generally not recommended, IoT projects usually require a predictable environment that does not reboot in the middle of a presentation. In order to disable Windows Updates on this O/S, I used information published by Mike Branstein on his blog: http://bit.ly/2rcOXt9

Try different hardware, and keep your receipts…

I had to try a few different components to find the right ones. The normally recommended S-150 USB Logitech speakers did not work for me (I lost all my USB ports and network connectivity as soon as I plugged them in), and neither did the JLab USB Laptop speakers. I also tried the 7.1 Channel USB External Sound Card but was unable to make it work (others were successful). For audio input, I also tried the VAlinks Mini Flexible USB microphone; while it worked well, it picked up too much noise compared to the AKIRO, and became almost unusable in a room with 20 people and background noise.

Hotel WiFi Connections

This was one of the most frustrating parts of this whole experience on Windows 10 IoT Core. You should know that this operating system does not currently come equipped with a browser. This means that you cannot easily connect to a hotel network, since doing so usually requires starting a browser so that you can enter a user id and password provided by the hotel. Furthermore, since there is also no way to “forget” a previously registered network, you can find yourself in a serious bind… I first purchased the Skyroam Mobile Hotspot, hoping it would provide the answer. Unfortunately, the only time I tried it, in Tampa Florida, the device could not obtain a connection. So I ended up adding a browser object into my UWP application and forcing it to refresh a specific image every time I start the app; this forces the hotel login page to show up when needed. I am still looking for a good solution to this problem.
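For reference, the browser-refresh trick amounts to something like this in the UWP code-behind (the WebView control name and probe URL are illustrative placeholders):

// Requesting any plain HTTP resource causes a captive portal to intercept
// the call and present its login page inside the WebView
private void ForcePortalLogin()
{
    portalView.Navigate(new Uri("http://example.com/probe.jpg"));
}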

Speech Privacy Policy on Windows

Because parts of the code I am running leverage the underlying Cortana APIs, you must accept the Cortana privacy policy; this is required only the first time you run the application, but it is obviously a major nightmare for applications you may want to ship. I am not aware of any programmatic workaround at this time. This stackoverflow post provides information about the policy and how to accept it.

What It Looks Like

A picture is worth a thousand words… so here is the complete setup:

20170502_225941

C# Code

Since this is an ongoing prototype I will not share the complete code at this time; however I will share a few key components/techniques I used to make this work.

Speech Recognition

I used both continuous dictation speech recognition, and grammar-based recognition from the Microsoft Speech Recognition API. The difference is that the first one gives you the ability to listen to “anything” being said, and the other will only give you a set of results that match the expected grammar. Both methods give you a degree of confidence so you can decide if the command/text input was sufficiently clear. The following class provides a mechanism for detecting input either through continuous dictation or using a grammar file. The timeout ensures that you do not wait forever. This code also returns the confidence level of the capture.

 

using Enzo.UWP;
using System;
using System.Collections.Generic;

using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;
using Windows.ApplicationModel;
using Windows.Devices.Gpio;
using Windows.Media.SpeechRecognition;
using Windows.Media.SpeechSynthesis;
using Windows.Storage;

namespace ClientIoT
{

    public class VoiceResponse
    {
        public string Response = null;
        public double RawConfidence = 0;
    }

    public class VoiceInput
    {
        private const int SPEECH_TIMEOUT = 3;
        private System.Threading.Timer verifyStatus;
        private string lastInput = "";
        private double lastRawConfidence = 0;
        private bool completed = false;
        private bool success = false;

        public async Task<VoiceResponse> WaitForText(string grammarFile)
        {
            return await WaitForText(SPEECH_TIMEOUT, grammarFile);
        }

        public async Task<VoiceResponse> WaitForText(int timeout = SPEECH_TIMEOUT, string grammarFile = null)
        {
            var resp = new VoiceResponse();
            try
            {
                success = false;
                completed = false;
                lastInput = "";
                lastRawConfidence = 0;

                SpeechRecognizer recognizerInput;
                DateTime dateNow = DateTime.UtcNow;

                recognizerInput = new SpeechRecognizer();
                recognizerInput.ContinuousRecognitionSession.ResultGenerated += ContinuousRecognitionSession_InputResultGenerated;
                recognizerInput.StateChanged += InputRecognizerStateChanged;
                recognizerInput.Timeouts.BabbleTimeout = TimeSpan.FromSeconds(timeout);
                recognizerInput.ContinuousRecognitionSession.Completed += ContinuousRecognitionSession_Completed;
                recognizerInput.ContinuousRecognitionSession.AutoStopSilenceTimeout = TimeSpan.FromSeconds(SPEECH_TIMEOUT);
                recognizerInput.Constraints.Clear();

                if (grammarFile != null)
                {
                    StorageFile grammarContentFile = await Package.Current.InstalledLocation.GetFileAsync(grammarFile);
                    SpeechRecognitionGrammarFileConstraint grammarConstraint = new SpeechRecognitionGrammarFileConstraint(grammarContentFile);
                    recognizerInput.Constraints.Add(grammarConstraint);
                }

                var compilationResult = await recognizerInput.CompileConstraintsAsync();

                // If successful, display the recognition result.
                if (compilationResult.Status != SpeechRecognitionResultStatus.Success)
                {
                    Debug.WriteLine(" ** VOICEINPUT - VoiceCompilationError - Status: " + compilationResult.Status);
                }

                recognizerInput.ContinuousRecognitionSession.AutoStopSilenceTimeout = TimeSpan.FromSeconds(timeout);
                recognizerInput.RecognitionQualityDegrading += RecognizerInput_RecognitionQualityDegrading;
                await recognizerInput.ContinuousRecognitionSession.StartAsync();

                System.Threading.SpinWait.SpinUntil(() =>
                    completed
                );
               
                resp = new VoiceResponse() { Response = lastInput, RawConfidence = lastRawConfidence };
               
                try
                {
                    recognizerInput.Dispose();
                    recognizerInput = null;
                }
                catch (Exception ex)
                {
                    Debug.WriteLine("** WaitForText (1) - Dispose ** " + ex.Message);
                }
            }
            catch (Exception ex2)
            {
                Debug.WriteLine("** WaitForText ** " + ex2.Message);
            }
            return resp;
        }

        private void RecognizerInput_RecognitionQualityDegrading(SpeechRecognizer sender, SpeechRecognitionQualityDegradingEventArgs args)
        {
            try
            {
                Debug.WriteLine("VOICE INPUT - QUALITY ISSUE: " + args.Problem.ToString());
            }
            catch (Exception ex)
            {
                Debug.WriteLine("** VOICE INPUT - RecognizerInput_RecognitionQualityDegrading ** " + ex.Message);
            }
        }

        private void ContinuousRecognitionSession_Completed(SpeechContinuousRecognitionSession sender, SpeechContinuousRecognitionCompletedEventArgs args)
        {
            if (args.Status == SpeechRecognitionResultStatus.Success
                || args.Status == SpeechRecognitionResultStatus.TimeoutExceeded)
                success = true;
            completed = true;
           
        }

        private void ContinuousRecognitionSession_InputResultGenerated(SpeechContinuousRecognitionSession sender, SpeechContinuousRecognitionResultGeneratedEventArgs args)
        {
            try
            {
                lastInput = "";
                if ((args.Result.Text ?? "").Length > 0)
                {
                    lastInput = args.Result.Text;
                    lastRawConfidence = args.Result.RawConfidence;
                    Debug.WriteLine(" " + lastInput);
                }
            }
            catch (Exception ex)
            {
                Debug.WriteLine("** ContinuousRecognitionSession_InputResultGenerated ** " + ex.Message);
            }
        }

        private void InputRecognizerStateChanged(SpeechRecognizer sender, SpeechRecognizerStateChangedEventArgs args)
        {
            Debug.WriteLine("  Input Speech recognizer state: " + args.State.ToString());
        }
    }
}

For example, if you want to wait for a “yes/no” confirmation, with a 3 second timeout, you would call the above code as such:

var yesNoResponse = await (new VoiceInput()).WaitForText(3, YESNO_FILE);
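The returned VoiceResponse can then be inspected before acting on the command; for example (the 0.5 confidence threshold is an arbitrary value chosen for illustration):

// Act only when the recognizer is reasonably confident; 0.5 is an illustrative threshold
bool confirmed = yesNoResponse.RawConfidence > 0.5 &&
    (yesNoResponse.Response == "yes" || yesNoResponse.Response == "yep" || yesNoResponse.Response == "yeah");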

And the yes/no grammar file looks like this:

<?xml version="1.0" encoding="utf-8" ?>
<grammar
  version="1.0"
  xml:lang="en-US"
  root="enzoCommands"
  xmlns="http://www.w3.org/2001/06/grammar"
  tag-format="semantics/1.0">

  <rule id="root">
    <item>
      <ruleref uri="#enzoCommands"/>
      <tag>out.command=rules.latest();</tag>
    </item>
  </rule>

  <rule id="enzoCommands">
    <one-of>
      <item> yes </item>
      <item> yep </item>
      <item> yeah </item>
      <item> no </item>
      <item> nope </item>
      <item> nah </item>
    </one-of>
  </rule>

</grammar>

Calling Enzo Unified using HTTPS to Add a SharePoint Item

Another important part of the code is its ability to interact with other services through Enzo Unified, so that no SDK is needed on the UWP application. For an overview on how to access SharePoint Online through Enzo Unified, see this previous blog post.

The following code shows how to easily add an item to a SharePoint list through Enzo Unified. Posting this request to Enzo requires two parameters (added as headers) called “name” and “data” (data is an XML string containing the column names and values to be added as a list item).

public static async Task SharePointAddItem(string listName, string item)
{
    string enzoCommand = "/bsc/sharepoint/addlistitemraw";
    List<KeyValuePair<string, string>> headers = new List<KeyValuePair<string, string>>();

    string data = string.Format("<root><Title>{0}</Title></root>", item);

    headers.Add(new KeyValuePair<string, string>("name", listName));
    headers.Add(new KeyValuePair<string, string>("data", data));

    await SendRequestAsync(HttpMethod.Post, enzoCommand, headers);
}
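Calling the helper is then a one-liner; for example, to add an item titled “Buy milk” (an arbitrary value) to the ‘Enzo Test’ list used earlier in this series:

await SharePointAddItem("Enzo Test", "Buy milk");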

And the SendRequestAsync method below shows you how to call Enzo Unified. Note that I added two cache control filters to avoid HTTP caching, and additional flags for calling Enzo Unified on an HTTPS port where a self-signed certificate is installed.

private static async Task<string> SendRequestAsync(HttpMethod method, string enzoCommand, List<KeyValuePair<string, string>> headers)
{
    string output = "";
    var request = EnzoUnifiedRESTLogin.BuildHttpWebRequest(method, enzoCommand, headers);

    var filter = new Windows.Web.Http.Filters.HttpBaseProtocolFilter();
    if (IGNORE_UNTRUSTEDCERT_ERROR)
    {
        filter.IgnorableServerCertificateErrors.Add(Windows.Security.Cryptography.Certificates.ChainValidationResult.Untrusted);
        filter.IgnorableServerCertificateErrors.Add(Windows.Security.Cryptography.Certificates.ChainValidationResult.InvalidName);
    }
    filter.CacheControl.ReadBehavior = Windows.Web.Http.Filters.HttpCacheReadBehavior.MostRecent;
    filter.CacheControl.WriteBehavior = Windows.Web.Http.Filters.HttpCacheWriteBehavior.NoCache;

    Windows.Web.Http.HttpClient httpClient = new Windows.Web.Http.HttpClient(filter);

    try
    {
        using (var response = await httpClient.SendRequestAsync(request))
        {
            output = await response.Content.ReadAsStringAsync();
        }
    }
    catch (Exception ex)
    {
        System.Diagnostics.Debug.WriteLine(" ** Send Http request error: " + ex.Message);
    }
    return output;
}

Last but not least, the BuildHttpWebRequest method looks like this; it ensures that the proper authentication headers are added, along with the authentication identifier for Enzo:

public static Windows.Web.Http.HttpRequestMessage BuildHttpWebRequest(Windows.Web.Http.HttpMethod httpmethod, string uri, List<KeyValuePair<string, string>> headers)
{
    bool hasClientAuth = false;

    Windows.Web.Http.HttpRequestMessage request = new Windows.Web.Http.HttpRequestMessage();

    request.Method = httpmethod;
    request.RequestUri = new Uri(ENZO_URI + uri);

    if (headers != null && headers.Count > 0)
    {
        foreach (KeyValuePair<string, string> hdr in headers)
        {
            request.Headers[hdr.Key] = hdr.Value;
        }
    }

    if (!hasClientAuth)
        request.Headers["authToken"] = ENZO_AUTH_GUID;

    return request;
}
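For completeness, ENZO_URI, ENZO_AUTH_GUID, and IGNORE_UNTRUSTEDCERT_ERROR are constants defined elsewhere in the project; they could look like the following (all three values are placeholders for your own environment):

// Placeholders: point to your Enzo Unified HTTPS endpoint and use the authToken of your Enzo account
private const string ENZO_URI = "https://myenzoserver:19551";
private const string ENZO_AUTH_GUID = "00000000-0000-0000-0000-000000000000";
private const bool IGNORE_UNTRUSTEDCERT_ERROR = true;  // true when a self-signed certificate is used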

Text to Speech

There is also the Text to Speech aspect, where the system speaks back what it heard, before confirming and acting on the command. Playing back is actually a bit strange in the sense that it requires a UI thread. In addition, it seems that Windows 10 IoT Core and Raspberry Pi don’t play nice together; it seems that every time a playback occurs, a loud tick can be heard before and after. A solution appears to be using USB speakers, but none worked for me. The code below simply plays back a specific text and waits a little while in an attempt to give enough time for the playback to finish (the code is non-blocking, so the SpinWait attempts to block the code until completion of the playback).

private async Task Say(string text)
{
    SpeechSynthesisStream ssstream = null;

    try
    {
        SpeechSynthesizer ss = new SpeechSynthesizer();
        ssstream = await ss.SynthesizeTextToStreamAsync(text);
    }
    catch (Exception exSay)
    {
        Debug.WriteLine(" ** SPEECH ERROR (1) ** - " + exSay.Message);
    }

    // Playback requires the UI thread; media is a playback helper defined on the page
    var task1 = this.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, async () =>
    {
        try
        {
            await media.PlayStreamAsync(ssstream);
        }
        catch (Exception exSay)
        {
            Debug.WriteLine(" ** SPEECH ERROR (2) ** - " + exSay.Message);
        }
    });

    // Wait a little for the speech to complete (rough estimate based on text length)
    System.Threading.SpinWait.SpinUntil(() => 1 == 0, text.Length * 150);
}

Calling the above code is trivial:

await Say("I am listening");

 

Python

The Python code was trivial to build; this RPi was responsible for monitoring events in the Azure Service Bus and turning the attached LED on and off. The following pseudo code shows how to call Enzo Unified from Python without using any SDK:

import requests

# Endpoint shortened and key masked; use your Enzo Unified URL and authToken
enzourl_receiveMsg = "http://…/bsc/azurebus/receivedeletefromsubscription"
enzo_guid = "secretkeygoeshere"
topicName = "enzoiotdemo-general"
subName = "voicelight"

while True:
    try:
        headers = {
            'topicname': topicName,
            'authToken': enzo_guid,
            'subname': subName,
            'count': '1',
            'timeoutSec': '1'
        }
        response = requests.get(enzourl_receiveMsg, headers=headers)
        resp = response.json()
        if len(resp['data']['Table1']) > 0:
            # extract response here…
            pass
    except Exception as ex:
        print(ex)

 

Conclusion

This prototype demonstrated that while there were a few technical challenges along the way, it was relatively simple to build a speech recognition engine that can understand commands using Windows 10 IoT Core, .NET, and the Microsoft Speech Recognition SDK. 

Furthermore, the intent of this project was also to demonstrate that Enzo Unified makes it possible to code against multiple services without the need for an SDK on the client side, regardless of the platform and the development language. Abstracting SDKs through simple HTTP calls makes it possible to access Twilio, SharePoint Online, Azure services and much more without any additional libraries on the client system.

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Enzo Unified (http://www.enzounified.com/). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

MS Dynamics CRM Adapter now available on Enzo Unified

I am proud to present our first third-party adapter, created by Aldo Garcia. With this adapter, you can communicate with Microsoft Dynamics CRM Online directly from SQL Server Management Studio, and from stored procedures/views/functions. You can also programmatically access MS Dynamics CRM records through this adapter using basic HTTP commands and headers, without the need to download/use/install the Dynamics CRM SDK. This provides a simple, lightweight programmatic environment for both SQL and lightweight clients (such as IoT and mobile devices).

Aldo’s adapter allows you to issue simple SQL EXEC or SELECT/INSERT/UPDATE/DELETE commands to manage records in MS Dynamics. For example, this command retrieves all the records found in the account table:

SELECT * FROM DynamicsCRM.RetrieveMultiple@account

And this command does the same thing using an EXEC request:

EXEC DynamicsCRM.RetrieveMultiple 'account'

The SELECT command supports simple WHERE clauses, TOP N, NULL field filters, and ORDER BY operations.

To view how Aldo is able to read/write MS Dynamics data through simple SQL commands, visit his blog post here.

And to see how to use the MS Dynamics adapter to load data in bulk, please visit this blog post.

ENZO UNIFIED - DEV CONTEST 2017

Compete for a chance to win $2,000!

That’s right. We are offering a $2,000 first prize for the most innovative use of our technology. Within just a few hours you will be surprised how much you can accomplish with our .NET Developer SDK. In addition, your work could be advertised nationally and promoted at customer locations worldwide as part of our solutions, and could earn you significant additional revenue if you so desire.

CONTEST REGISTRATION:  http://tinyurl.com/j2erpm2

 

Prizes

 

  • First Place Prize: $2,000
  • Second Place Prize: $750
  • Third Place Prize: $250

All submissions must be made by midnight on May 1st, 2017 (US Eastern Time).

Contact us at info@enzounified.com for more information

Additional Information

How to write your first adapter:  http://geekswithblogs.net/hroggero/archive/2017/03/14/writing-your-first-adapter-for-enzo-unified.aspx

Download the Enzo Unified SDK at this location:  http://www.enzounified.com/downloads/sdk/1.7/setup-sdk-17.exe

Download the help file for the Enzo Unified SDK here:  https://goo.gl/BbYm83

Download the installation doc for the Enzo Unified SDK here:  https://goo.gl/GQDvB9

Writing your first adapter for Enzo Unified

This blog post shows you how to write your first adapter using the Enzo Unified SDK, so that you can extend the capabilities of SQL Server and/or create a light-weight HTTP proxy for applications, IoT devices or mobile applications. This post assumes you have a high level understanding of Enzo Unified (http://www.enzounified.com) and that you have installed and configured the Enzo Unified SDK (download details below).

In this post, you will learn how to create an adapter that returns the current UTC time using the official US NIST time servers. You will write this logic in a new adapter called TimeServers and expose a method called GetUTCTime(). Once coded, you will be able to access this method through SQL Server as a stored procedure, or through an HTTP call from any client. Because Enzo Unified exposes these methods as HTTP calls, any client running on any operating system capable of making HTTP calls will be able to run this command.

Pre-Requisites

To successfully build this adapter, you will need the following:

  • Visual Studio 2012 or higher
  • SQL Server 2014 Express Edition or higher
  • Enzo Unified SDK (see download links below)

 

Create an Empty Adapter Project

After downloading and installing the SDK, and registering the Enzo Unified project template per installation instructions, you will be able to create a new Adapter project.

  • Start Visual Studio 2012, and select New Project
  • Select Enzo Unified DataAdapter (Annotated) under Templates –> Visual C# –> Enzo Unified
  • Enter the adapter name:  TimeServers
  • Verify the location of your adapter (it should be under the namespace you specified during installation) and click OK

image

You will need to change two settings:

  • Open TimeServers.cs
  • On line 5, replace the namespace with your own namespace (MY is the namespace I entered)
  • Change the project configuration from Any CPU to x64

image

  • Make sure the project compiles successfully
  • Press F5 to start the adapter (at this point, the adapter does nothing yet)
  • The Enzo Unified Development Host will start and a user interface will be displayed allowing you to test your adapter
  • Type this command, and press F5 to execute it: 

exec my..help

image

Stop the project. We are ready to add the method to this adapter.

Note that while you can use the Development Host for simple testing, it is recommended to use SQL Server Management Studio for a better experience. We will review how to connect to Enzo Unified in more detail later.

Create the GetUSTime Method

Let’s replace the entire content of the TimeServers.cs file; we will review the implementation next.

using BSC.Enzo.Unified;
using BSC.Enzo.Unified.Logging;
using System;
using System.Collections.Generic;
using System.IO;


namespace MY
{
    public class TimeServers : DataAdapter
    {
        // this is the main call to register your functionality with the Enzo instance
        public override void RegisterAdapter()
        {
            #region Descriptive details
            Title = "NIST Time Server";            // a summary of the adapter's operations
            Description = "Access the NIST time servers";    // more detailed info what your adapter does
            Author = "Me";                            // your or your company name
            AuthorCopyright =                             // your copyright details
                "Copyright (c) 2017 MyCo. All Rights Reserved.";
            AuthorUrl = "www.myco.com";                    // Link to your web site (optional)
            AuthorEmail = "me@myco.com";                // your contact email address
            #endregion


            LogSeverityThreshold = Severity.Debug;        // level of logging detail


            #region Register any configuration variables that can be referenced by your DataAdapter (optional)
           
            // This adapter will require a configuration setting, which can be different per Login/user
            RegisterConfig(
                    "int defaultTimeout|Default timeout in milliseconds||r",
                    "nistServer|The NIST main URI (time.nist.gov)||r"
                );
            #endregion


            #region Optionally register session variables to be automatically created upon client
            // We will keep the list of valid IP Addresses for the time servers as provided by NIST
            RegisterSessionVar("validIPAddress", "");
            #endregion

            #region Handler: GetUTCTime
            RegisterHandler(
                "GetUTCTime,UTCTime|Time",             // name info for your handler
                HandlerOptions.Select,                        // table operations, etc.
                "Obtains the current UTC Time using the NIST servers.",                    // a brief description of functionality
                new[] { "exec TimeServers.GetUTCTime", "exec TimeServers.GetUTCTime 5000" },    // one or more examples
                GetUTCTime,                                    // method used to process request
                new[]                                        // zero or more arguments method expects
                {
                    "int timeout|The timeout for this request"
                },
                new[]                                        // zero or more output columns of rows returned
                {
                    "datetime currentUTCTime|The NIST provided UTC time",
                    "lastIPAddress|The last IP Address used to get the time"
                });
            #endregion
        }

       
        #region Private helper methods

        internal static DateTime GetCurrentUTCDate(ExecEventArgs e, EventResult retval, int timeout, string nistServer, out string ipAddress)
        {
            // Get the last valid IP Address from this session
            string validIPAddress = e.GetVar("validIPAddress", null); // Did we already resolve the list of IP Addresses?
            ipAddress = null;

            int maxAttempts = (validIPAddress == null) ? 5 : 1; // Allow repeating only the first time we make this call in a session

            DateTime utcDate = DateTime.MinValue;

            string actualTime = "";
            string time = "";
            string lastIP = "";

            while (maxAttempts-- > 0)   // attempt the call up to maxAttempts times
            {
           
                string NISTIP = ipAddress = validIPAddress ?? nistServer;   // Use the last valid IP Address, or the generic NIST endpoint

                retval.AddMessage("NIST Server/IP: " + NISTIP); // Add the server IP Address used to return this date

                try
                {
                    System.Net.Sockets.TcpClient tcp = new System.Net.Sockets.TcpClient(NISTIP, 13);    // Make sure your firewall allows this port through...
                    tcp.Client.ReceiveTimeout = timeout;
                    tcp.Client.SendTimeout = timeout;

                    using (StreamReader reader = new StreamReader(tcp.GetStream()))
                    {
                        time = reader.ReadToEnd();

                        lastIP = tcp.Client.RemoteEndPoint.ToString();
                        if (lastIP != "")
                            lastIP = lastIP.Substring(0, lastIP.IndexOf(':'));

                        reader.Close();

                        if ((time ?? "") != "")
                            break;
                    }
                }
                catch (Exception ex)
                {
                    retval.AddMessage("NIST Error: " + ex.Message); // Add an output to the message window
                }
                System.Threading.Thread.Sleep(timeout); // This is needed or we could get an error from NIST servers
            }

            if ((time ?? "") == "")
                throw new System.Net.WebException("Could not connect to the NIST time server");

            try
            {
                // Parse the daytime string (e.g. "57757 17-05-14 17:30:45 ...") into a DateTime
                actualTime = time.Substring(time.IndexOf(' ') + 1);
                actualTime = "20" + actualTime.Substring(0, actualTime.LastIndexOf(':') + 3);
                utcDate = Convert.ToDateTime(actualTime);
            }
            catch
            {
                string err = "Error with date received: " + time;
                throw new Exception(err);
            }

            if (validIPAddress == null && lastIP != "")
            {
                // Save the valid IP address in the session variable
                e.SetVar("validIPAddress", lastIP);
            }

            return utcDate;

        }


        #endregion
       

        #region Handler Function: MyHandler
        private EventResult GetUTCTime(object sender, ExecEventArgs e)
        {
            EventResult retval = new EventResult(e);

            int timeout = e.GetArg("timeout", e.GetSetting<int>("defaultTimeout")); // Timeout to use (the value provided as input parameter, or the default setting)
            string nistServer = e.GetSetting("nistServer");

            // Implement validation for the timeout value
            if (timeout == 0)
                return retval.SetResultCode("Timeout value of '{0}' is not accepted.", timeout);

            if (e.IsArgValidation)
                return retval;

            string lastIPAddress = null;
            DateTime now = GetCurrentUTCDate(e, retval, timeout, nistServer, out lastIPAddress);

            retval.AddResultRow(now, lastIPAddress);

            return retval;
        }
        #endregion
    }
}

This code implements the following functionality: configuration settings (RegisterConfig, e.GetSetting), session variables (RegisterSessionVar, e.GetVar, e.SetVar), a handler called GetUTCTime (registered via RegisterHandler), response messages (AddMessage), result rows (AddResultRow), and method arguments (e.GetArg).

RegisterAdapter()

The RegisterAdapter() method is called automatically by Enzo Unified and sets up the adapter configuration and methods. Once this method executes, the adapter is up and running.

Configuration Settings store configuration information for a given adapter; these settings are loaded per login. Each login can have one or more configuration settings, but only one can be set as the default configuration. We will create configuration settings soon using an SQL command. In our code, we are keeping two settings: a timeout value and the URL of the NIST server load balancer. The syntax for declaring configuration settings is documented in the help file; in our example we require that both values be set, but no default value is provided. We access these values by calling the GetSetting method on the ExecEventArg variable.

Next we define a session variable called validIPAddress. Session variables are created when a user logs in, and destroyed when the user logs off. They are typically used for performance reasons: once a SQL login takes place, variables can be stored and retrieved on later calls. However, session variables do not provide performance improvements for HTTP calls or calls over Linked Server, because a new session is established every time. We get and set session variables using the GetVar and SetVar methods on the ExecEventArg variable.

Then the RegisterHandler method is called; this method creates a new handler that can listen for native SQL commands as well as HTTP requests. The method takes several parameters, including the actual method to call, the list of input parameters, and the output columns. In this case, we are creating a method called GetUTCTime, which can be called over HTTP from code, or with the EXEC and SELECT operators from SQL Server.

GetUTCTime()

This method performs the actual operation for the GetUTCTime handler. Of importance, this method provides an ExecEventArg variable that gives detailed information about the incoming request. Feel free to inspect this object; it contains information about the caller, the operation being sent and the input parameters.

This method returns an EventResult object. The EventResult object allows you to specify the rows and optional messages to be returned to the caller. To add rows to the output, simply call the AddResultRow method with a list of values for the expected columns. In our case, the RegisterHandler call dictates that two columns be returned: a datetime and a string (currentUTCTime and lastIPAddress). Although not shown in this example, you can also dynamically define the list of columns to be returned if needed.

This method also calls GetCurrentUTCDate(), which implements the logic for fetching the current time from NIST servers. The logic itself isn’t very important; however you will notice calls to GetVar and SetVar to get/set the session variable created earlier.

First Time Setup

Now that the adapter is ready to go, run it by pressing F5; the Enzo Unified Development Host will start automatically.  It is time to configure our default settings; execute the following commands:

exec my.TimeServers._configCreate 'nistsettings', 1, 1000, 'time.nist.gov'
exec my.TimeServers._configUse 'nistsettings'

The first command creates a setting called ‘nistsettings’ and sets it as the default configuration, with a 1-second timeout and time.nist.gov as the NIST address to use. Once these calls are made, you will not need to execute them again.

Testing with SQL Commands

We will test our SQL commands using SQL Server Management Studio (SSMS). Start SSMS and connect to Enzo Unified (warning: you may be tempted to connect to SQL Server; you should instead connect to Enzo Unified); enter the following information:

  • Server Name:  localhost,9550
  • Authentication: SQL Server Authentication
  • Login: sa
  • Password: password

image

Let’s first examine the built-in help of our command:

exec  my.TimeServers.getutctime help

You will see a built-in help provided listing the input parameters and output columns:

image

Let’s execute the procedure called GetUTCTime using a SQL stored procedure call; we are not passing the timeout value, so the default of 1 second will be used:

exec  my.TimeServers.getutctime

image

You can also try the SELECT command (note that the table name is a bit different, as specified in the RegisterHandler call):

SELECT * FROM my.TimeServers.utctime

Testing with REST

Before being able to run a command through REST, we first need to find the AuthToken for the ‘sa’ account. The AuthToken is unique for each account in Enzo Unified; when a command is executed, Enzo Unified looks up the AuthToken provided and executes the request in the context of the associated user. To find the AuthToken, execute this SQL command from SQL Server Management Studio (connected to Enzo Unified) and look at the authToken column.

SELECT * FROM instance.security.accounts

To test our method from a REST client, let’s use Fiddler; we simply need to issue a GET request against the my/timeservers/getutctime method. Here is the full URL for the request; the port information was specified during installation of Enzo Unified.

http://localhost:19550/my/timeservers/getutctime

And the request headers are as follows:

authToken: 701bc09f-8bed-4087-9c88-364f45fa83c0

If we want to specify a timeout value other than the default, we would add this header:

timeout: 1000

image

Once executed, you will receive a JSON response with the current UTC time as expected.
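Outside of Fiddler, the same request takes only a few lines of C#; here is a minimal sketch using System.Net.Http, reusing the authToken and port values shown above:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class EnzoRestSample
{
    // Issues a GET against the getutctime handler; the timeout header is optional
    static async Task<string> GetUtcTimeAsync()
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("authToken", "701bc09f-8bed-4087-9c88-364f45fa83c0");
            client.DefaultRequestHeaders.Add("timeout", "1000");
            return await client.GetStringAsync("http://localhost:19550/my/timeservers/getutctime");
        }
    }
}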

 

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Enzo Unified (http://www.enzounified.com/). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

Multithreading with PowerShell using RunspacePool

If you are working with PowerShell, you may want to start running certain scripts in parallel for improved performance. While doing some research on the topic, I found excellent articles discussing Runspaces and how to execute scripts dynamically. However I needed to run custom scripts already stored on disk and wanted a mechanism to specify the list of scripts to run in parallel as an input parameter. 

In short, my objectives were:

  • Provide a variable number of scripts, as input parameter, to execute in parallel
  • Execute the scripts in parallel
  • Wait for completion of scripts, displaying their output
  • Have a mechanism to obtain success/failure from individual scripts

This article provides two scripts: a “main” script, which contains the logic for executing other scripts in parallel, and a test script that simply pauses for a few seconds.

Async Script

The following script is the main script that runs other scripts in parallel using a PowerShell Runspace Pool. This script accepts an input array of individual script blocks, or script files; you might need to provide the complete path for script files. The script then creates a PowerShell instance for each script and a reference object in the $jobs collection; the BeginInvoke operation starts the script execution immediately. Last but not least, this script waits for completion of all scripts before displaying their output and result.

It is important to note that this main script expects each script being called to return a specific property called Success to determine whether the script was successful or not. Depending on the script you call, this property may not be available and as a result this main script may report false negatives. There are many ways to detect script failures, so you can adapt the proper method according to your needs. A Test script is provided further down to show you how to return a custom property.

You can call the below script like this, assuming you want to execute two scripts (script1.ps1 and script2.ps1):   ./main.ps1 @( {& "c:\myscripts\script1.ps1" }, {& "c:\myscripts\script2.ps1" })

Param(
    [String[]] $toexecute = @()
)

Write-Output ("Received " + $toexecute.Count + " script(s) to execute")

$rsp = [RunspaceFactory]::CreateRunspacePool(1, $toexecute.Count)
$rsp.Open()

$jobs = @()

# Start all scripts
Foreach($s in $toexecute) {
    $job = [Powershell]::Create().AddScript($s)
    $job.RunspacePool = $rsp
    Write-Output $("Adding script to execute... " + $job.InstanceId)
    $jobs += New-Object PSObject -Property @{
        Job = $job
        Result = $job.BeginInvoke()
    }
}

# Wait for completion
do
{
    Start-Sleep -seconds 1
    $cnt = ($jobs | Where {$_.Result.IsCompleted -ne $true}).Count
    Write-Output ("Scripts running: " + $cnt)
} while ($cnt -gt 0)

Foreach($r in $jobs) {
    Write-Output ("Result for Instance: " + $r.Job.InstanceId)
    $result = $r.Job.EndInvoke($r.Result)
   
    # Display complete output of script
    #Write-Output ($result)
               
    # We are assuming the scripts executed return an object
    # with a property called Success
    if ($result.Success) {
        Write-Output " -> Script execution completed successfully"
        # Use $result to print output of script
    }
    else {
        Write-Output " -> Script execution failed"
    }
}

 

Test Script

The script below is provided as a test script; this script waits for 5 seconds and returns an object as an output with a property called Success that the main script depends on.

Write-Output "Starting script..."

Start-Sleep -Seconds 5

$res = New-Object PSObject -Property @{
    Success = $true
}

return $res

Calling this test script twice, the output of the main script is as follows:

image

References

The above script was built by generalizing and slightly improving others’ excellent work:

Dynamic Code in Powershell, by Tome Tanasovski
https://powertoe.wordpress.com/2010/02/10/dynamic-code-in-powershell/

Example 1 of returning data back from a runspace, by Boe Prox
https://gist.github.com/proxb/803fee30f0df244fd850 

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Enzo Unified (http://www.enzounified.com/). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

Introducing the Azure Key Vault

If you need to store secrets in the Azure cloud, such as connection strings, passwords, or even keys, the Key Vault may be for you. In this post I will provide an overview of the Azure Key Vault and discuss certain implementation details to help you determine if the Key Vault is for you.

Service Overview

The Azure Key Vault is a highly secure service that allows you to store sensitive information and can even help with compliance initiatives. The Key Vault is designed to store secrets and keys, including certificates; in fact, keys stored in the Key Vault are not visible to Microsoft. Developers can manage keys for development and test purposes, while giving control to security operations for production systems. In addition to storing keys and secrets, the Key Vault can also perform cryptographic calculations, so that applications do not actually have access to the keys directly.

You should also note that all operations against the Key Vault are logged so that administrators can monitor usage of keys and secrets.

Scenario 1:  Store Secrets

Storing secrets is perhaps one of the most fundamental capabilities of the Key Vault; you can store sensitive information, and retrieve it by name at a later time. You can also ensure that the user or application accessing the secret has the proper authority to do so.

Scenario 2: Perform Cryptographic Operations

In this scenario, you can store a key in the Key Vault, and perform encryption/decryption operations using the key. This allows an organization to encrypt data using encryption keys that the organization itself never has access to. This is particularly interesting for SaaS vendors that want to store encrypted customer data without ever being able to decrypt the information themselves.

Scenario 3: Manage Certificates

Another interesting scenario is the ability to store certificates and let Azure keep an eye on their expiration.

Managing Access to the Key Vault

Storing keys and secrets in the Azure Key Vault is not too hard, and can be performed programmatically or through the Azure Management Portal. In order to use keys and secrets stored in Key Vault, users and applications must be registered in Active Directory. Unlike other services, where a simple access token is all that’s needed to access a service, the Azure Key Vault requires consumers to authenticate against Azure Active Directory first. Once authenticated, consumers can access the Key Vault and perform the operations they are allowed to perform (such as List Keys, Read Secrets…).

To give access to the Key Vault, an administrator must add an Active Directory User, or an Active Directory Application, to the list of users for the vault. Once added, each user is given a set of Access Control List (ACL) entries with rights to perform certain actions on keys and secrets; there is a separate list of ACLs for keys and secrets. As a result, an application may have read rights to secrets but not to keys.

Applications can access the Key Vault using two methods: a secret token, or a certificate. When using a secret token, the application authenticates to Active Directory using its name and the secret token (which acts as a form of password); however, you should note that the token can only be valid for up to two years. Alternatively, an application can authenticate against Active Directory using an X.509 certificate, in which case the Active Directory Application must be associated with the X.509 certificate in Azure Active Directory, and the application must find the X.509 certificate in its local certificate store (locating an X.509 certificate is usually performed by thumbprint). Here is a link to an article that discusses authenticating against Azure Active Directory using a certificate: https://blogs.msdn.microsoft.com/adrianpadilla/2016/03/31/certificate-authentication-into-azure-key-vault/.
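To make this more concrete, here is a rough sketch of certificate-based access from C#, using the Microsoft.Azure.KeyVault and ADAL (Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet packages; the thumbprint, client id, and vault URL are placeholders, and the fragment is assumed to run inside an async method:

using System.Security.Cryptography.X509Certificates;
using Microsoft.Azure.KeyVault;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

// Locate the application's X.509 certificate by thumbprint in the local store
X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadOnly);
X509Certificate2 cert = store.Certificates.Find(
    X509FindType.FindByThumbprint, "THUMBPRINT-PLACEHOLDER", false)[0];
store.Close();

// The Key Vault client authenticates against Azure AD with the certificate
var keyVault = new KeyVaultClient(async (authority, resource, scope) =>
{
    var context = new AuthenticationContext(authority);
    var assertion = new ClientAssertionCertificate("AD-APPLICATION-CLIENT-ID", cert);
    AuthenticationResult result = await context.AcquireTokenAsync(resource, assertion);
    return result.AccessToken;
});

// Read a secret by its full identifier (placeholder vault and secret names)
var secret = await keyVault.GetSecretAsync("https://myvault.vault.azure.net/secrets/mysecret");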

You should also note that at this time it is not possible to set an ACL on specific secrets or keys directly. As a result, the lowest ACL granularity is at the key store or secret store level.

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Enzo Unified (http://www.enzounified.com/). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

Accessing SalesForce Data Using SQL with Enzo Unified

In this article I will show you how to access SalesForce data, either in real-time or through a local cache, using SQL commands through the Enzo Unified data virtualization server, to simplify data access and analysis. Most customers using SalesForce will need to access their data stored in SalesForce tables to run custom reports, or to display the information in custom applications. For example, some customers may want to build a custom monitoring tool to be notified when certain data changes are detected, while others want to keep a local copy of the SalesForce data for performance reasons.

High Level Overview

You may need to access SalesForce data for a variety of reasons: reports, data synchronization, real-time data access, business orchestrations... Most organizations facing the need to extract SalesForce data are looking for a way to get real-time data access into SalesForce, or access a local copy of the data for faster processing. Enzo Unified is a data virtualization server that gives you the ability to access SalesForce data in real-time, or automatically cached locally, either through SQL or REST commands.

Enzo Unified offers three data access and synchronization methods:

  • Virtual Table
    A virtual table provides a view into a SalesForce object. The virtual table doesn’t actually store any data; it provides a simple way to get to the data through REST and/or native SQL requests.
  • Snapshot
    A snapshot is created on top of a Virtual Table. Snapshots create a local copy of the underlying SalesForce data and make the data available to both REST and SQL requests. Snapshots buffer the data locally inside Enzo Unified, which provides a significant performance boost when querying data. Snapshots can be configured to be refreshed periodically.
  • Integration
    When SalesForce data needs to be copied to an external system, such as SQL Server or SharePoint, Enzo Unified provides an Integration adapter that is designed to copy data changes to a destination. For example, with the Integration adapter, Enzo Unified can detect changes to the Account table in SalesForce and replicate them in near real-time to a SharePoint list. The Integration adapter will be covered in a future article.

For a broader overview of Enzo Unified, read this whitepaper. For a brief justification for this technology, see this article.

SalesForce Virtual Table

Let’s first create a Virtual Table in our SalesForce adapter; the virtual table is called Account1, and points to the Account table in SalesForce. Note that the adapter is already configured with my SalesForce credentials. SalesForce credentials are assigned to an Enzo login; in the example below, the credentials used to connect to my SalesForce environment are attached to the ‘sa’ account in Enzo. Because multiple configuration settings can be saved, they are named; I am showing the ‘bscdev’ configuration, and the (*) next to it means that it is the default setting for this login.

image

To create a virtual table, let’s select the Virtual Tables tab. A virtual table in SalesForce is defined by a SOQL statement, which Enzo Unified runs behind the scenes to fetch the data. Clicking on the ellipsis next to the SOQL statement allows you to edit the command, and the edit window lets you test your SOQL command. The columns of the virtual table are automatically created for you based on the command you specify.
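
For example, a virtual table over the Account object might be defined with a SOQL statement along these lines (the field list is illustrative; yours will depend on the columns you need):

SELECT Id, Name, AccountNumber, Industry FROM Account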

image image

Once the virtual table is created, retrieving data from the Account table is as simple as running this command:  SELECT * FROM SalesForce.Account1 – this executes the SOQL command against SalesForce behind the scenes, and once the data has been fetched it is returned to you.

You might wonder… where do I run this command from?  Since Enzo Unified is a data virtualization server that understands native SQL Server requests, you can use SQL Server Management Studio (SSMS) to connect to Enzo Unified directly. You can also use Excel and connect to Enzo Unified as if it were a SQL Server database, or use ADO.NET from a .NET application and declare a SqlConnection that points to Enzo Unified.

For example, here is the command executed from SSMS; the data came back live from SalesForce.

 image

And here is the data using Excel 2013, with the Connection Properties to Enzo Unified. As you can see, Excel sees the Account1 virtual table and is able to read directly from it.

image image

You can also use Visual Studio and create a SqlConnection to connect to Enzo Unified and fetch data directly using ADO.NET. For example, the following code returns up to 100 records from the Account1 virtual table and binds the result to a data grid:

SqlConnection conn = new SqlConnection("server=localhost,9550;uid=enzo_login;pwd=enzo_password");
conn.Open();
SqlCommand cmd = new SqlCommand("SELECT TOP 100 * FROM SalesForce.Account1", conn);
SqlDataReader reader = cmd.ExecuteReader();

// Load the result set into a DataTable and bind it to the grid
DataSet ds = new DataSet();
DataTable dt = new DataTable("Table1");
ds.Tables.Add(dt);
ds.Load(reader, LoadOption.PreserveChanges, ds.Tables[0]);
dataGridViewResult.DataSource = ds.Tables[0];

conn.Close();

 

Virtual Table Snapshot

As described previously, a snapshot is a local copy of the data, kept in Enzo Unified for faster retrieval. This allows you to create a simple local cache of remote data; in this case we will store the SalesForce Account table in a Snapshot called AccountCache. The Snapshot is defined on the Account1 virtual table. Using Enzo Manager, select the Account1 virtual table (in the SalesForce adapter), and select the Snapshot tab.  The Snapshot below is refreshed daily; note that you can enter a Cron schedule to refresh the Snapshot at specific intervals.

image

Once created, the Snapshot becomes accessible through Enzo Unified using an EXEC statement:  EXEC SalesForce.AsyncResult 'AccountCache'

The main difference is that the data now comes from a local cache instead of the SalesForce Account table directly. As a result, the data may be somewhat delayed; however, since the Snapshot offers a schedule for data refresh, you control how old the data can get.

image

You can also access the Snapshot data from code using ADO.NET. The following code shows you how (note the EXEC call to the AsyncResult object).

SqlConnection conn = new SqlConnection("server=localhost,9550;uid=enzo_login;pwd=enzo_password");
conn.Open();
SqlCommand cmd = new SqlCommand("EXEC SalesForce.AsyncResult 'AccountCache'", conn);
SqlDataReader reader = cmd.ExecuteReader();

// Load the cached result set into a DataTable and bind it to the grid
DataSet ds = new DataSet();
DataTable dt = new DataTable("Table1");
ds.Tables.Add(dt);
ds.Load(reader, LoadOption.PreserveChanges, ds.Tables[0]);
dataGridViewResult.DataSource = ds.Tables[0];

conn.Close();

 

Summary

This article shows you how to access SalesForce data using native SQL commands through Enzo Unified, using various tools and techniques, including Excel, SSMS and .NET code. Because Enzo Unified is a data virtualization server that understands native SQL and REST commands, anyone with the proper access rights can access SalesForce data without learning the underlying APIs. And as with most data virtualization platforms, Enzo Unified offers security options giving you the ability to control who can access which virtual tables and snapshots.

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Enzo Unified (http://www.enzounified.com/). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

Exploring Microsoft Azure DocumentDB

In this blog post, I will provide an introduction to DocumentDB by showing you how to create a collection (a collection is a container that stores your data), and how to connect to your collection to add and fetch data. DocumentDB is a newer no-sql data storage engine hosted exclusively in the Microsoft Azure cloud. Because DocumentDB stores records as JSON objects, it is a natural database engine for developers. Unlike many other offerings, it also provides key features such as automatic indexing, and server-side triggers, functions and stored procedures (written in JavaScript).

Creating a DocumentDB Database

First, let’s create a new DocumentDB database so we can start exploring this service; three things need to be created:  an account, a database, and a collection (where data is actually stored). An account can host multiple databases, and each database can host multiple collections. From your Azure portal (https://portal.azure.com), find DocumentDB in the list of available services, and create a new DocumentDB account. The Resource Group is a logical container allowing you to group, view, and manage related services. The screenshot below shows the information I have provided.

Once the account has been created, the screen changes to show a menu of options from which you can create databases. Of note, DocumentDB allows you to change the default consistency of your no-sql databases; consistency is an important concept as it can impact performance and availability – see Consistency levels in DocumentDB. We will keep the default setting. Also note an important configuration property: your keys. Locate the Keys configuration menu to reveal your access keys; note that DocumentDB allows you to manage read-only keys as well.

Select Overview at the top of this menu, click on Add Database, enter a database identifier (this is the database name; mine is called ‘testdb’), and click OK.

Once the database has been created, you will need to create a collection. Select the database to open up a new panel, and click on Add Collection. Enter the necessary information and click OK (see the information I provided below; my collection name is logdata; I also changed the Throughput to 400 to reduce the cost since this is a test collection).  At this point, we are ready to access this collection and start adding records (in proper no-sql speak, we will be adding documents).

Before jumping into the code, let’s make note of the following information since this will be needed later to connect to DocumentDB.

Configuration | Value | Comment
Database Id | testdb | The database “Id” is the name of the database we created
Collection Id | logdata | The collection “Id” is the name of the collection we created
Endpoint | https://YOUR-ACCOUNT-NAME.documents.azure.com:443/ | The URI of your DocumentDB service; use your account name
Auth Key | {Look under Keys in your DocumentDB account} | The Primary Key or Secondary Key of your DocumentDB account

 

Create a VS2015 Project

Let’s create a simple project using Visual Studio 2015 to access the DocumentDB collection. Note that the complete project and source code can be found here:  http://www.bluesyntaxconsulting.com/files/DocumentDBLogData.zip 

We will create a Console Application to perform a few simple operations. Once you have created the project, you will need to add the DocumentDB SDK.  To install the SDK, find the Microsoft Azure DocumentDB package in the NuGet Package Manager, or use the following command in the Package Manager Console (if you download the sample code, the package will automatically be downloaded when you first compile):

Install-Package Microsoft.Azure.DocumentDB

Let’s create a class that holds a single log entry in our DocumentDB collection. The class name is LogEntry. We need to have a unique identifier for every document, and it must be called Id.

    public class LogEntry
    {
        public string Id { get; set; } // Guid of log
        public DateTime DateAdded { get; set; }
        public string Message { get; set; }
        public string Category { get; set; }    // Info, Warning, Error
        public int Severity { get; set; } // Severity level, e.g. 1=Low, 2=Medium, 3=High
        public string Source { get; set; } // Application name
    }
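
For illustration, a document created from this class would be stored in the collection as JSON along these lines (the values shown are examples; DocumentDB also appends its own system properties to each document):

{
  "Id": "c1a7f5a2-4d2e-4b7a-9c1e-2f3d8e9a0b4c",
  "DateAdded": "2016-04-14T16:35:00Z",
  "Message": "General message from Console App",
  "Category": "Info",
  "Severity": 1,
  "Source": "CONSOLE APP"
}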

Then, we will create a simple Console application that does two things: it can add a new document to our collection, and it can list all the documents in the collection.  The following is the complete code for the console application; note the private variables at the top, which represent the configuration settings identified previously.

using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace DocumentDBLogData
{
    class Program
    {

        static string endpoint = "https://YOUR-ACCOUNT-NAME.documents.azure.com:443/";
        static string authKey = "YOUR-PRIMARY-OR-SECONDARY-KEY";
        static string databaseId = "testdb";
        static string collectionId = "logdata";

        static void Main(string[] args)
        {
            Console.WriteLine("STARTING DocumentDB Demo... ");
           
            while(true)
            {
                Console.Clear();
                Console.WriteLine("1: Create new message");
                Console.WriteLine("2: List all messages");
                Console.WriteLine("");

                switch(Console.ReadKey().KeyChar)
                {
                    case '1':

                        Console.WriteLine("");
                        Console.WriteLine("Adding a record to DocumentDB...");

                        AddLogInfoEntry();

                        Console.WriteLine("New record added in DocumentDB. Press ENTER to continue.");
                        Console.ReadLine();

                        break;
                    case '2':

                        Console.WriteLine("");
                        Console.WriteLine("Fetching DocumentDB records...");

                        DisplayLogInfoEntry();

                        Console.WriteLine("");
                        Console.WriteLine("Press ENTER to continue.");
                        Console.ReadLine();

                        break;
                }

            }
        }

        static void AddLogInfoEntry()
        {
            using (DocumentClient client = new DocumentClient(new Uri(endpoint), authKey))
            {
                var collection = UriFactory.CreateDocumentCollectionUri(databaseId, collectionId);
                LogEntry le = new LogEntry()
                {
                    Id = Guid.NewGuid().ToString(),
                    Category = "Info",
                    DateAdded = DateTime.UtcNow,
                    Message = "General message from Console App",
                    Severity = 1,
                    Source = "CONSOLE APP"
                };

                Document newDoc = client.CreateDocumentAsync(collection, le).GetAwaiter().GetResult();

            }
        }

        static void DisplayLogInfoEntry()
        {
            using (DocumentClient client = new DocumentClient(new Uri(endpoint), authKey))
            {
                var collection = UriFactory.CreateDocumentCollectionUri(databaseId, collectionId);

                var docs = client.CreateDocumentQuery<LogEntry>(collection).AsEnumerable();

                foreach(var doc in docs)
                {
                    string output = "{0}\t{1}\t{2}\t{3}\t{4}";
                    output = string.Format(output, doc.Id, doc.DateAdded, doc.Source, doc.Severity, doc.Message);
                    Console.WriteLine(output);
                }

                Console.WriteLine();

            }
        }

    }
}

By pressing 1, the console application connects to DocumentDB and adds a record to the logdata collection.  By pressing 2, the console application fetches all available documents and displays them on the screen. Note that if you have a large number of records, you will need to add logic to page the operation (say, 100 documents at a time), and retry logic in case the service is too busy.
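
As a starting point, here is a minimal sketch of a paged version of DisplayLogInfoEntry, reusing the endpoint, authKey, databaseId and collectionId variables from the program above; it relies on the SDK’s FeedOptions.MaxItemCount and IDocumentQuery to fetch 100 documents per round-trip (retry handling is left out):

// Additional namespace needed for AsDocumentQuery()
using Microsoft.Azure.Documents.Linq;

static void DisplayLogInfoEntryPaged()
{
    using (DocumentClient client = new DocumentClient(new Uri(endpoint), authKey))
    {
        var collection = UriFactory.CreateDocumentCollectionUri(databaseId, collectionId);

        // Ask the service for at most 100 documents per round-trip
        var query = client.CreateDocumentQuery<LogEntry>(collection,
            new FeedOptions { MaxItemCount = 100 }).AsDocumentQuery();

        while (query.HasMoreResults) // one iteration per page
        {
            var page = query.ExecuteNextAsync<LogEntry>().GetAwaiter().GetResult();
            foreach (LogEntry doc in page)
            {
                Console.WriteLine("{0}\t{1}\t{2}", doc.Id, doc.DateAdded, doc.Message);
            }
        }
    }
}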

Conclusion

This simple introduction to DocumentDB provides a quick overview of the simplicity of this service, along with a sample project for creating and accessing documents. Although DocumentDB is very easy to configure and use in code, many advanced features (not covered in this introduction) are available around performance, security and availability. For a deeper understanding of DocumentDB, please refer to the online MSDN documentation, and the QuickStart provided in the DocumentDB menu inside the Azure Portal.

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Enzo Unified (http://www.enzounified.com/). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

HANDS-ON LAB: Emulating Devices for Azure IoT Hub with SQL Server

In this post, I will explain how you can easily simulate dozens (or hundreds) of IoT devices to test your Azure IoT Hub configuration. Simulating IoT devices with dynamic data can help you test your Azure IoT Hub configuration settings, along with downstream consumers such as Stream Analytics and Power BI reports. In order to facilitate the communication between SQL Server and an Azure IoT Hub, we will use Enzo Unified (http://www.enzounified.com) which abstracts the underlying cloud APIs, allowing native SQL commands to be executed against the Azure IoT Hub. With Enzo Unified, simulating dozens of IoT devices can be done with a simple T-SQL statement.

image

Prerequisites

To emulate IoT devices, you will need a Microsoft Azure account, and have a desktop (or server) with the following technologies installed:

  • Windows 8 (or higher) with .NET 4.6 installed
  • SQL Server 2014 Express or higher
  • Enzo Unified 1.6 (please contact info@enzounified.com to obtain a download link and installation instructions)

We will use SQL Server to drive test cases (simulating devices) by running SQL scripts, and Enzo Unified as a bridge allowing the SQL scripts to send data to the Azure IoT Hub from SQL Server directly.

Configure the Microsoft Azure IoT Hub

Let's first configure our Azure IoT Hub.

image

  • From the Azure portal, click on IoT Hub (under Internet of Things); another window will show up
  • Enter a name for the service (for example: IoTEnzoTest) - this name is referred to as YOUR_IOT_HUB_NAME in the rest of this lab
  • Specify a Resource Group name, such as EnzoTest
  • You will reuse this resource group later when creating a Streaming Analytics job
  • Change the pricing tier to F1 if you can, to minimize the cost of the Hub
  • Select the East US location
  • Check the Pin to Dashboard checkbox
  • Click Create

image

When the IoT Hub has been created (this may take a few minutes), click Settings on your IoT Hub page. Under Settings, select Shared Access Policies, and select the iothubowner policy; the Shared Access Keys will be displayed.
Save the Connection String – Primary Key value (you will use the entire connection string when configuring Enzo Unified later). The connection string to the IoT Hub looks like this:

HostName=YOUR_IOT_HUB_NAME.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=YOUR_SHARED_ACCESS_KEY

Configure Enzo Unified for the Azure IoT Hub

In this section, we will configure Enzo Unified to connect to the Azure IoT Hub service using SQL Server Management Studio (SSMS). Enzo Unified will serve as the bridge between SQL Server and the Azure IoT Hub environment by allowing SQL Server to send test messages to Azure.

  • Connect to Enzo Unified using SSMS
  • Explore some of the AzureIoTHub methods you have access to by running the built-in help command:

EXEC AzureIoTHub.help

image

One of the available methods is _configCreate; let's use this method to add your IoT connection string. To learn how to use this command, run the following statement:

EXEC AzureIoTHub._configCreate help

image

Let's create a configuration called "default" (replace the IoT Hub name and connection string with your values):

EXEC AzureIoTHub._configCreate 'default', 1, 'YOUR_IOT_HUB_NAME', 'YOUR_CONNECTION_STRING'

  • You are now ready to access the hub through Enzo Unified.
    • If you need to update the configuration settings, use the _configUpdate command (same parameters); then use the _configFlush command to apply the new settings. 
    • If you create multiple configuration settings, you can use the _configUse to switch the active configuration. 
    • To list all available configuration settings, use the _configList command (examples of these commands follow below).
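
For reference, the commands below sketch how these calls might look; the _configUpdate parameter list mirrors _configCreate, and the assumption that _configUse takes the configuration name is mine:

EXEC AzureIoTHub._configUpdate 'default', 1, 'YOUR_IOT_HUB_NAME', 'YOUR_CONNECTION_STRING'
EXEC AzureIoTHub._configFlush
EXEC AzureIoTHub._configUse 'default'
EXEC AzureIoTHub._configList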

Run the following command to confirm you can connect to the Azure IoT Hub; no records will be returned at this point, since we have not yet created any devices.

EXEC AzureIoTHub.ListDevices

Each SQL Server (i.e. each device) has its own Access Key into the IoT Hub; you do not need to remember this information, as Enzo Unified works from the device name (DEVLAP03-SQL2014 in my example).

Create Virtual IoT Devices

We are now ready to add virtual IoT Devices, so that the Azure IoT Hub can accept incoming data from your SQL Server. We will create a few virtual IoT devices for this test.

  • Connect to Enzo Unified using SSMS
  • Run the following command (replace YOUR_DEVICE_NAME with a friendly name for the SQL Server; I used 'DEVLAP03-SQL2014')
  • NOTE: The backslash '\' is not a valid character for a device name; you can use the dash '-' instead to specify a SQL Server instance name

EXEC AzureIoTHub.CreateDevice 'test1'

EXEC AzureIoTHub.CreateDevice 'test2'

EXEC AzureIoTHub.CreateDevice 'test3'

EXEC AzureIoTHub.CreateDevice 'test4'

EXEC AzureIoTHub.CreateDevice 'test5'

You should now see the devices when you run the ListDevices command:

image

Let's test our new IoT Device by sending a JSON document from SSMS (through Enzo Unified):

Run the following command:

EXEC bsc.AzureIoTHub.SendData 'test1', '{"deviceId":"test1", "location":"home", "messurementValue":700, "messurementType":"darkness","localTimestamp":"2016-4-14 16:35:00"}'

You will soon see the message count go up on your Microsoft Azure IoT Hub dashboard.

Simulating a Test From Multiple Virtual Devices

At this time we are ready to send multiple test messages on behalf of virtual devices to the Azure Hub. To achieve this, we will use the SendTestData method; this method allows you to send messages (with different values to simulate actual devices) from multiple devices. To obtain help on this method, you can run this command:

EXEC AzureIoTHub.SendTestData help

The following command sends at least 12 messages, with a 100ms interval, using four virtual devices. One of the parameters of this method is the list of devices that will participate in the test. The message is built from the template provided, which supports the following functions so that every message sent contains different data:

  • #deviceid() – the name of the device
  • #pick(home,work,car) – selects one of the values randomly
  • #rnddouble(a,b) – selects a double value randomly between [a,b[
  • #utcnow() – the current time in UTC

EXEC bsc.AzureIoTHub.SendTestData 
    12,
    100,
    'test1,test2,test3,test4',
    '{"deviceId":"#deviceid()", "location":"#pick(home,work,car)", "messurementValue":#rnddouble(400.0,700.0), "messurementType":"darkness","localTimestamp":"#utcnow()"}'

The above command will generate at least 12 messages and output the messages that were actually sent to the Azure IoT Hub.

image

Saving Output to SQL Server for Analysis

Last but not least, let’s create a similar test and save the output provided by the SendTestData method to a local table in SQL Server so that it can be analyzed later.  To do this, we will call Enzo Unified through a Linked Server; to configure a Linked Server to Enzo Unified, follow the instructions provided in the Enzo Unified help.

First, connect SSMS to your local SQL Server, and create a database with a table where the data will be stored.

IF (NOT EXISTS(SELECT * FROM master..sysdatabases WHERE name = 'mytestdb'))
CREATE DATABASE mytestdb
GO

IF (NOT EXISTS(SELECT * FROM mytestdb.sys.tables WHERE name = 'iotrawresults'))
  CREATE TABLE mytestdb..iotrawresults (dateCreated datetime, id nvarchar(50), messageId nvarchar(50), data nvarchar(1024), props nvarchar(1024), durationms int)
GO

To save the output of the SendTestData method into the iotrawresults table previously created, you will run the following command:

INSERT INTO mytestdb..iotrawresults EXEC [localhost,9550].bsc.AzureIoTHub.SendTestData
    10,
    100,
    'test1,test2',
    '{"deviceId":"#deviceid()", "location":"#pick(home,work,car)", "messurementValue":#rnddouble(400.0,700.0), "messurementType":"darkness","localTimestamp":"#utcnow()"}'

We have created a simple way to simulate IoT devices and send random data to an Azure IoT Hub; to scale this test system further, you can improve this lab by adding the following items:

  • Build a SQL Server job to run the test on a schedule (a minimal sketch follows this list)
  • Build multiple jobs/tests to increase the number of devices and send a different message mix
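
As a starting point, here is a minimal sketch of a stored procedure that a SQL Server Agent job could invoke on a schedule; the procedure name is hypothetical, and the linked server name must match your own Linked Server configuration:

USE mytestdb
GO

-- Hypothetical wrapper procedure; schedule it with a SQL Server Agent job
CREATE PROCEDURE dbo.RunIoTDeviceTest
AS
BEGIN
    INSERT INTO dbo.iotrawresults
    EXEC [localhost,9550].bsc.AzureIoTHub.SendTestData
        10,
        100,
        'test1,test2,test3,test4,test5',
        '{"deviceId":"#deviceid()", "location":"#pick(home,work,car)", "messurementValue":#rnddouble(400.0,700.0), "messurementType":"darkness","localTimestamp":"#utcnow()"}'
END
GO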

Conclusion

This lab introduced you to the Azure IoT Hub, Enzo Unified and its AzureIoTHub adapter, and showed how to leverage SQL Server to create an ecosystem of virtual devices simulating data emission to the Microsoft Azure cloud.

About Herve Roggero

Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Enzo Unified (http://www.enzounified.com/). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.