Accessing WCF Services in the Cloud

 

Recently, I began work on building WCF services for a Silverlight application that will be hosted in Azure.  While I cannot discuss some of the details of the application, I can definitely talk about some of the gotchas I encountered.

 

One of the things that all developers writing WCF services for Silverlight will encounter is the dreaded cross-domain access exception.  This is the mechanism by which Silverlight guards against cross-site request forgery and other security vulnerabilities.  There are several posts out there that provide the solution to this problem, but without a lot of detail.  Basically, you have to install a clientaccesspolicy.xml file at the root of the primary Azure role that hosts your WCF services.  The format of that file is this:

<?xml version="1.0" encoding="utf-8" ?> 
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="SOAPAction">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
         <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>

Great!  But what does it all mean?  Fortunately, Microsoft has the details here (http://msdn.microsoft.com/en-us/library/cc197955(v=VS.95).aspx).  Basically, the important element is <domain uri="*"/>.  This defines which domains have access to the WCF services.  If you want only Silverlight applications hosted on your own domain to have access, simply replace the * with your domain, e.g. http://contoso.com.

 

Another problem I encountered was trust.  It just so happens that my Silverlight control hosted in Azure was attempting to access the Lync client API.  Because of this, I had to add the URL of my Azure application to my IE trusted sites list.  The same holds true for any application that attempts to initiate or access APIs hosted on the client.

 

Lastly, when I initially developed the Azure solution, I defined multiple web roles: the Silverlight application, a base web site with reporting and the WCF services.  This presented a problem: when hosting multiple roles, Azure assigns each its own endpoint and therefore its own port, the first on port 80, the second on port 81, and so on.  Working remotely, this had no impact on my development or access at all.  However, when we turned the application on for employees on the corporate network, traffic on ports 81 and 82 was blocked.

So, rather than expose the additional roles as separate Azure roles, I added them as virtual applications to the primary role with unique URIs.  This is accomplished by modifying the ServiceDefinition.csdef file in the Azure project and adding <VirtualApplication /> elements to the <Site /> element.  In doing so, the additional sites are accessed as subsites of the primary URL, all using port 80.

 

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="LyncingAzure" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="LyncingAzure.Main" vmsize="Small">
    <Sites>
      <Site name="Web">
        <VirtualApplication name="svc" physicalDirectory="F:\StlDayOfDotNet\2012\LyncingAzure\LyncingAzure.WCF\" />
        <VirtualApplication name="sl" physicalDirectory="F:\StlDayOfDotNet\2012\LyncingAzure\LyncingAzure.SL.Web\" />
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
    <Imports>
    </Imports>
  </WebRole>
</ServiceDefinition>

In the example above, my WCF services can be accessed at http://<primaryurl>:80/svc and my Silverlight application at http://<primaryurl>:80/sl.
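
For completeness, here is a minimal sketch (mine, not from the actual project) of how the Silverlight client can resolve the service address relative to wherever the .xap was served from, so the /svc virtual application is picked up automatically whether running locally or in the cloud.  MyServiceClient is a hypothetical generated proxy.

using System;
using System.ServiceModel;
using System.Windows;

// Sketch only: resolve the WCF endpoint relative to wherever the Silverlight
// .xap was downloaded from, so "/svc" maps onto the VirtualApplication above.
public static class ServiceAddressHelper
{
    public static EndpointAddress GetServiceAddress(string relativePath)
    {
        // e.g. http://<primaryurl>:80/sl/ClientBin/MyApp.xap when deployed
        Uri xapUri = Application.Current.Host.Source;
        return new EndpointAddress(new Uri(xapUri, relativePath));
    }
}

// Usage (MyServiceClient is a hypothetical proxy generated by Add Service Reference):
// var client = new MyServiceClient(new BasicHttpBinding(),
//     ServiceAddressHelper.GetServiceAddress("/svc/MyService.svc"));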

 

And, now, my application works perfectly with both my Silverlight and WCF services hosted in the cloud with an (almost) zero-touch installation on the client.

 

Ralph Wheaton
Microsoft Certified Technology Specialist
Microsoft Certified Professional Developer
Microsoft VTS-P BizTalk, .Net



Enterprise Integration: Can Companies Afford It?

Each year, my company holds a global sales conference where employees and partners from around the world come together to collaborate, share knowledge and ideas and learn about future plans.  As a member of the professional services division, I was one of several asked to make a presentation, an elevator pitch in 3 minutes or less, relating either to a success we have worked on or to our tag (that is, our primary technology focus).  Mine happens to be Enterprise Integration as it relates to Business Intelligence.  I found it rather difficult to deliver that pitch in such a short amount of time and had to pare it down.  At any rate, in just a little over 3 minutes, this is the presentation I submitted.  Here is a link to the full presentation video in WMV format.

 

Many companies today subscribe to a buy versus build mentality in an attempt to drive down costs and improve time to implementation. Sometimes this makes sense, especially as it relates to specialized software or software that performs a small number of tasks extremely well.

However, if not carefully considered or planned out, this oftentimes leads to multiple disparate systems with silos of data or multiple versions of the same data. For instance, client data (contact information, addresses, phone numbers, opportunities, sales) stored in your CRM system may not play well with Accounts Receivables. Employee data may be stored across multiple systems such as HR, Time Entry and Payroll. Other data (such as member data) may not originate internally, but be provided by multiple outside sources in multiple formats. And to top it all off, some data may have to be manually entered into multiple systems to keep it all synchronized.

When left to grow out of control like this, overall performance is lacking, stability is questionable and maintenance is frequent and costly. Worse yet, in many cases, this topology, this hodgepodge of data creates a reporting nightmare. Decision makers are forced to try to put together pieces of the puzzle attempting to find the information they need, wading through multiple systems to find what they think is the single version of the truth. More often than not, they find they are missing pieces, pieces that may be crucial to growing the business rather than closing the business.

This is where enterprise integration comes in, standardizing how data moves across applications. Master data owners are defined to establish single sources of data (for example, the CRM system owns client data). Other systems subscribe to the master data, and changes are replicated to subscribers as they are made. This can be one-way (no changes are allowed on the subscriber systems) or bi-directional, but at all times the master data owner is current and up to date. And all data, whether internal or external, moves from one place to another through the same processes and methods, leveraging the same validations, lookups and transformations enterprise wide, eliminating inconsistencies and siloed data.

Once implemented, an enterprise integration solution improves performance and stability by reducing the number of moving parts and eliminating inconsistent data. Overall maintenance costs are mitigated by reducing touch points or the number of places that require modification when a business rule is changed or another data element is added. Most importantly, however, now decision makers can easily extract and piece together the information they need to grow their business, improve customer satisfaction and so on.

So, in implementing an enterprise integration solution, companies can position themselves for the future, allowing for an easy transition to data marts, data warehousing and, ultimately, business intelligence. Along this path, companies can achieve growth in size, intelligence and complexity. Truly, the question is not whether companies can afford to implement an enterprise integration solution, but whether they can afford not to.

 

Ralph Wheaton
Microsoft Certified Technology Specialist
Microsoft Certified Professional Developer
Microsoft VTS-P BizTalk, .Net



Social Media 101A: Facebook Connect Redux

In my previous post on this particular topic, I had mentioned a problem I was having when attempting to post with a new application.

I did discover that not all applications are created equal.  When I created a second application specifically for this post, it would not work at all.  I have not yet contacted Facebook support to determine the root cause.  If you run into the issue where it appears that the application is being granted the proper permissions, but still cannot post, create another application and try it.

It turns out that there was, in fact, a larger issue to be surfaced.  Shortly after I completed this project, Facebook changed some of their APIs which then broke some of the RESTful APIs that are used by the Microsoft Facebook Developer Toolkit.  So, the following code no longer works.

// Request desired permissions.  If only updating status or content, need publish_stream
List<Enums.ExtendedPermissions> lst = new List<Enums.ExtendedPermissions>();
lst.Add(Enums.ExtendedPermissions.status_update);
session.RequiredPermissions = lst;

Instead, we must leverage some of the new Facebook Connect API methods to request permissions from the user.  There are only a couple of changes necessary.  First, remove the above code.  It will be replaced with a call to FB.Connect.showPermissionDialog.  Next, instead of reloading the browser window in the onlogin handler of the fb:login-button control, invoke a new callback function.  We will define this function in a moment.

pnlFacebookConnect.Controls.Add(
    new Literal { Text = "<fb:login-button onlogin='ToInvoke.CallBack()'></fb:login-button>" });

 

Lastly, we need to define and emit the new callback function.  Given its complexity, it is easier to read if we define it as a constant elsewhere in the code.

 

Define the callback.  Note the call to showPermissionDialog; add the necessary permissions to the comma-delimited string.

        private const string LoginCallBackScript = @"
ToInvoke = {};
FB.ensureInit(function() {
ToInvoke.CallBack=function ()
{
    FB.ensureInit(
    function()
    {
        FB.Connect.showPermissionDialog(
        'publish_stream,status_update',  
        function(perms)
        { 
           window.location.reload();
        });  
    });
}
});";

 

 

And emit the code.

ClientScript.RegisterStartupScript(this.GetType(), "facebook_init", 
    string.Format("<script type='text/javascript'>FB.init('{0}', 'xd_receiver.htm'); {1} </script>", 
    APPLICATION_ID, LoginCallBackScript));

 

With that, our application should be up and running with the proper permissions.  One of the positives about this particular method is that it always checks for the requested permissions.  If they have been granted, no foul.  If not, or if the requested permissions have changed, it will re-prompt the user for the updated permissions.  So, our application and permissions should always be in sync.

 

I’ve uploaded an updated project to Windows SkyDrive here.

 

<edit - 08/09/2011>

One of the unfortunate side effects of the new Facebook APIs is that when asking for permission, first the user is asked to confirm basic permissions and then any additional permissions.  This cannot be avoided.

</edit>

 

Ralph Wheaton
Microsoft Certified Technology Specialist
Microsoft Certified Professional Developer



St. Louis Day of .Net

Well, the presentation at Day of .Net was a success.  I had over 50 attendees at my session on System.Diagnostics, some very good questions and I hope some good things will come out of the talk.  For those of you who went (or who were unable to go), I have uploaded my slide deck and demo project.  You can get them here.  If you have any questions about the presentation or project, contact me @ rwheatonjr@gmail.com.

 

Ralph Wheaton
Microsoft Certified Technology Specialist
Microsoft Certified Professional Developer



Social Media 101A: Leveraging Facebook Connect in ASP.Net

Over the past several years social media has grown from just a few sites to a booming industry.  With the wealth of information and entertainment available on sites like YouTube, Facebook and Twitter, it is easy to see where it has come from and why it has stuck.  Little wonder why people, developers and companies have sought to leverage the connections found on social media outlets to further careers, sell products or just plain connect to the rest of the world.

Recently, I had been asked to develop an API that would funnel posts through SharePoint to a Facebook page as status updates for a major corporation.  In researching its feasibility, I found a ton of information about posting to Facebook, if I was interested in using PHP.  Since I have graduated to real development languages and tools, I have no interest in doing so.  Unfortunately, examples of posting to Facebook using .Net are few and far between and most are incomplete or confusing at best.  This post is my attempt to provide a clear and complete example of posting to Facebook from a .Net application.


Now, I must admit that unless you are working with a corporation, developing a Facebook application or perhaps using Facebook as an authentication mechanism, the biggest reason to hook up to Facebook is, if we're being honest, to say you can.  Especially when you consider it may not even be necessary: if you are simply interested in reading status updates on public pages, you can subscribe to a feed of status updates using the URL below, where <FacebookPageId> is the Facebook unique identifier for the page.  An example is the Microsoft Visual Studio page found here.

http://www.facebook.com/feeds/page.php?format=atom10&id=<FacebookPageId>
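
As a quick aside (this snippet is mine, not part of the original walkthrough), reading such a feed from .Net takes only a few lines with the built-in SyndicationFeed class; the page id below is a placeholder.

using System;
using System.ServiceModel.Syndication;  // reference System.ServiceModel.Web (or System.ServiceModel on .Net 4)
using System.Xml;

class PublicPageFeed
{
    static void Main()
    {
        // Placeholder: substitute the Facebook unique identifier of the page.
        string url = "http://www.facebook.com/feeds/page.php?format=atom10&id=<FacebookPageId>";

        using (XmlReader reader = XmlReader.Create(url))
        {
            SyndicationFeed feed = SyndicationFeed.Load(reader);
            foreach (SyndicationItem item in feed.Items)
            {
                Console.WriteLine("{0:d}  {1}", item.PublishDate, item.Title.Text);
            }
        }
    }
}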


Unfortunately, our first hurdle in posting to Facebook is that the API does not expose a means for logging in as a user or logging in a user.  I expect this is primarily an anti-hijacking measure and a way to address privacy concerns.  Instead, we must use the Facebook Connect API, which authenticates the user through a Facebook-hosted login page and then passes control back to the ASP.Net application.  To make use of this API, we first have to do a couple of things.

First, we have to define an application in Facebook.  You start here to create an application.  There are a couple of restrictions on the name; specifically, it cannot include the word “face”.  Once created, Facebook assigns an application id, an API key and a secret key.  These will be used in code later.  You will also need to configure the Facebook Connect URL for your application for cross-domain posting.  This setting can be found on the Connect tab of the Edit Settings page.  If using the default web server built into Visual Studio, this URL will be something like http://localhost:<web server port>.  All Facebook applications can be managed here.
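
The application id and secret key show up in the code later as constants; here is a rough sketch of where I keep them (the values are placeholders, not working credentials).

// Sketch of the page class used in the rest of this post; ConnectSession and
// Api come from the Facebook Developer Toolkit's Facebook.dll.
public partial class _Default : System.Web.UI.Page
{
    // Placeholders: substitute the values Facebook assigned to your application.
    private const string APPLICATION_ID = "<your application id>";
    private const string SECRET_KEY = "<your secret key>";

    private ConnectSession session;
    private Api api;

    // Page_Load and btnUpdate_Click are shown further down.
}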

I did discover that not all applications are created equal.  When I created a second application specifically for this post, it would not work at all.  I have not yet contacted Facebook support to determine the root cause.  If you run into the issue where it appears that the application is being granted the proper permissions, but still cannot post, create another application and try it.

Next, we have to download the Facebook Developer Toolkit for .Net from CodePlex here.  This is the Microsoft “developed and supported” wrapper around the Facebook REST API.  Once installed, we are ready to begin developing.

 

Create a new ASP.Net Web Application in Visual Studio and add a reference to the Facebook.dll library from the Facebook Developer Toolkit.  Add a new HTML page called xd_receiver.htm to the application and drop the following code in it.  This defines the cross domain receiver used by the Facebook Connect API to post back to your application.  We’ll reference this file later.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">  
<html xmlns="http://www.w3.org/1999/xhtml">  
<body>  
    <script src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/XdCommReceiver.js"  
        type="text/javascript"></script>  
</body>  
</html>  

 

Copy the following to Default.aspx.  I’ve purposefully excluded any references to the Facebook Connect API and am injecting all necessary code from the server side.  This is a simple form with one label, one text box and one button to update status.

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="FacebookDemo._Default" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <asp:Panel ID="pnlFacebookConnect" runat="server">
        </asp:Panel>
        <asp:Panel ID="pnlStatusUpdate" runat="server">
            <asp:Label ID="Label1" runat="server" Text="Label">Post status update to Facebook</asp:Label>
            <asp:TextBox ID="txtStatus" runat="server" Width="500px"></asp:TextBox>&nbsp;<asp:Button 
                ID="btnUpdate" runat="server" Text="Update" Width="100px" 
                onclick="btnUpdate_Click" />
        </asp:Panel>
    </div>
    </form>
</body>
</html>

Now, the magic happens.  In the Page_Load event, start wiring up the necessary javascript code to load and initialize the Facebook Connect API.

// Make sure the FB javascript is loaded and registered
if (!ClientScript.IsStartupScriptRegistered("facebook_init")) {
    // Inject feature loader javascript into page header
    var feature = new HtmlGenericControl("script");
    feature.Attributes.Add("src", "http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php");
    feature.Attributes.Add("type", "text/javascript");
    Page.Header.Controls.Add(feature);

    // Inject the facebook init into the body onload
    ClientScript.RegisterStartupScript(this.GetType(), "facebook_init", string.Format("<script type='text/javascript'>FB.init('{0}', 'xd_receiver.htm');</script>", APPLICATION_ID));
}

 

Next, create the facebook session and request the desired permissions.  There are multiple permissions that can be combined and requested, including photo_upload, publish_stream, etc.  However, we’re only interested in posting status updates so the permission status_update is sufficient.

// Create the facebook session
session = new ConnectSession(APPLICATION_ID, SECRET_KEY);

// Request desired permissions.  If only updating status or content, need publish_stream
List<Enums.ExtendedPermissions> lst = new List<Enums.ExtendedPermissions>();
lst.Add(Enums.ExtendedPermissions.status_update);
session.RequiredPermissions = lst;
session.SessionExpires = false;

 

At this point, we need to determine if the user has already logged on to Facebook (or, since it is possible to store a logged-in user’s session key, if the stored key is still valid).  If necessary, inject the Facebook login button into the page.  If the user is connected, simply create an instance of the Facebook API.

// Verify connection
if (!session.IsConnected()) {
    // User is not authenticated, display facebook connect button
    pnlFacebookConnect.Controls.Add(new Literal { Text = "<fb:login-button onlogin='window.location.reload()'></fb:login-button>" });
    pnlFacebookConnect.Controls.Add(new Literal { Text = "<br />" });
    pnlFacebookConnect.Controls.Add(new Label { Text = "Please log in to Facebook ..." });

    // And disable the update panel
    pnlStatusUpdate.Enabled = false;
} else {
    // User is authenticated, instantiate the api
    api = new Api(session);
}

 

Once authenticated, posting status updates is pretty straightforward.  In the button click event, call the Status.Set method.

if (api != null && !string.IsNullOrEmpty(txtStatus.Text)) {
    api.Status.Set(txtStatus.Text);
}

 

Navigate to your Facebook profile and you should see your status updates.  Congratulations!  Be warned, however, that some goofy relations may decide to make fun of your test statuses.  You can put them in their place by letting them know that you have a Facebook account for professional reasons, not some silly game (though I do like Castle Age and Haven!).

 

Full source (with application id and secret redacted) can be downloaded from here.

 

Ralph Wheaton
Microsoft Certified Technology Specialist
Microsoft Certified Professional Developer



Architecting Software Solutions Part II – Setting Your Limits

This is the second in a series on architecting software solutions in which I discuss items that need to be discussed, addressed, resolved, etc. prior to finalizing any system design.  Part I of this series, “Know Your Audience”, can be found here.

One of the easiest pits to fall into when architecting software solutions is attempting to design the solution to be all things to all people.  Unless the application is incredibly simple and/or has a very narrow use, it is possible to spend substantially more time designing for the edge cases of an application than for the core application itself.  I have seen instances in which an application became overly complex and difficult to maintain or, quite simply, failed, because too much emphasis was placed on extreme scenarios that could or would occur less than 0.1% of the time.  It is situations like this that make it absolutely necessary to set your limits when architecting a software solution.

 

Setting Your Limits

 

Now, as a software designer/developer, I must admit that I would prefer to design solutions that are all things to all people.  But my experience over the past 20 years has shown that, many times, trying to do so is either:

  1. not possible
  2. not desirable
  3. too costly
  4. some or all of the above

With regard to not possible, applying business rules via software is a good example, especially when the user mentions the word “except” as it applies to a specific rule.  If a business rule cannot be expressed in a logical fashion with defined inputs and repeatable results, it is not possible to build that rule into a software solution.  If an HRMS does not track the eye color of a company's employees, it is impossible to code a business rule that pays employees with hazel eyes a 3% premium on their paychecks.  I realize this is a somewhat spurious example, but I think you get the point.

As for undesirable, sometimes a “requirement” forces a specific design that is overly complex or has unexpected/unwanted side effects, such as extremely heavy web pages.  Several years ago, I was asked to develop a custom control for a .Net web order entry application that allowed for searching and selecting a medication.  The requirement given was that there were to be no postbacks during the search and selection.  So, I designed a mechanism by which all of the medications were loaded into the page on initial load as javascript objects and used client side calls to implement the functionality.  The control worked exactly as required with this one problem: the page as loaded was about 2MB in size.  Needless to say, page load performance was horrible and ultimately led to a change in the requirements.  Today, with AJAX being integrated into .Net, this requirement would not be a problem, but at the time, the accepted practices of .Net development did not include consideration of AJAX and JSON.

Lastly, sometimes requirements can be so narrowly focused or focused on the edge cases that they become too costly, either from a money perspective, a resource perspective or both.  My experience has shown that generally the less volume addressed by a requirement (edge cases), the more significant the cost.  Most application designs should target a 75-90% solution.  That is, applications should be designed to provide functionality that addresses the majority of use cases from a volume perspective.  As an example, let’s say that an application’s requirements can be broken down into 20 use cases, of which 12 provide 90% of total volume.  The remaining 8 use cases comprise only 10% of the total volume, but generally have a higher cost and increased complexity.  Any application design should target the 12 higher-volume use cases and then possibly include some of the 8 edge cases if cost and complexity can be managed.  If an edge case only provides 2% additional volume but adds significant complexity or costs 40% of the total implementation, consideration must be given to leaving it on the table.

 

Now, as in my previous post, I would caution that these guidelines are by no means absolute and, sometimes, the client wants what the client wants.  Still, by measuring requirements against complexity, desirability and cost, it is far more likely you can deliver reliable results on time and within budget.

 

Now, I plan on continuing my series on Architecting Software Solutions.  In the meantime, I will be presenting my series on Adventures in System.Diagnostics at the St. Louis Day of .Net conference this August 20-21, 2010 at the Ameristar Casino in St. Charles, MO.  You can get more information, including registration details at their website.  Hope to see you there.

 

Ralph Wheaton
Microsoft Certified Technology Specialist
Microsoft Certified Professional Developer



The Unstable Mind is back

After too long a hiatus, The Unstable Mind of a .Net Developer is back.  In fact, a lot has changed since I last posted almost a year ago.  I plan on correcting that, and over the next couple of days I will be continuing my series on Architecting Software Solutions with Part II, Setting Your Limits.  It will be good to be back.

 

In the meantime, I will be presenting my series on Adventures in System.Diagnostics at the St. Louis Day of .Net conference this August 20-21, 2010 at the Ameristar Casino in St. Charles, MO.  You can get more information, including registration details at their website.  Hope to see you there.

 

Ralph Wheaton
Microsoft Certified Technology Specialist
Microsoft Certified Professional Developer



Adventures in System.Diagnostics – The Intermission

To be honest, while I was writing the original Adventures in System.Diagnostics post, I had no intentions of turning it into a series.  Since then, however, I have given consideration to implementing it in a production environment (already written, the sequel) and also to developing custom listeners (not yet written, soon to be the threequel?).  With these last two titles, I had thought that would be the end of this topic.  It turns out, I was wrong.

Just this past week, we started seeing issues with an application in which I had used TraceSource extensively.  This particular application is long running and does a lot of work processing data within a database.  Because of this, a lot of exceptions are caught (insufficient privileges, missing tables, etc.), written to TraceSource and then subsequently ignored to be reviewed post process.  The issue we started seeing was that it would encounter one of these throwaway exceptions and then terminate execution.  The entire execution is handled in its own try..catch and we could see where there was a problem, but no exception details were given.  What was worse, when reviewing the code, the exception was occurring between a call to the TraceSource.TraceEvent method and a return statement.

 

public bool SomeMethod() {
    try {
        // Do something
    }
    catch (Exception ex) {
        // Log the message to the trace and move on
        ts1.TraceEvent(TraceEventType.Error, 0, "Log some exception");

        // <-- Error occurred between the statement above and the return below
        return false;
    }

    return true;
}

 

It turns out that the unknown exception was occurring because of the TraceSource (well, at least the TraceSource configuration).  The nature of the application requires messages be written to three different locations: a text file (verbose messages for later troubleshooting), the console (informational messages for users tracking progress) and the event log (error messages to be reviewed by network ops and DBAs).  The following is how the trace listeners are configured:

      <add name="fileTrace" type="System.Diagnostics.TextWriterTraceListener">
        <
filter type="System.Diagnostics.EventTypeFilter" initializeData="Verbose"/>
      </
add>
      <
add name="eventTrace" type="System.Diagnostics.EventLogTraceListener">
        <
filter type="System.Diagnostics.EventTypeFilter" initializeData="Error"/>
      </
add>
      <
add name="consoleTrace" type="System.Diagnostics.ConsoleTraceListener">
        <
filter type="System.Diagnostics.EventTypeFilter" initializeData="Information"/>
      </
add>
 

When you consider how the TraceSource is initialized, it makes sense that these listeners are triggered in order.  So, in the case of the message above, it would be written to the text file first, the event log second and the console last.  Herein lies the problem and the solution.  The message was being written to the text file, but not to the event log or console.  Likewise with the top-level exception message.  Now, you may get this right away, but I must admit it took us a couple of moments to catch on to the only scenario (or at least one of very few) in which the message was written successfully to the text file, but not to the event log or console, and an exception was thrown in the interim.

5, 4, 3, 2, 1 …

That’s right, the event log was full!  Uugghhh!  The Framework was throwing an exception because the TraceSource listener could not write to the event log, which was then triggering the top-level exception handler.  No details were provided because the TraceSource.TraceEvent method in the top-level exception handler was also failing, so the TraceSource.TraceData call was never executed.

At this point, the event log was cleared out and the application restarted.  This time, when the application hit the same throwaway exception, it ignored it like it was designed to do and proceeded to hum along.  Problem solved!

So, when using TraceSource and the event log trace listener, be sure to check the permissions and configuration of the Application event log on the server where your application is deployed to avoid having the same or similar issues.
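
For what it is worth, here is a small sketch (mine, not from the original application; the source name is illustrative) of the kind of pre-flight check that avoids the problem by letting the Application log overwrite old entries instead of filling up.

using System.Diagnostics;

static class EventLogPreflight
{
    // Illustrative source name; use whatever source your EventLogTraceListener writes under.
    const string Source = "MyTracingApp";

    public static void EnsureEventLogIsWritable()
    {
        if (!EventLog.SourceExists(Source))
        {
            // Creating a source requires administrative rights the first time.
            EventLog.CreateEventSource(Source, "Application");
        }

        using (var appLog = new EventLog("Application"))
        {
            // Overwrite the oldest entries rather than throwing when the log is full.
            appLog.ModifyOverflowPolicy(OverflowAction.OverwriteAsNeeded, 0);
        }
    }
}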

 

Ralph Wheaton
Microsoft Certified Technology Specialist
Microsoft Certified Professional Developer



Architecting Software Solutions Part I – Know Your Audience

This post marks the beginning of a series on architecting software solutions/designs.  Understand, this is not an attempt to explain, compare or endorse any of the many patterns and methodologies that already exist.  Nor am I proposing a new pattern or methodology.  I am attempting, however, to shed light on some items I have found over the course of my career to be significant stumbling blocks to the successful implementation and utilization of any software application.   These are items that need to be discussed, addressed, resolved, etc. prior to finalizing any system designs.

 

Know Your Audience

 

When architecting a software solution that has a user interface component, perhaps one of the most critical items is “know your audience”.  Are they “dumber than rocks” as I was told at a previous employer?  Or are they internet savvy, reasonably intelligent individuals?  Regardless of which is true, both can have a significant impact on any UI design.  If the targeted users are “dumber than rocks”, it would be necessary to create a simpler UI with fewer data entry fields and a lot more help text on the page versus a more complex UI with less help and more features for intelligent users.

With that in mind, here is a small sampling of questions that should be considered prior to designing a software solution and an idea of some of the design requirements that may apply:

  • Are the targeted users “dumber than rocks”?
    • Simpler UI
    • More point and click entry using specialized controls such as calendars, etc to avoid manual entry, mistakes
    • Fewer data entry fields.  More complex entry may be driven by wizards with extra validations
    • More space dedicated to helps explaining entry points
  • Are the targeted users internet savvy, reasonably intelligent?
    • More complex UI
    • More features such as drag and drop, tabs
    • More data entry fields
    • Less space dedicated to helps, using click to help and mouse over features instead
  • Are the targeted users high volume data entry personnel?
    • A strong keyboard interface
    • Fewer field level validations with delayed feedback
    • More code or value entry (such as entering MO instead of Missouri, 4 instead of Legal Entity)
    • Less fluff (eye candy) on the page
  • Are the targeted users low volume data entry personnel (such as HR, Benefits)?
    • A relatively strong keyboard interface
    • More field level validations with instant feedback
    • More textual entry (selecting Legal Entity or Missouri from a drop down)
    • Less fluff (eye candy) on the page
  • Are the targeted users network/infrastructure personnel or developers?
    • Can be command-line driven interface
    • More complex UI
    • Less fluff (eye candy) on the page
  • Are the targeted users local, regional, national or international?
    • Could affect uptime requirements and offline processing
    • Could require localization of labels/text displayed to the user, date/time or monetary formatting
  • Are the targeted users internal (local to the network) or external (from the internet)?
    • Could affect the way users are authenticated to the application (if at all)
    • Could affect where the application is deployed and where it can be accessed from
      • For instance, an internal application accessed from outside the local network may require a VPN connection or RSA authentication.  Particularly for web applications, this could affect how pages are accessed, external links are presented, etc

 

Keep in mind that this list is by no means comprehensive.  It should, however, sufficiently illustrate the necessity to “know your audience”, especially as it pertains to user interface design, when architecting a software solution.  If a solution’s user interface is poor or unusable by its targeted users, the solution design is unsuccessful, even if it was flawlessly executed.

Next up in the series, Setting Your Limits.

Ralph Wheaton
Microsoft Certified Technology Specialist
Microsoft Certified Professional Developer



Adventures in System.Diagnostics – The Sequel

On 09/09/09, I blogged on the System.Diagnostics namespace and specifically the TraceSource class.  I wanted to follow up that discussion with just a little more information about using TraceSource in production applications.

One of the things mentioned in my original post is that in order for Trace to function within an application, that application has to be compiled with the TRACE constant.  This will add some overhead to execution as the compiler will not be able to fully optimize the code.  It appears that TraceSource also requires the TRACE constant, though I have yet to find documentation that confirms this.  It then becomes important to decide if the benefits of using TraceSource outweigh any performance degradation.  For discussion purposes, let us assume that the benefits outweigh the detriments.
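
To make that concrete, here is a minimal sketch (the source name is illustrative); my reading is that the TraceSource trace methods are decorated with [Conditional("TRACE")], so without the constant the calls are stripped at compile time.

using System.Diagnostics;

class TraceConstantDemo
{
    // Illustrative source name; its switch and listeners are configured in app.config.
    static readonly TraceSource ts = new TraceSource("demoSource", SourceLevels.All);

    static void Main()
    {
        // This call survives compilation only when TRACE is defined, e.g.
        //   csc /define:TRACE TraceConstantDemo.cs
        // or via "Define TRACE constant" on the project's Build tab.
        ts.TraceEvent(TraceEventType.Information, 0, "TraceSource is alive");
        ts.Flush();
        ts.Close();
    }
}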

So, how do you minimize the overhead when using TraceSource?  Even if you use TraceSource, you still want to be prudent with resources.  The answer requires an understanding of how TraceSource works (simplified here for brevity):

  1. The application calls one of the trace methods (TraceEvent, TraceInformation, etc.) on the TraceSource, specifying the source level (Critical, Information, etc.) and trace message
  2. The framework determines whether to publish the trace based upon the source level and the trace level configured on the SourceSwitch associated with the TraceSource.  If the source level is less than or equal to the trace level, the trace is published
  3. Listeners subscribe to these traces and are configured for a specific destination and source level.  If traces are published with a source level less than or equal to the trace level, the listener consumes the trace and writes it to the appropriate destination

As you can see from step 2, the SourceSwitch serves as a gatekeeper (I am the keymaster, are you the gatekeeper?) of sorts.  Traces with a source level higher than the SourceSwitch are not published, reducing overhead.  You can maximize the savings by setting the trace level of the SourceSwitch to Off.  This prevents any traces at all from being published.  If you want only exceptions to be published, set the trace level to Error and so on.

    <switches>
      <clear/>
      <add name="sourceSwitch" value="Off"/>
    </switches>

BTW, adjusting the source level of the listeners (or removing them altogether) would also reduce some of the overhead incurred using TraceSource.  However, the traces would still be published, incurring that overhead, even if there are no listeners subscribed to them.  And, let us not forget the DefaultTraceListener.
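
If you prefer code to config, here is a quick sketch of trimming listeners at runtime (the source name is illustrative; the DefaultTraceListener is registered under the name "Default").

using System.Diagnostics;

class ListenerTrimming
{
    static void Main()
    {
        // Illustrative source name; switch level set in code for brevity.
        var ts = new TraceSource("demoSource", SourceLevels.Error);

        // Drop the DefaultTraceListener and keep a single console listener.
        // The publish decision is still made by the SourceSwitch first.
        ts.Listeners.Remove("Default");
        ts.Listeners.Add(new ConsoleTraceListener());

        ts.TraceEvent(TraceEventType.Error, 0, "Published: Error passes the switch");
        ts.TraceEvent(TraceEventType.Verbose, 0, "Not published: Verbose is filtered out");
        ts.Flush();
    }
}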

Finally, by leveraging the SourceSwitch this way, we can deploy trace-enabled applications to production without fear that performance will be seriously degraded.  Furthermore, when the need arises, tracing can be turned on with one simple change in the application’s .config file.  And it is all built in to the .Net Framework.  Cool.

Ralph Wheaton
Microsoft Certified Technology Specialist
Microsoft Certified Professional Developer