I am moderating all of my comments, so you are wasting your time and mine by spamming my blog.

Also, if you do bother to spam my blog, at least feign something nice about my posts, not like this guy:

"This article could have been a little more detailed, because there are a lof of things to say about it… but this is good for a start... THANKS..."

So in response to some comments on my original post on this topic, I started playing around with some other ideas.  I figured it would be cleaner to post code here rather than in the comments.

How many variations on the theme can we really have?

As suggested in the comments on the previous post:

public static string Join1<T>( this IEnumerable<T> items, string delimiter, Func<T,string> converter )
{
    return string.Join( delimiter,
                        items
                            .Select( converter )
                            .Where( s => !string.IsNullOrEmpty( s ) )
                            .ToArray() );
}

Using Aggregate Linq extension:

public static string Join2<T>( this IEnumerable<T> items, string delimiter, Func<T, string> converter )
{
    return items
        .Aggregate( string.Empty,
                    ( agg, next ) =>
                        {
                            var sNext = converter( next );

                            if ( string.IsNullOrEmpty( sNext ) )
                                return agg;

                            // insert the delimiter only between items, not before the first
                            return string.IsNullOrEmpty( agg )
                                       ? sNext
                                       : agg + delimiter + sNext;
                        } );
}

Aggregate again:

public static string Join3<T>( this IEnumerable<T> items, string delimiter, Func<T, string> converter )
{
    // seed with an empty string so an empty sequence (or an empty first item) is safe
    return items
        .Select( converter )
        .Aggregate( string.Empty,
                    ( agg, sNext ) => string.IsNullOrEmpty( sNext )
                                          ? agg
                                          : string.IsNullOrEmpty( agg )
                                                ? sNext
                                                : agg + delimiter + sNext );
}

And again:

public static string Join4<T>( this IEnumerable<T> items, string delimiter, Func<T, string> converter )
{
    return items
        .Select( converter )
        .Where( s => !string.IsNullOrEmpty( s ) )
        .DefaultIfEmpty( string.Empty )   // guard against an empty sequence
        .Aggregate( ( agg, next ) => agg + delimiter + next );
}

In the comments on my previous post, there was a decent discussion about the performance of string aggregation, and a suggestion to use StringBuilder instead.  I’m not too concerned about it, since I’m not dealing with large data sets.
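For completeness, a StringBuilder-based variant might look something like the sketch below.  Join5 is my name for it, and the shape is just my guess at what the commenters had in mind:

```csharp
using System;
using System.Collections.Generic;
using System.Text;

public static class JoinExtensions
{
    public static string Join5<T>( this IEnumerable<T> items, string delimiter, Func<T, string> converter )
    {
        var sb = new StringBuilder();
        foreach ( var item in items )
        {
            var s = converter( item );
            if ( string.IsNullOrEmpty( s ) )
                continue;                 // skip empty entries, like the Where() variants
            if ( sb.Length > 0 )
                sb.Append( delimiter );   // delimiter only between items, never leading
            sb.Append( s );
        }
        return sb.ToString();
    }
}
```

So new[] { "a", "", "b" }.Join5( ", ", s => s ) yields "a, b", the same result as the variants above, just without the intermediate string allocations.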


I’ve had a lot of feedback on my Oracle Update with Join post.  The most common problem people have encountered with this approach is the dreaded “SQL Error: ORA-01779: cannot modify a column which maps to a non key-preserved table”.  Tom Kyte has a detailed explanation of this at Ask Tom.  However, I thought I would take a moment to address my example scenario.

There are actually a couple of ways to update based on data joined from multiple tables.  The first is outlined in my first post.  But let’s dig a little deeper into that approach.  The “key-preserved” problem comes from joining to a table without some sort of unique lookup into that table, so the result of the join cannot guarantee zero or one entries for each row I want to update.

I’m going to add a little to my test scenario.  Instead of setting the bonus to a constant value, I am going to set it to a value from a bonus rates table.  Let’s suppose we have:

drop table employees cascade constraints;
drop table employee_bonus cascade constraints;
drop table bonus_rates cascade constraints;

create table employees
( employee_id int primary key,
  bonus_eligible char(1),
  rate_id int
)
/

create table employee_bonus
( bonus_id int primary key,
  employee_id int,
  bonus_date date,
  bonus number
)
/

create table bonus_rates
( rate_id int primary key,
  min_bonus number,
  max_bonus number
)
/

insert into employees values ( 100, 'N', 101 );
insert into employees values ( 200, 'Y', 102 );
insert into employee_bonus values ( 101, 100, '15-DEC-09', 0 );
insert into employee_bonus values ( 102, 200, '15-DEC-09', 2000 );
insert into bonus_rates values ( 101, 0, 0 );
insert into bonus_rates values ( 102, 500, 2000 );
commit;

Now let’s update the employee_bonus table by copying the minimum bonus from the bonus_rates table:

update (select b.bonus as bonus,
               br.min_bonus as new_bonus
          from employee_bonus b
         inner join employees e on b.employee_id = e.employee_id
         inner join bonus_rates br on e.rate_id = br.rate_id
         where e.bonus_eligible = 'Y' ) t
set t.bonus = t.new_bonus

The query runs just fine and updates the employee_bonus record for the “eligible” employee.  It works because, to update the employee_bonus table, I am looking up the employee info via a unique index (the primary key in this case), so Oracle is certain that only one employee record will be returned for each employee_bonus record.

Just for grins, here is the other approach for updating with a join.  Keep in mind, though, that the approach above should be your first try.

update employee_bonus b
   set b.bonus = (select br.min_bonus
                    from employees e
                   inner join bonus_rates br on e.rate_id = br.rate_id
                   where b.employee_id = e.employee_id
                     and e.bonus_eligible = 'Y' )
where exists (select e.employee_id
                from employees e
               where b.employee_id = e.employee_id
                 and e.bonus_eligible = 'Y' )

By the way, we are using a correlated subquery in this case.  In a correlated subquery, any join to the outer table must be done in the where clause, not within the “inner join … on” syntax.  Otherwise you will get an “ORA-00904 … invalid identifier” error.

Okay, so that’s it for a scenario that works fine because the “lookup” table (employees) is key-preserved.  So what about a scenario where the lookup table is not key-preserved?  Let’s suppose the employees table has a bonus field that grabs the bonus from the employee_bonus table.  (Please keep in mind that this whole scenario is totally contrived, so I may be doing some things here that I wouldn’t normally do.)

First, we need to add the bonus column:

drop table employees cascade constraints;
create table employees
( employee_id int primary key,
  bonus_eligible char(1),
  rate_id int,
  bonus number
)
/

insert into employees values ( 100, 'N', 101, null );
insert into employees values ( 200, 'Y', 102, null );

commit;

Now, let’s try to update the employees.bonus from the employee_bonus table:

update (select e.bonus,
               b.bonus as new_bonus
          from employees e
         inner join employee_bonus b on b.employee_id = e.employee_id) t
   set t.bonus = new_bonus

Oops, that didn’t work:

[image: SQL Error: ORA-01779: cannot modify a column which maps to a non key-preserved table]

Again, the problem is that there is no unique constraint on the lookup table (in this case, employee_bonus) for the key we are joining on (employee_id).  We could resolve the error by adding a unique constraint on the employee_id column.  However, there could be any number of reasons why we can’t.  Perhaps it is a “legacy” table/database that we have no control over.  Perhaps there really is no guarantee that there will be only one entry per employee; the bonus_date certainly seems to imply that.

Let’s take a look at the second approach:

update employees e
   set e.bonus = (select b.bonus
                    from employee_bonus b
                   where b.employee_id = e.employee_id)
where exists (select b.employee_id
                from employee_bonus b
               where b.employee_id = e.employee_id)

Actually, this approach works fine with our given data.  However, if we had more data and an employee had more than one bonus, the query would fail.  Let’s see…

insert into employee_bonus values ( 103, 200, '15-DEC-08', 1000 );

commit;

and now if we run the same update statement:

[image: the resulting error message]

I believe that older versions of Oracle would not produce this error; they would simply update the employee record twice, yielding unpredictable results.  I’m using 10g in my tests.

Let’s review what we’ve seen so far.  Basically, no matter how I cut it, the update I am attempting is flawed.  I’m trying to update one table with data from another that is not guaranteed to be unique.  Maybe we “know” it always will be unique, but it would be best to go ahead and add the necessary constraint.  In my scenario, there is something obviously wrong with my logic.  Perhaps I was trying to get the total bonus or the max bonus; then updating the employees table might make more sense.  But even updating from an aggregate subquery has its quirks.

Another scenario you may encounter is updating based on a join to a view.  It is highly unlikely that you will be able to “guarantee” (from Oracle’s perspective) uniqueness from the view.  The second approach is probably your only option in this case.

Anyway, I hope that I’ve gone into enough detail to help those who have been having trouble implementing the approach from my original post.  I’ll update this one if other common scenarios pop up.

Happy updating.


Okay, so recently I was working on a new suggestion for my client.  It was a fairly simple request: create a snapshot of data to be used in monthly calculations, with the option to regenerate the snapshot at a later time.  The snapshot was easy.  I thought regenerating the snapshot would be easy too, but somehow I found a way to make it hard.

The first mistake I made was not writing a test first.  Really this had a lot to do with laziness.  The data that I am capturing is only available via a view into a proprietary database.  Somehow, I needed to figure out how to modify the data behind the view.  It seemed really hard, so I skipped it.

That decision came back to bite me.  I submitted the changes to the customer, and soon they reported that the changes weren’t working.  Perhaps there was a little more than laziness here (perhaps arrogance).  I had manually tested my changes, but I didn’t cover every aspect.  As it turns out, the snapshot was being updated correctly, but the monthly calculations weren’t.

So, now that I had a feature returned as a failure, I decided I’d better write that test.  In fact, following TDD principles, I knew I should write a test that would fail due to the reported defect.  I started on the test, hit a roadblock, and was about to give up on it again.  I mean, this testing stuff is hard!

I chatted with a coworker, asking his advice on how to test the data change behind the view.  He provided a simple and elegant solution.  We set up a test data script that creates a “test” table duplicating the structure of the view, then simply redirected the synonym from the view to the test table.  Now we essentially have a fake view.  And since it is a table, we can manipulate the data to our heart’s content.

From this point I was able to continue writing my test.  Everything looked good.  I was sure to test all of the aspects of the new requirements that I could think of.  When I was done, I ran the tests and verified the defect.

Finally, I could fix the problem.  I figured it out and the fix was fairly easy.  Well, not so fast: my test was still failing!  I spent hours on the issue.  I knew that the fix was correct.  I tried all kinds of debugging attempts.  I even changed the code to force it to be wrong for a different reason.  Everything seemed to be fine, except my test was still failing.  I was beginning to think that there was a bug in the test itself.

As it turns out, there was more than one problem with the original code.  The second issue was that I had an update statement that performed a join to the snapshot data, but was missing a critical condition in the join.  To uniquely identify the snapshot data, I needed to join to two fields and I forgot one.  So, the update was actually executing multiple times and the last time wasn’t the one I was expecting.

Had I persisted in my laziness and simply added the fix and sent it to the customer, I’d be embarrassed yet again.  But this time, the test saved me.  How many times do I have to learn this lesson?


Recently, Ayende Rahien (Oren Eini) blogged about Planning for Rhino Mocks 4.0.  He’s calling for all of us to contribute our ideas on how to improve Rhino Mocks.  He set up a forum where we can submit our ideas and vote on them as well.

This led me to review some of my more complex tests involving Rhino Mocks (I’m on version 3.6).  Were there mocking difficulties that I could propose a solution to?  Let’s find out…

A frequent pattern in my tests is verifying that my controllers pull properties from the views correctly.  The controller would construct a new entity object, providing the values from various view properties.  The entity object then would be passed to the data provider to save the entity.

The first attempt to test that a particular property was passed through correctly went something like this:

_model.AssertWasCalled( m => m.Add<Job>( null ),
                        o => o.Constraints(
                                 Is.Matching<Job>( j => j.Account.Id == _accountId ) ) );

Fairly straightforward, but the failure message doesn’t provide much information:

failed: Rhino.Mocks.Exceptions.ExpectationViolationException : IMyModel.Add<Job>(Predicate (TestClass.<Test>b__12(obj);)); Expected #1, Actual #0.

Marginally better is the use of the Arg helper class:

var arg = Arg<Job>.Matches( j => j.Account.Id == _accountId );
_model.AssertWasCalled( m => m.Add<Job>( arg ) );

…with this failure message:

failed: Rhino.Mocks.Exceptions.ExpectationViolationException : IMyModel.Add<Job>(j => (j.Account.Id = (value(TestClass)._accountId ))); Expected #1, Actual #0.

This gave me a better idea as to the check that was failing (honestly that is what my test method name is for), but without helpful details.

So I changed the original approach slightly by adding an assertion within the constraint delegate, like so:

_model.AssertWasCalled( m => m.Add<Job>( null ),
                        o => o.Constraints(
                                 Is.Matching<Job>( j =>
                                     {
                                         j.Account.Id.ShouldEqual( _accountId );
                                         return true;
                                     } ) ) );

Now I am using an explicit “ShouldEqual”, and I get:

failed:
  Expected: 143
  But was:  142

But, that “return true” is just ugly, and there’s a lot going on just to test that the account id was passed through correctly.

Then I discovered the Rhino Mock extension method GetArgumentsForCallsMadeOn:

var arg = (Job)(_model.GetArgumentsForCallsMadeOn( m => m.Add<Job>( null ) )[0][0]);
arg.Account.Id.ShouldEqual( _accountId );

After all, I didn’t really want to assert that the Add method was called.  I wanted to assert that the method was called with the correct parameters, but with each parameter under a different test:

[Specification]
public void the_job_should_be_added_to_the_queue()
{
    _model.AssertWasCalled( m => m.Add<Job>( Arg<Job>.Is.Anything ) );
}

[Specification]
public void the_selected_account_id_should_be_used()
{
    var jobArg = (Job)(_model.GetArgumentsForCallsMadeOn( m => m.Add<Job>( null ) )[0][0]);
    jobArg.Account.Id.ShouldEqual( _accountId );
}

[Specification]
public void the_selected_from_date_should_be_used()
{
    var jobArg = (Job)(_model.GetArgumentsForCallsMadeOn( m => m.Add<Job>( null ) )[0][0]);
    jobArg.FromDate.ShouldEqual( _fromDate );
}

[Specification]
public void the_selected_to_date_should_be_used()
{
    var jobArg = (Job)(_model.GetArgumentsForCallsMadeOn( m => m.Add<Job>( null ) )[0][0]);
    jobArg.ToDate.ShouldEqual( _toDate );
}

[Specification]
public void the_selected_line_items_should_be_used()
{
    var jobArg = (Job)(_model.GetArgumentsForCallsMadeOn( m => m.Add<Job>( null ) )[0][0]);
    jobArg.LineItems.Select( li => li.LineItem.Id ).ShouldHaveSameElements( _lineItems );
}

That’s good enough for me.  I guess I will have to search for some other ideas to improve Rhino Mocks.

A few weeks ago, I blogged about building an alternative to Dynamic Data.  I have to say, it’s pretty sweet.  I added a new entity type yesterday, including a couple of minor overrides, and I was done.

Here are the steps I had to take (testing left out for clarity):

  • Create the entity:

    public class ExcludedCounterparty : IAuditedWithId
    {
        public virtual int Id { get; set; }
        public virtual Counterparty Counterparty { get; set; }
        public virtual string CreatedByUser { get; set; }
        public virtual DateTime? CreatedOnDate { get; set; }
        public virtual string ModifiedByUser { get; set; }
        public virtual DateTime? ModifiedOnDate { get; set; }
    }

  • Override the Fluent NHibernate conventions:

    public void Override( AutoMap<ExcludedCounterparty> mapping )
    {
        mapping.References( ec => ec.Counterparty, "FK_CounterpartyId" );
    }

  • Override the UI conventions:

    public override void Override()
    {
        Config.AllowEdit = false;
        Config.AllowDelete = true;
        Field( ec => ec.ModifiedByUser ).ShouldDisplay = false;
        Field( ec => ec.ModifiedOnDate ).ShouldDisplay = false;
    }

  • Add the menu configuration:

    <siteMapNode url="~/../ManageEntities.aspx?type=ExcludedCounterparty" title="Counterparty Exclusions" roles="…"></siteMapNode>

  • Update security
  • Done.

Notice that nowhere did I say that I had to create an aspx page or ascx control.  How sweet is that?  Also, both Fluent NHibernate and my UI automation automatically pick up the new entity based on the assembly configuration.  It’s days like this that make software development fun.

A couple of weeks ago I was searching for an alternative to Dynamic Data.  My motivation was simple: integrating Dynamic Data with Oracle and/or NHibernate would be challenging.  I was amazed at how little I could find that would solve this problem.  Surely Dynamic Data wasn’t the first or only option.  Eventually, I gave up and worked with a coworker to develop our own alternative.

Fresh off a Fluent NHibernate implementation, we decided to take the convention-over-configuration approach.  The Fluent interface worked very well for us, in particular the auto-mapping feature, and that is what we wanted to accomplish with our UI automation.

Configuration Points:
First we had to identify the configuration points: things like label and header text, visibility, formatting, and choice of editor, among others.  We introduced two basic configuration objects, one for the entity and one for the fields (properties).  Our goal was a simple CRUD user interface, completely automated by reading information from these configuration objects.  Everything from subtitles, to actions, to data grids would be automated.
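To give a feel for it, the two configuration objects might look roughly like this; the class and property names here are illustrative sketches, not our actual framework’s API:

```csharp
// Rough sketch of the entity-level and field-level configuration objects.
public class EntityConfig
{
    public string Subtitle { get; set; }        // e.g. "Book Stores"
    public string SingularLabel { get; set; }   // e.g. "Book Store", used by actions
    public bool AllowEdit { get; set; }
    public bool AllowDelete { get; set; }
}

public class FieldConfig
{
    public string LabelText { get; set; }       // data entry label
    public string HeaderText { get; set; }      // grid column header
    public bool ShouldDisplay { get; set; }
    public string Format { get; set; }          // display format string
}
```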

Conventions:
To avoid wasting time configuring individual entity classes, we decided to “auto-map” our UI.  We started off with simple conventions.  For example, deriving the subtitle from the class name.  The subtitle for “BookStore” would be “Book Stores” by convention.  We handled the singular case as well to support our actions, like “Add Book Store” or “Edit Book Store”.  Also, fields would follow a similar convention for labels and headers.

Eventually as our needs grew, we defined more conventions.  The nice thing about this approach is that we are following the Open/Closed Principle.  If we need something new, we simply add another convention.
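As a sketch, the naming convention described above could be implemented with something as simple as this (the class and method names are mine, not the framework’s):

```csharp
using System.Text.RegularExpressions;

public static class UiConventions
{
    // "BookStore" -> "Book Store": insert a space before each interior capital.
    public static string Label( string className )
    {
        return Regex.Replace( className, "(?<=[a-z])([A-Z])", " $1" );
    }

    // Naive pluralization; irregular cases ("Party" -> "Parties") would be
    // handled by an override rather than by the convention itself.
    public static string PluralLabel( string className )
    {
        return Label( className ) + "s";
    }
}
```

With that, Label( "BookStore" ) gives "Book Store" and PluralLabel( "BookStore" ) gives "Book Stores", which can then feed subtitles and action captions like "Add Book Store".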


Overrides:
Clearly, there will be exceptions to these conventions.  We took another page out of the Fluent NHibernate book and implemented an override model.  We simply allow implementations of IOverride<T> that provide an opportunity to change the auto-mapping behavior.  For example, IOverride<Party> might include “config.PluralLabel = "Parties"”.  When the automation service is initialized, we register the assembly containing our overrides.


Consumption:
We created a few automation-aware web controls that bring our entities to life.  Really, all we needed was a custom grid view and a data entry table.  Presenting the data in the grid view was quite simple, since grid views already support simple data binding.  Data entry turned out not to be so difficult either; we simply kept track of a list of properties and values.  Then it was a matter of getting values from the entities with reflection, and later setting those values before persisting the entity.
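The reflection step can be sketched roughly as follows; the helper names and SampleEntity are hypothetical, just to illustrate the get-then-set round trip:

```csharp
using System;
using System.Reflection;

public class SampleEntity          // stand-in for a mapped entity
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class EntityReflection
{
    // Read a property value off an entity for display.
    public static object GetValue( object entity, string propertyName )
    {
        return entity.GetType().GetProperty( propertyName ).GetValue( entity, null );
    }

    // Write an edited value back before persisting, converting the posted
    // value (usually a string) to the property's actual type.
    public static void SetValue( object entity, string propertyName, object value )
    {
        PropertyInfo prop = entity.GetType().GetProperty( propertyName );
        Type target = Nullable.GetUnderlyingType( prop.PropertyType ) ?? prop.PropertyType;
        object converted = value == null ? null : Convert.ChangeType( value, target );
        prop.SetValue( entity, converted, null );
    }
}
```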

Of course, the fact that we are using NHibernate, with methods like Save<T>( object ), helped a lot with generic CRUD.


Cost Effectiveness:
Many will question, “Why reinvent the wheel?”  Others will ask, “Have you really saved that much development time by building this framework?” 

First, I felt that Dynamic Data would not meet our needs, and I searched long and hard for an alternative and couldn’t find one.  So I don’t feel like I am reinventing the wheel here.  Perhaps there are NHibernate integrations now available for Dynamic Data, but there weren’t at the time.  And even if there were, being an infant, Dynamic Data would likely have had problems that would have taken too long to resolve.  With our in-house solution, we built only what we needed.  And with the tests in place, we feel confident that we can make a change quickly if necessary.

Second, we have already experienced productivity gains more than once since adding the UI automation.  After defining a new entity, a simple one-line override, and a menu item, we were able to perform all CRUD on that entity because of the automation on both the UI side and the data side.  We didn’t have to write a single additional line of ASP.Net code.  In fact, my coworker wrote a prototype that takes an existing Java application (not web) and ports it to our web interface.  He basically plugged the new web application into a Java web service and the rest was cake.  When my boss saw the prototype, he about fell out of his chair.


[image: blog visit statistics]  [Oct ’09 Update: web views have approximately doubled since May.]

I was looking through my blog history to see which posts were visited the most.  To give you a point of reference, the next closest post has around 3,000 visits.  I’ve had more comments on “Oracle Update with Join” than on any other post as well.

Ironically, I’ve been using NHibernate more heavily and these types of Oracle statements/queries hold less value in my daily activities.


It’s a little hard to believe that I haven’t posted a blog in over a month.  Where have I been?  Frankly, doing my job and trying to keep up with my personal life at the same time.  I’ve been hesitant to include personal jabbering in my mostly technical blog.  But, I want to express what is most important to me.  I hope that most of you have similar priorities.

At the beginning of the year, the church I attend lost its music director.  While we were looking for a replacement, I filled in as temporary director.  Suddenly, I had added a part-time job to my already busy schedule.  For Lent, I chose to participate in a bible study each Monday evening.  Also, I am involved with the Pastoral Council, and we have recently been working very hard to prepare for discernment of new members.

Finally, I ran completely out of one of our most precious resources: time.  I had been singing with the Archdiocesan Choir for special liturgies throughout the year.  I had also volunteered to direct music for a men’s retreat at our church.  When the music director left our church, I became so overwhelmed that I could not fulfill my obligations to both the Archdiocesan Choir and the men’s retreat.

All the while, I was trying to ensure my face was familiar to my family.  A few years ago, I returned from a business trip, at which time my younger daughter looked at me very inquisitively.  The look on her face is something I will never forget.  It was the look of “who are you?”  I knew then that I needed to find a different job.

I’ve held two jobs since then.  Both have been great (so far).  I have been able to spend more time with my family.  I’ve been able to meet my daughters’ teachers, and attend other school and church functions.  I even directed the children’s choir (at church) for a month or so. 

One of the most rewarding things I’ve been able to do is go on daddy-daughter walks weekly, with my girls (individually).  It’s special time for them and for me.  Unfortunately, the events over the last couple of months put a strain on that time together, and I missed a few weeks.

I couldn’t have a better wife.  She is the most patient person I know.  And she is always looking out for me and my health.  Without her support, I suspect the stress would have overwhelmed me.  It sounds cliché, but she truly does make me a better person, and makes me want to be a better person.

On top of all of this, at work, I completed one project in late February, and started another at the same time.  The last project was pretty much a solo endeavor.  I am pleased that I’ve got a teammate for the new one.  We’ve been able to accomplish things in the last couple of months that I had put off on the previous project.  In the next few weeks I hope to share some of the things we have been working on. 

We recently incorporated a more BDD style to our testing, using SpecUnit.  Also, we are now using StructureMap to help manage dependencies.  We are using NHibernate (including the Fluent interface) for the first time on our new project.  We started building our own brand of UI automation/templates for CRUD (Dynamic Data just didn’t cut it for us).  Finally, we decided to incorporate the Enterprise Library Validation Block as well.  We brought all of this together and still hit our deadline for the first milestone of the project.

The order in which I covered things in this blog was very purposeful.  What should be clear is that this blog is not one of my top priorities.  In fact, my job and career are, at best, third on my list.  So the question is, how do you keep it all together?  Can you give each aspect of your life adequate attention?  I think the key is to give proper attention at the proper time.  My wife humors me when I talk shop at home, but I know she appreciates it much more when I leave work at the office.  Similarly, taking breaks at work to talk about your personal life is fine, just don’t overdo it.

Live Better, Code Better.

About a year ago, I started to evaluate ASP.Net MVC, but decided to put it aside because it was still in heavy-duty development.  With the official release a couple of weeks ago, I decided to check it out once again.

MVC has a lot of things to offer, but there are a few critical things that we couldn’t overlook.  We have a few web applications that all have quite a lot invested in server controls (both third-party and developed in-house). 

In ASP.Net WebForms, for something as simple as a GridView with an ObjectDataSource, there is a lot handled behind the scenes.  The time it would take to mimic the necessary behavior in MVC appears to be unmanageable at this point.  I’ve done my share of HTML, and am comfortable doing it again, but it certainly feels like a step backward.  We would have to write a lot of HTML and JavaScript (with the help of jQuery) to accomplish what takes just a few lines with server controls.

Another issue that we cannot overcome is the fact that WebParts are basically server controls and are mostly incompatible with MVC.  One of our applications is highly dependent on WebParts.  Again, there is a lot of work handled for us by simply leveraging the WebPartManager.

So, it was easy to rule out MVC in regard to existing applications.  But what about something new?  Well, for all the same reasons, we’ve decided to stick with ASP.Net WebForms.  Basically, we have invested a lot in server controls, like menus, tree views, grid views, and in-house controls that promote a common look and feel between our applications.  Moving to MVC would be no small task.

There is one place where I am considering MVC within our existing framework: I’m wondering if we can leverage MVC to better handle our Ajax needs.  On a couple of occasions, we are using WebForms with no markup that simply return XML data.  We also use Page Methods in a couple of places.  It seems like MVC could handle these types of requests more cleanly.

Perhaps in a few months the community will develop some slick MVC controls that save a lot of work.  Unfortunately, I don’t have the time (money) to build these controls myself.

Anyway, if I am totally off the mark, please someone pick me up and set me straight.