November 2009 Entries
Differences between using the ‘Convert’ class vs. casting (Part 1 of 2)

In the previous post, "Why does SubSonic's SimpleRepository Add&lt;T&gt; Method return a decimal instead of an int?", I talked about some odd behavior regarding the return value of the ‘Add&lt;T&gt;’ method in SubSonic’s SimpleRepository. In a nutshell, I expected the object that this method returned to have an underlying type of ‘int’, since the type it was dealing with had an int as the primary key / identity field. Instead, the method was returning an object with an underlying type of ‘decimal’ and my cast to an int was failing.

Long story short, I determined one course of action would be to use Convert.ToInt32() on the return value to safely convert it to an int. This raised a question: why did Convert.ToInt32() work when a simple cast wouldn’t?

Let’s boil it down to some simple examples. This code compiles and runs with no errors:

decimal decNumber = 2.2M;
int intNumber = (int)decNumber;
Console.WriteLine("intNumber = {0}", intNumber);

The ‘WriteLine’ call will show that the ‘intNumber’ variable equals 2. So it’s OK to cast from a decimal to an int, even if it results in a loss of precision. The above code, however, doesn’t accurately reproduce the scenario we were having with the ‘Add&lt;T&gt;’ method. That method doesn’t return a decimal; it returns an object with an underlying type of decimal. The following code more accurately shows the cast that I was originally trying to do:

decimal decNumber = 2.2M;
object number = decNumber;
int intNumber = (int)number;
Console.WriteLine("intNumber = {0}", intNumber);

This code compiles fine, but will result in an ‘InvalidCastException’ at runtime. So if directly casting from a decimal to an int is allowed (as illustrated in the first snippet), why can’t I cast an object with an underlying type of decimal to an int?

The answer can be found in the C# Language Specification, section 4.3:

For an unboxing conversion to a given non-nullable-value-type to succeed at run-time, the value of the source operand must be a reference to a boxed value of that non-nullable-value-type. If the source operand is null, a System.NullReferenceException is thrown. If the source operand is a reference to an incompatible object, a System.InvalidCastException is thrown.

We’re only allowed to unbox an object to a variable of the same type that was originally boxed. That makes sense, so how do we get the decimal in an object’s clothing returned by ‘Add<T>’ into an integer variable?
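If you already know what type was boxed, one workaround is to unbox back to a decimal first and then cast the result to an int. A minimal sketch of that two-step approach:

decimal decNumber = 2.2M;
object number = decNumber;   // boxes the decimal

// Unbox to the original type first, then do the numeric conversion.
int intNumber = (int)(decimal)number;
Console.WriteLine("intNumber = {0}", intNumber);   // intNumber = 2

Of course, that only works because we happen to know the underlying type is a decimal.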

If we go back to the second code snippet and use ‘Convert.ToInt32’ instead of the ‘(int)’ cast, the runtime error goes away and we get the same output as the first code snippet from above.

decimal decNumber = 2.2M;
object number = decNumber;
int intNumber = Convert.ToInt32(number);
Console.WriteLine("intNumber = {0}", intNumber);

This works and solves the immediate problem, but I couldn’t help but wonder: why does Convert.ToInt32 work here when (int) doesn’t?

We’ll break out Reflector and take a closer look in the next post.

 

Posted On Monday, November 30, 2009 7:58 PM | Comments (0)
The Future of Television?

Updated on 01/05/2010: I was talking about this with some folks over the Christmas holiday and formulated some new thoughts on it.

Several months back I finally got fed up with paying roughly $150 per month for my cable television and internet service. I had been slowly adding packages to my cable service over the years (DVR, digital cable, HD channels, etc.) and it had all gotten out of control. I had several hundred channels to watch but maybe watched 10-15 of them on a regular basis (or at all, for that matter). When I started looking into cutting back my service I discovered there seemed to be a huge chasm in pricing between the absolute bare-bones basic cable and the minimum you needed to get high-definition channels. I ultimately decided to bite the bullet and cut back to true basic cable. My monthly bill went from $150-$160 to roughly $70.

To help fill the gap I also put together a home theater PC, roughly following Jeff Atwood’s guidelines in his excellent post: Building Your Own Home Theater PC. Adding a TV tuner to this setup gave me the ability to run Windows Media Center as a DVR for the few basic cable channels we still got, and even let me tune in some local channels in high definition via clear QAM. In addition, software like Hulu Desktop let us get a few shows that we couldn’t get through our cable service anymore. It took some adjusting, but we hardly miss having a huge, bloated digital cable package now. It was expensive, and it encouraged spending a lot of time cycling through the channel guide looking for something to waste time watching.

We’ve been pretty happy with this setup for a few months now and have recently started watching more and more shows on Hulu, even when we have the ability to DVR them via our basic cable package. In most cases the shows are available on Hulu the day after they air (though there are many shows that are delayed), and oftentimes the video quality we get off of Hulu Desktop in full screen on our television is better than a DVR recording off of an analog cable station. Throw in the fact that Hulu will never accidentally cut off the end of a show due to a timing issue or a football game running long, and it becomes an increasingly attractive option over the DVR. Some television networks also allow streaming of shows in HD via Flash players on their websites, and the quality is really quite good; almost indistinguishable from watching an HD broadcast via cable. I would say that we currently watch about half to two thirds of our usual shows via some online streaming service as opposed to watching them live or via the DVR. The other day we were watching something when Hulu launched into one of its in-show advertisements. I instinctively reached for the remote control to hit fast-forward, thinking that I was watching a DVR recording. Obviously that didn’t work, as Hulu won’t let you fast forward through ads. Here’s where I realized something pretty important:

I didn’t care.

I don’t mind watching one thirty-second to one minute ad during the ad breaks of a show on Hulu. I don’t really mind ads that much at all, honestly. If it means that I can get decent quality video streamed to my television via a home theater PC without giving a cable company more of my money I’m even happy to watch an advertisement or two. Then it hit me: this model of delivering content is the way that everything will eventually move. Bandwidth to the home is getting faster and cheaper and people are becoming increasingly used to being able to watch things on their own schedule due to the prevalence of DVRs. A DVR essentially takes all the control away from the people who deliver the content (television stations and advertisers) and gives it all to the consumer. Once recorded, the consumer has a lot of control over that content including the ability to circumvent advertisements by fast forwarding through them.

The Hulu model is completely different. They don’t let you “own” the content because they simply serve it up to you directly and remove the option of recording/downloading it for later viewing. They can force you to watch the advertisements, and I don’t think consumers will mind. I’m waiting for the first “mainstream” television show to come out with an online-exclusive air schedule. It just makes sense: make it available online when you would normally want to air it live, let people watch it when they want, and subsidize the bandwidth with advertisements. It’s more or less the original television revenue model, with the main difference being that today’s technology allows people to watch the shows whenever they want. You reach a larger audience this way and can show any kind of advertisements you want.

Some folks might argue that people will continue to prefer the DVR model because it allows them to fast forward through commercials. I say that DVRs became popular not because they let people fast forward through commercials, but because they provided an easy means for people to watch the shows they like on their own schedule. Fast forwarding through the commercials is just icing on the cake. Services like Hulu let me watch things on my own schedule but force me to watch the commercials, which is fine by me. I think the majority of the television-watching public will agree.

I think that certain types of programming, like news shows or anything else that could be time sensitive, will likely remain on broadcast television. I don’t think I’d care about watching the morning news from yesterday. That type of programming is just stuff you turn on when you have a few minutes while getting ready for work or cooking dinner; it’s not something that you necessarily sit down specifically to watch.

Posted On Thursday, November 26, 2009 8:35 PM | Comments (1)
Why does SubSonic’s SimpleRepository ‘Add<T>’ return a decimal instead of an int? (Part 2)

Last time I was taking a look at SubSonic’s SimpleRepository functionality and wondering about the return value of the ‘Add<T>’ method. More specifically, I was wondering why the ‘object’ instance being returned was typed as a decimal rather than an int when the object I was persisting had a primary key field that is typed as an int.

I had discovered that while Add&lt;T&gt; was returning a decimal, it was also updating the primary key field (PostID on my Post class instance in this case) with the same value; essentially I was getting the correct number back both from the return value and from the PostID field, but the two values were typed differently.
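A quick way to see this for yourself (reusing the ‘Post’ class and the ‘SampleDB’ connection string from the earlier post, and assuming ‘newPost’ is a populated Post instance) is to compare the run-time types on each side:

SimpleRepository repo = new SimpleRepository("SampleDB", SimpleRepositoryOptions.RunMigrations);
object returnValue = repo.Add<Post>(newPost);

// Both hold the same identity value, but they are typed differently.
Console.WriteLine(returnValue.GetType().Name);     // Decimal
Console.WriteLine(newPost.PostID.GetType().Name);  // Int32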

Since SubSonic is an open source project I have the luxury of pulling down the code and having a look for myself. This was easily accomplished by doing a git clone of the repository at GitHub (git://github.com/subsonic/SubSonic-3.0.git) to a folder on my local machine. Opening the solution and building the source code only took a second, and I was then able to reference the newly built SubSonic.Core assembly in the bin folder of my local SubSonic copy directly from my sample project so that I could easily F11 into the Add&lt;T&gt; method and see what’s going on.

This little block of code appeared to be the origin of the return value for the method:

object result = null;
using(var rdr = item.ToInsertQuery(_provider).ExecuteReader())
{
    if(rdr.Read())
        result = rdr[0];
}

 

The ‘rdr’ variable above is a simple System.Data.IDataReader, the run-time instance of which is a SqlDataReader in this case. To try and understand what’s happening it’s helpful to know what the SQL that the data reader is reading from looks like. Drilling down a little bit further into the ‘ToInsertQuery’ method can show us the SQL that’s being built up at run time:

[Screenshot: the INSERT SQL string being built up at run time, viewed in the debugger]

 

Pulling that SQL out and formatting it some gives us this:

INSERT INTO [Posts](
  [Posts].[Title],
  [Posts].[Body],
  ...)
VALUES (
  @ins_PostsTitle,
  @ins_PostsBody,
  ...)

SELECT SCOPE_IDENTITY() as new_id

 

So our data reader is reading back the result of ‘SELECT SCOPE_IDENTITY()’. Taking a quick look at the ‘Posts’ table that the SimpleRepository created in my database reveals that the PostID column is indeed set as the primary key/identity and is typed as an int. Some poking around in the MSDN documentation reveals what’s going on:

This article on SCOPE_IDENTITY shows that it has a return type of ‘NUMERIC(38,0)’, while this article on SQL Server Data Type Mappings in ADO.NET shows us that the numeric SQL Server type maps to the ‘decimal’ .NET Framework type automatically. So that mystery is solved.
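Incidentally, you can see the same mapping without SubSonic in the picture at all. Here’s an illustrative plain ADO.NET sketch (the connection string and the ‘Widgets’ table are hypothetical, and it assumes a reference to System.Data.SqlClient); ExecuteScalar hands the result of SELECT SCOPE_IDENTITY() back as a boxed decimal:

using (var conn = new SqlConnection(@"Data Source=.;Initial Catalog=SampleDB;Integrated Security=True"))
using (var cmd = new SqlCommand(
    "INSERT INTO [Widgets]([Name]) VALUES (@name); SELECT SCOPE_IDENTITY() as new_id", conn))
{
    cmd.Parameters.AddWithValue("@name", "Test widget");
    conn.Open();

    object newId = cmd.ExecuteScalar();    // run-time type is System.Decimal
    int widgetId = Convert.ToInt32(newId); // works
    //int widgetId = (int)newId;           // would throw an InvalidCastException
}

That still doesn’t explain how the integer-typed ‘PostID’ field was correctly updated with the new identity value while the return type remained a decimal. The next code block down in Add&lt;T&gt; shows us how: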

if (result != null && result != DBNull.Value) {
    try {
        var tbl = _provider.FindOrCreateTable(typeof(T));
        var prop = item.GetType().GetProperty(tbl.PrimaryKey.Name);
        var settable = result.ChangeTypeTo(prop.PropertyType);
        prop.SetValue(item, settable, null);
    } catch(Exception x) {
        //swallow it - I don't like this per se but this is a convenience and we
        //don't want to throw the whole thing just because we can't auto-set the value
    }
}
return result;

The point of this is to convert the ‘result’ variable to the type of the primary key of the object being persisted. I’m not sure that I agree with swallowing the exception here, though that’s easy for me to say as an outside observer who hasn’t poured hours and hours into this code. The main reason I don’t agree with it is that I feel like the calling method really needs to be able to rely on getting back the identity of the newly created record, and if something blows up I would want to know about it. You can always still rely on the ‘result’ object that gets returned, but as we’ve already seen you can’t simply cast that to an int as you might think you could. I’m kind of curious to know under what circumstances this code throws an exception and see if there’s any way to make it more reliable. I can see that the definition of the ‘ChangeTypeTo’ extension method can explicitly throw an exception when the underlying database is SQLite, but the exception thrown also hints at the workaround for that issue. I suppose this kind of thing can be the cost of doing business when you’re trying to support multiple database platforms; not everyone has the luxury of being “SQL Server only”.

Oh, and the (Exception x) isn’t needed in this case since we’re not doing anything with the caught exception object; I think a simple ‘catch’ would do just fine, but I digress. :-)

It wouldn’t be very difficult to modify this code to return a properly typed ‘object’, since we’re already trying to convert the ‘result’ object in order to set the primary key field on the provided ‘Post’ instance. Something like this would do it:

object typedResult = null;
if (result != null && result != DBNull.Value) {
    try {
        var tbl = _provider.FindOrCreateTable(typeof(T));
        var prop = item.GetType().GetProperty(tbl.PrimaryKey.Name);
        var settable = result.ChangeTypeTo(prop.PropertyType);
        typedResult = settable;
        prop.SetValue(item, settable, null);
    } catch(Exception x) {
        //swallow it - I don't like this per se but this is a convenience and we
        //don't want to throw the whole thing just because we can't auto-set the value
    }
}

return typedResult ?? result;

 

This would attempt to return a properly typed result if the type conversion was successful, but would fall back to the original result if needed. This would let the calling code look closer to what I originally expected would work:

SimpleRepository repo = new SimpleRepository("SampleDB", SimpleRepositoryOptions.RunMigrations);
object returnValue = repo.Add<Post>(newPost);
int newPostID = (int)returnValue;
 

The problem here is that since the type conversion could swallow an exception, there’s no guarantee that our cast to an int would work at runtime. This could pose an issue if you were relying on being able to determine the newly created ID of the object immediately after it’s added. For example, you might want to take the user to the ‘view’ screen for the post right after they create it. So what’s the solution? I’m not sure that I have the right answer, especially if you don’t have any other means to uniquely identify your records. I think it’s pretty safe to assume that you’re going to get some kind of return value back. The calling code could ‘ToString()’ the returned object and use Int32.Parse, but that kind of smells to me. You could also use ‘Convert.ToInt32’, which I think I like better. I think I would also be in favor of removing the empty catch block so you could rely on the type conversion when the method returns. I think it partially comes down to whether or not you think these potentially platform-specific quirks should be the burden of the library or of the library’s consumer. Given that the consumer is always going to be in a better position to know about the specific needs of the context in which the library is going to be used, I think I’m leaning toward the latter.
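For reference, here’s a quick sketch of what those two calling-code options look like side by side:

object returnValue = repo.Add<Post>(newPost);

// Option 1: round-trip through a string (works, but smells a bit)
int newPostID = Int32.Parse(returnValue.ToString());

// Option 2: let Convert handle it (decimal implements IConvertible, so this is safe)
int newPostID2 = Convert.ToInt32(returnValue);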

That said, this is open source, so you can always fork it and modify it for your own purposes, which is part of the fun. :-)

Posted On Wednesday, November 18, 2009 8:16 PM | Comments (2)
Why does SubSonic’s SimpleRepository ‘Add<T>’ return a decimal instead of an int? (Part 1)

I’ve been spending some time lately digging into SubSonic 3 and have really enjoyed working with it so far. I love how “low friction” it is to get up and running. I’ve been particularly impressed with the SimpleRepository in this regard. It definitely lives up to its name by providing truly simple data access functionality in a pretty sane and straightforward way. That said, I don’t think it’s the best choice for every project, but if you don’t have to care much about the implementation details of your database then I can see it being a really useful tool.

For whatever reason I’ve been using a “blog engine” as my domain of choice lately when working up sample code (I guess I finally got tired of endlessly building ‘employee management’ or ‘product inventory’ models) and decided to see how I might be able to leverage SimpleRepository in a scenario like this.

I won’t go too far into the mechanics of using SimpleRepository (great walkthroughs are available at the SubSonic project site) but the basic idea is that you create POCO classes to define the data that you want to be able to persist. For the purposes of this example I started with a very simple ‘Post’ entity that ended up looking like this:

    public class Post
    {
        public int PostID { get; set; }

        public string Title { get; set; }

        public string Body { get; set; }

        public string AuthorName { get; set; }

        public DateTime PublishedOn { get; set; }

        public DateTime CreatedOn { get; set; }
    }

SubSonic can take an object like this and automatically create a corresponding table in SQL Server on the fly. Taking a “convention over configuration” philosophy, SubSonic sees an int field on the ‘Post’ class called ‘PostID’ and makes that both the primary key and identity on the corresponding ‘Posts’ table. When creating a new post it’ll also help you out by bringing back the newly created ‘PostID’ from the database. Creating a new post might look something like this (assuming a valid connection string named ‘SampleDB’ is present in the config file):

   1:          public int CreatePost(string title, string body, string authorName, DateTime publishDate)
   2:          {
   3:              Post newPost = new Post
   4:              {
   5:                  Title = title,
   6:                  Body = body,
   7:                  AuthorName = authorName,
   8:                  PublishedOn = publishDate,
   9:                  CreatedOn = DateTime.Now
  10:              };
  11:   
  12:              SimpleRepository repo = new SimpleRepository("SampleDB", SimpleRepositoryOptions.RunMigrations);
  13:              int newPostID = (int)repo.Add<Post>(newPost);
  14:              return newPostID;
  15:          }

The ‘Add<T>’ method of the SimpleRepository returns an ‘object’ which I presumed would be an integer containing the new PostID. I wanted to be able to cast that to a local variable and then return the new ID to the caller of this method. I was somewhat surprised to see that this code blows up on line 13 with an ‘InvalidCastException’. Apparently the object being returned isn’t directly cast-able to an integer. Making a quick change to the code, setting a breakpoint, and digging in some with the ‘Immediate’ window revealed the following:

[Screenshot: Immediate window showing ‘returnValue’ and ‘newPost.PostID’ both equal to 7, typed as decimal and int respectively]

The ‘Add<T>’ method was returning a decimal instead of an int. The ‘newPost.PostID’ and ‘returnValue’ were both being set to 7 (which was the correct value after looking in the database) but they were typed differently. Now, if I were smart I’d just resign myself to simply using the ‘PostID’ to determine the new identity of the created Post, but where’s the fun in that? ;-)
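For what it’s worth, the “boring” approach would just lean on the fact that Add&lt;T&gt; also writes the new identity value back onto the instance you pass in, something like this:

SimpleRepository repo = new SimpleRepository("SampleDB", SimpleRepositoryOptions.RunMigrations);
repo.Add<Post>(newPost);

// Add<T> has already pushed the new identity value into the PostID property.
int newPostID = newPost.PostID;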

I decided instead to dig down into the SimpleRepository source code and take a closer look at the ‘Add&lt;T&gt;’ method definition. More on those findings in the next post…

Posted On Sunday, November 15, 2009 6:47 PM | Comments (1)
Resolving Dropbox hanging when relocating local folder

 

Let me begin this post by saying that I’m a huge fan of Dropbox. It’s a fantastic service that is dead simple and “just works”. That said, I hit a snag with it recently that was a bit frustrating and took me some time to get straightened out.

Dropbox is a service that lets you install a small client application that watches a folder on your computer. Whenever anything is added, removed, or changed within that folder it automatically uploads that change to its servers. You can then access those files via their web interface, or on any other computer with the client installed that is associated with your Dropbox account. In this way it not only gives you an “off-site backup” of your stuff, but also lets you keep some files synced between multiple machines. I use it to keep certain files synced between my laptop, our home theater PC, and my work PC.

When I first started using Dropbox, it would set its root synchronization folder to ‘<user>\My Documents\My Dropbox\’. I understand this decision, as most users likely put “documents” into this folder, making a sub-folder within “My Documents” a logical location choice. It was a bit of a drag for me though, as I sometimes wanted to keep code in my Dropbox. My Visual Studio solution and project names can sometimes be pretty long because I like fairly verbose namespaces, and this was causing the full paths for some of my files to exceed the maximum allowed length.

Dropbox later introduced a feature allowing you to specify where you wanted the root Dropbox folder to live on your hard drive. This let you have a much shorter root path like “C:\My Dropbox\”, which is much more conducive to my project naming conventions :-)

When you install the Dropbox client for the first time it gives you the option to choose where you want the ‘My Dropbox’ folder to live. You can also access the ‘Preferences’ dialog at any time following the installation to change this location. This kicks off a process where Dropbox attempts to move all of your files from the current location to the new location. I performed this on my work PC the other day and encountered an error about 2/3 of the way through the process. The error informed me that I would need to re-associate my computer with my Dropbox account. After clicking ‘OK’ on the error dialog the Dropbox process hung and gave me an hourglass pointer. After a minute or two of this I opted to uninstall the Dropbox client from Control Panel and reboot my machine.

After rebooting I attempted to re-install Dropbox, only to have it hang again. I opened Task Manager to try and kill the process directly but received an error stating that the ‘operation was invalid’. Performing an uninstall was the only way to get the process to stop. My next attempt was to kick off the installation process and let it run overnight after I went home, on the off chance that it just needed more time. I arrived the next morning to find it still hung, and I was still unable to kill the process. I uninstalled again and tried using System Restore to roll back my settings to the day before I first encountered this issue, but even that didn’t help.

I was left very frustrated and seemingly unable to get Dropbox installed. On a hunch I decided to see if my uninstalls were cleanly removing all Dropbox-related files. On my Windows XP machine the Dropbox installation folder was:

C:\Documents And Settings\jesse.taber\Local Settings\Application Data\Dropbox\

I found that there seemed to be a number of files left in this folder. Trying to delete them resulted in an error indicating that at least one of them was currently in use. I checked to see if Dropbox.exe was currently running and found that it wasn’t. After drilling down a bit more I found that the offending file that couldn’t be cleaned up was ‘DropboxExt.3.dll’. I then grabbed Process Explorer and did a File/Handle search to figure out that ‘explorer.exe’ had a handle to that file.

I tried using Process Explorer to close the handle, but was still unable to delete the file. I then used Process Explorer to kill ‘explorer.exe’ (I love doing that) and opened a command prompt. I was then finally able to delete the Dropbox folder completely. I closed out that command prompt window, started explorer.exe again, and gave the Dropbox installation one last whirl. It installed without a hitch!

I hope this helps anyone else who has this problem and stumbles across this post. For what it’s worth I later tried to re-create this issue and was unable to; the Dropbox ‘move’ operation worked perfectly on the second try.

Posted On Friday, November 13, 2009 7:04 PM | Comments (2)