Totzkeeeeee's Blog

Just because I can...


Wednesday, August 28, 2013 #

The day has finally arrived.  My good friend and colleague, Lori Lalonde, and I have co-authored a book for Apress that is focused on developing applications for Windows Phone 8.  It’s been quite the adventure and a lot more work than I ever would have expected.

Windows Phone 8 Recipes is a problem-solution based guide to the Windows Phone 8 platform. Recipes are grouped according to features of the platform and ways of interacting with the device. Solutions are given in C# and XAML, so you can take your existing .NET skills and apply them to this exciting new venture.

  • Not sure how to get started? No need to worry, there’s a recipe for that!
  • Always wondered what it takes to add cool features like gesture support, maps integration, or speech recognition into your app? We've got it covered!
  • Already have a portfolio of Windows Phone 7 apps that needs to be upgraded? We have a recipe for that too!

The book starts by guiding you through the setup of your development environment, including links to useful tools and resources. Core chapters range from coding live tiles and notifications to interacting with the camera and location sensor. Later chapters cover external services including Windows Azure Mobile Services, the Live SDK, and the Microsoft Advertising SDK, so you can take your app to a professional level. Finally, you'll find out how to publish and maintain your app in the Windows Phone Store.

Whether you're migrating from Windows Phone 7 or starting from scratch, Windows Phone 8 Recipes has the code you need to bring your app idea to life.

What you’ll learn
  • Set up your development environment with the Windows Phone 8 SDK.
  • Upgrade your existing Windows Phone 7 apps to Windows Phone 8.
  • Meet and try out the new features provided in the Windows Phone 8 SDK.
  • Bring your apps to life with live tiles, notifications, and cloud services.
  • Discover the easy steps to setting up your own Windows Phone Store account.
  • Learn how to submit your apps for publication to the Windows Phone Store.
  • Use Windows Azure Mobile Services to extend your application to the cloud.
Who this book is for

Windows Phone 8 Recipes is for the developer who has a .NET background, is familiar with C# and either WPF or Silverlight, and is ready to tap into a new and exciting market in mobile app development.

 

I hope that you will check the book out and that it serves you well.  Should you have any questions or concerns, you can always reach Lori or me via an email address we’ve created just for the book: wp8recipesbook@outlook.com.  You can purchase the book via all of the usual outlets as well as directly from Apress.

Dave
Just because I can…


Wednesday, March 14, 2012 #

Don’t ask me why I did it because I don’t have a good answer. Just because I can, probably.

I have a solution with several DLL projects, some of which depend on others.  They all reference (via project references) a common “Library” project, for instance.  All of these are then referenced by the main application project.  It occurred to me to set the “Copy Local” property of references to things like the Library to False.  Why have all that junk copied to your output folder when it’s not really needed there?  The main executable has a reference to all of them anyways so it just works.  Besides, all of the .NET Framework references are set to False.
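For reference, “Copy Local” in the IDE maps to the <Private> metadata on the reference in the project file.  A minimal sketch of what that looks like, with hypothetical names:

    <!-- Sketch of a .csproj project reference; the name and GUID are hypothetical. -->
    <ProjectReference Include="..\Library\Library.csproj">
      <Project>{11111111-2222-3333-4444-555555555555}</Project>
      <Name>Library</Name>
      <!-- Copy Local = False: Library.dll is not copied to this project's output folder. -->
      <Private>False</Private>
    </ProjectReference>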

Well, mostly works.

The solution also contains a Unit Test project.  This project has a reference to the Library as well as the DLL under test (let’s call that one “Bob.dll”).  The Unit Test project will not compile.  It gives the error that it could not load the file or assembly Bob.dll or one of its dependencies, blah blah blah.  To which I reply, “F**K you Bob.”

You can also end up with misleading errors stating that namespaces don’t exist for your using statements, for assemblies that are unrelated to Bob.  It all depends on which files fail to load and in what order, or maybe on how many angels can dance on the head of a pin.  Probably the latter.

After making sure that the Unit Test project had a reference to all of the same things that Bob is dependent upon, and that they were all set to copy locally, it still wouldn’t work.  Same error.  The only thing that fixed it was to set “Copy Local” to true in all projects.  Further experimentation shows that this only needs to be done for project references.  If you are referencing a third-party binary, the build appears to be fine with it whether the binary is in the GAC or not.

The real bitch of it is that the whole solution compiles and runs fine with the exception of the Unit Test project.

I know I sort of asked for it but still, that is messed up.

Dave
Just because I can…


Thursday, February 10, 2011 #

I’m currently working for a client on a PowerBuilder to WPF migration.  It’s one of those “I could tell you, but I’d have to kill you” kinds of clients and the quick-lime pits are currently occupied by the EMC tech…but I’ve said too much already.

At approximately 3 or 4 PM users of the Batman[1] application here in Gotham[1] started to experience problems accessing the application.  Batman[2] is a document management system that also integrates with the ERP system.  Very little goes on here that doesn’t involve Batman in some way.  The errors being received seemed to point to network issues (TCP protocol error, connection forcibly closed by the remote host, etc.) but the real issue was much more insidious.

Connecting to the database via SSMS and performing selects on certain tables underlying the application areas that were having problems started to reveal the issue.  You couldn’t do a SELECT * FROM MyTable without it bombing and giving the same error noted above.  A run of DBCC CHECKDB revealed 14 tables with corruption.  One of the tables with issues was the Document table.  Pretty central to a “document management” system.  Information was obtained from IT that a single drive in the SAN went bad in the night.  A new drive was in place and was working fine.  The partition that held the Batman database is configured for RAID Level 5 so a single drive failure shouldn’t have caused any trouble and yet, the database is corrupted. 

They do hourly incremental backups here so the first thing done was to try a restore.  A restore of the most recent backup failed so they worked backwards until they hit a good point.  This successful restore was for a backup at 3AM – a full day behind.  This time also roughly corresponds with the time the SAN started to report the drive failure.  The plot thickens…

I got my hands on the output from DBCC CHECKDB and noticed a pattern.  What’s sad is that nobody who should have noticed the pattern in the DBCC output actually did.  There was a rush to do things to try and recover the data before anybody really understood what was wrong with it in the first place.  Cooler heads must prevail in these circumstances; do some investigation and lay out a plan of action or you could end up making things worse[3].  DBCC CHECKDB also told us that:

repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB

Yikes.  That means that the database is so messed up that you’re definitely going to lose some stuff when you repair it to get it back to a consistent state.  All the more reason to do a little more investigation into the problem.  Rescuing this database is preferable to having to export all of the data possible from it into a new one.  This is a fifteen-year-old application with about seven hundred tables.  There are TRIGGERS everywhere, not to mention the referential integrity constraints to deal with.  Only fourteen of the tables have an issue.  We have a good backup that is missing the last 24 hours of business, which means we could have a “do-over” of yesterday, but that’s not a very palatable option either.

All of the affected tables had TEXT columns and all of the errors were about LOB data types and orphaned off-row data which basically means TEXT, IMAGE or NTEXT columns.  If we did a SELECT on an affected table and excluded those columns, we got all of the rows.  We exported that data into a separate database.  Things are looking up. 

Working on a copy of the production database, we then ran DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS and that “fixed” everything up.  The allow-data-loss option deletes the bad rows.  This isn’t too horrible as we have all of those rows, minus the text fields, from our earlier export.  Now I could LEFT JOIN to the exported data to find the missing rows and INSERT them minus the TEXT column data.
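In rough T-SQL terms, the salvage looked something like this (a sketch only; the database, table, and column names are all hypothetical):

    -- Sketch only; database, table, and column names are hypothetical.
    -- 1. Salvage every row from an affected table, excluding the corrupt LOB columns.
    SELECT DocID, DocCode, CreatedDate        -- everything EXCEPT the TEXT columns
    INTO   Rescue.dbo.Document
    FROM   Batman.dbo.Document;

    -- 2. On a COPY of the database, repair to a consistent state (this deletes bad rows).
    ALTER DATABASE BatmanCopy SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DBCC CHECKDB ('BatmanCopy', REPAIR_ALLOW_DATA_LOSS);
    ALTER DATABASE BatmanCopy SET MULTI_USER;

    -- 3. Put back the rows the repair deleted, minus the TEXT data...
    INSERT INTO BatmanCopy.dbo.Document (DocID, DocCode, CreatedDate)
    SELECT r.DocID, r.DocCode, r.CreatedDate
    FROM   Rescue.dbo.Document r
    LEFT JOIN BatmanCopy.dbo.Document d ON d.DocID = r.DocID
    WHERE  d.DocID IS NULL;

    -- 4. ...then recover the TEXT columns from the last good (3AM) restore.
    UPDATE d
    SET    d.DocText = b.DocText
    FROM   BatmanCopy.dbo.Document d
    JOIN   Restored3AM.dbo.Document b ON b.DocID = d.DocID
    WHERE  d.DocText IS NULL;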

We had the restored data from the good 3AM backup that we could now JOIN to and, with fingers crossed, recover the missing TEXT column information.  We got lucky in that all of the affected rows were old and in the end we didn’t lose anything.  :O  All of the row counts along the way worked out and it looks like we dodged a major bullet here.

We’ve heard back from EMC and it turns out the SAN firmware they were running here is buggy.  This thing is only a couple of months old.  Grrr….  They dispatched a technician that night to come and update it.  That explains why RAID didn’t save us.

All-in-all this could have been a lot worse.  Given the root cause here, they basically won the lottery in not losing anything.

Here are a few links to some helpful posts on the SQL Server Engine blog.  I love the title of the first one:

Which part of 'REPAIR_ALLOW_DATA_LOSS' isn't clear?

CHECKDB (Part 8): Can repair fix everything?
(in fact, read the whole series)

Ta da! Emergency mode repair
(we didn’t have to resort to this one thank goodness)

 

Dave
Just because I can…

 

[1] Names have been changed to protect the guilty.

[2] I'm Batman.

[3] And if I'm the coolest head in the room, you've got even bigger problems...


Monday, December 6, 2010 #

Let’s say that you have a list of objects that contains duplicate items and you want to extract a subset of distinct items.  This is pretty straightforward in the trivial case where the duplicate objects are considered the same, such as in the following example:
    using System;
    using System.Collections.Generic;
    using System.Linq;   // Distinct() is a LINQ extension method

    List<int> ages = new List<int> { 21, 46, 46, 55, 17, 21, 55, 55 };
    IEnumerable<int> distinctAges = ages.Distinct();
    Console.WriteLine("Distinct ages:");
    foreach (int age in distinctAges)
    {
        Console.WriteLine(age);
    }
    /*
    This code produces the following output:

    Distinct ages:
    21
    46
    55
    17
    */

What if you are working with reference types instead?  Imagine a list of search results where items in the results, while unique in and of themselves, also point to a parent.  We’d like to be able to select a bunch of items in the list but then see only a distinct list of parents.  Distinct isn’t going to help us much on its own as all of the items are distinct already.  Perhaps we can create a class with just the information we are interested in, like the Id and Name of the parents.
    public class SelectedItem
    {
        public int ItemID { get; set; }
        public string DisplayName { get; set; }
    }

We can then use LINQ to populate a list containing objects with just the information we are interested in and then get rid of the duplicates.

    IEnumerable<SelectedItem> list =
        (from item in ResultView.SelectedRows.OfType<Contract.ReceiptSelectResults>()
            select new SelectedItem { ItemID = item.ParentId, DisplayName = item.ParentName })
            .Distinct();

Most of you will have guessed that this didn’t work.  Even though some of our objects are now duplicates, because we are working with reference types it doesn’t matter that their properties are the same; they’re still considered unique.  What we need is a way to define equality for the Distinct() extension method.

IEqualityComparer<T>

Looking at the Distinct method we see that there is an overload that accepts an IEqualityComparer<T>.  We can simply create a class that implements this interface and that allows us to define equality for our SelectedItem class.

    public class SelectedItemComparer : IEqualityComparer<SelectedItem>
    {
        public bool Equals(SelectedItem abc, SelectedItem def)
        {
            return abc.ItemID == def.ItemID && abc.DisplayName == def.DisplayName;
        }

        public int GetHashCode(SelectedItem obj)
        {
            string code = obj.DisplayName + obj.ItemID.ToString();
            return code.GetHashCode();
        }
    }

In the Equals method we simply do whatever comparisons are necessary to determine equality and then return true or false.  Take note of the implementation of the GetHashCode method.  GetHashCode must return the same value for two different objects if our Equals method says they are equal.  Get this wrong and your comparer won’t work.  Even though the Equals method returns true, mismatched hash codes will cause the comparison to fail.  For our example, we simply build a string from the properties of the object and then call GetHashCode() on that.
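The string trick is easy to remember, but it allocates a new string on every call and can collide when values run together (DisplayName “Item” with ItemID 12 and DisplayName “Item1” with ItemID 2 both produce “Item12”).  Here’s a sketch of an allocation-free alternative:

    // Sketch: combine the property hash codes directly instead of building a string.
    public int GetHashCode(SelectedItem obj)
    {
        unchecked // let the arithmetic wrap on overflow instead of throwing
        {
            int hash = 17;
            hash = hash * 31 + obj.ItemID.GetHashCode();
            hash = hash * 31 + (obj.DisplayName ?? string.Empty).GetHashCode();
            return hash;
        }
    }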

Now all we have to do is pass an instance of our IEqualityComparer<T> to Distinct and all will be well:

IEnumerable<SelectedItem> list =
    (from item in ResultView.SelectedRows.OfType<Contract.ReceiptSelectResults>()
        select new SelectedItem { ItemID = item.ParentId, DisplayName = item.ParentName })
        .Distinct(new SelectedItemComparer());

 

Enjoy.

Dave
Just because I can…


Wednesday, September 29, 2010 #

Watch THIS.

There’s no better description of it than the one posted by the person who uploaded the video.

Microsoft sent this tape to retailers to explain the benefits of Windows 386. Boring until the 7 minute mark when the production is taken over by crack-smoking monkeys.

Watch it from the start to get the full effect but be warned:  The goggles do nothing!

If you didn’t just jump ahead to the 7-minute mark but watched it right through, then guess what?

Dave
Just because I can…


Tuesday, September 28, 2010 #

You’re reading a post over on The Old New Thing (first clue) and come across the following comment (second clue, you’re actually reading the comments) from Jason Truesdell:

Redmond, WA, circa 2010. Forced to deal with misbehaving apps that use undocumented methods to force invocation of application compatibility shims, the app compatibility team created a shim to simulate the application compatibility shim mechanism. Shortly thereafter, the universe ceased to exist due to the hitherto-unknown metashim paradox.

Not only do you get the joke but you also laugh.  Out loud.

 

Dave
Just because I can…


Thursday, September 23, 2010 #

I’ve posted in the Entity Framework forum about this but I wanted to document it here as well because when I was researching this problem I found very little information.  Here’s the deal:

I have a single project with multiple models.  Some of the models have entities that are mapped back to the same tables in the store.  (i.e. two models have a Customer entity that is mapped back to the Customer table.)  I am using the Self-Tracking Entity templates; each template is in its own folder with its own namespace, and the template code points to the actual edmx file that is in another project.  Those templates are all in a separate assembly from the project that contains the models and the context templates.  The project with the models and contexts has a reference to the assembly containing the generated self-tracking entities.  Everything generates and compiles fine.

At runtime I get the following error when I try to use one of the generated contexts:

--------------------------------------

Schema specified is not valid. Errors:

The mapping of CLR type to EDM type is ambiguous because multiple CLR types match the EDM type 'Customer'. Previously found CLR type 'MyProject.Common.Contracts.Data.OrderManagementModel.Customer', newly found CLR type 'MyProject.Common.Contracts.Data.CustomerManagementModel.Customer'.

--------------------------------------

Lingzhi Sun, one of the Microsoft moderators, asked for some clarification so I elaborated for him:

I have discovered that this has been a known issue since the beginning of EF.  I'm sorry I don't have a link for you.  I'm not sure if this has been filed as a bug on Connect or anywhere.  Daniel Simmons of the EF team can likely confirm this for you.  I'll elaborate more here, as in my search I found little out there that describes this exact problem.

You are correct that the STE classes are in a separate assembly and each tt file is in its own directory so that the generated classes are in a separate namespace from the classes generated for other models.  The same is true for the models in a separate assembly. For example:

Model structure:

MyCompany.MyApplication.Server.Data.Customers.CustomerModel.edmx
MyCompany.MyApplication.Server.Data.Orders.OrderModel.edmx

Shared contract library structure:

MyCompany.MyApplication.Common.Contracts.Data.Customers.CustomerSte.tt
MyCompany.MyApplication.Common.Contracts.Data.Orders.OrderSte.tt

The server-side data assembly has a reference to the shared contract assembly.  I've modified the STE tt code to fix up the reference to the edmx file.  I've also modified the server-side tt files that generate the ObjectContext to include an additional using statement so that they know about the STEs in the shared assembly.  As I mentioned, at compile time, everything is happy.

The problem, as I understand it, is that at runtime, when CreateObjectSet<SomeSteType>("SomeSteType") gets called, only the type name is used to find the CLR type.  Namespaces are completely ignored.  Even if you changed the code-gen to produce:

CreateObjectSet<MyCompany.MyApplication.Common.Contracts.Data.Customers.CustomerSte>("CustomerSte")

The namespace is still ignored.  The upshot of this is that if you have two classes within the same assembly that have the same name, you will get the error shown above.  The irony of the whole thing is that the error message tells you the FQN of the types it found but still can't distinguish between them.  Perhaps there is some technical reason why the namespace is ignored when resolving the types, but I can't think of one.  It seems obvious that to be certain you get the right type, you need to use the FQN.
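To make the collision concrete, here is a minimal sketch (all names are hypothetical; the API is EF4's ObjectContext).  Two generated types named Customer live in one assembly and differ only by namespace:

    // Sketch only; names are hypothetical.  Both Customer types live in ONE assembly.
    namespace Contracts.Data.CustomerManagementModel { public class Customer { } }
    namespace Contracts.Data.OrderManagementModel   { public class Customer { } }

    public class OrderContext : System.Data.Objects.ObjectContext
    {
        public OrderContext(string connectionString) : base(connectionString) { }

        public System.Data.Objects.ObjectSet<Contracts.Data.OrderManagementModel.Customer> Customers
        {
            // Throws at runtime: type resolution matches on the bare name "Customer",
            // finds both CLR types in the assembly, and reports the mapping as ambiguous.
            get { return CreateObjectSet<Contracts.Data.OrderManagementModel.Customer>("Customers"); }
        }
    }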

The easiest and most obvious work-around is to simply make sure that every type in the same assembly has a unique name.  Since something like Customer would be common to both the CustomerModel and the OrderModel, you can just rename one of them to Customer1.  In IntelliSense you still type Customer, and in any given context you would only ever see one or the other, so it's not too terrible.  It's just messy.

The other work-around is to separate them into different assemblies.  This is also less than optimal, as I have 10 different models and I'm not even done yet.  (There are hundreds of tables in the database.)

So there, in a nutshell, is the problem.  I'd be interested to know (a) whether this is an actual bug being tracked by the EF team, (b) whether it is on the roadmap for a future update, and (c) what time frame we can expect for a fix.

Dave
Just because I can…


Monday, August 30, 2010 #

On its face, enabling the account lockout policy seems like a good idea.  Get the password wrong (n) times and you’re ok; get it wrong (n + 1) times and your account is locked out for a period of time, typically 30 minutes.  Sometimes a call to the help desk is required to unlock the account if the lockout duration has not been defined.  Even if it is defined, most users are not likely aware of the policy and call anyways.  Even if they are aware of it, how many users do you know that can afford to sit around doing nothing for 30 minutes?

The policy assumes that an attacker has some form of physical access to the network and is trying to brute-force guess passwords.  Again, this sounds like something you’d like to slow down.  But what if the goal isn’t to guess passwords but simply to effect a denial of service?  We don’t even have to assume a malicious “hacker”.  Maybe somebody just wants to hassle one of their co-workers.  All they need to do is try to log on as them 4 or 5 times and the account gets locked out.  The neat part is that the target of the attack won’t notice until the next time they log on or try to access a network resource for the first time.  At that point they will probably think they did it to themselves.

I’ve actually been doing it to myself this week.  I am at a client site where I’ve been before, but my password has been reset since I was here last.  I use my own equipment and am not part of the domain, so everything I access here uses pass-through authentication and cached credentials.  Credentials are cached on a per-resource basis.  Connecting to a file share on one machine with a domain account will not automatically allow me to connect to other resources, even though I may have access permission.  I connect to 4 or 5 different machines here at one time or another.  The first time I try to connect to a resource, Windows tries to use cached credentials, which fail because they have the old password.  Maybe I get the password wrong once myself, and suddenly, even when I type it really, really slowly, I still get denied.

There’s no shortage of legitimate sources of account lockout either.  Here are a bunch of other ways in which account lockout can be triggered:

  • Applications using cached credentials that are stale.
  • Stale service account passwords cached by the Service Control Manager (SCM).
  • Stale logon credentials cached by Stored User Names and Passwords in Control Panel.
  • Scheduled tasks and persistent drive mappings that have stale credentials.
  • Disconnected Terminal Service sessions that use stale credentials.
  • Failure of Active Directory replication between domain controllers.
  • Users logging into two or more computers at once and changing their password on one of them.

You can view and manage your stored credentials using the Credential Manager in Windows Vista/7 (Control Panel –> User Accounts and Family Safety –> Credential Manager) or via the User Accounts applet in Windows XP (Control Panel –> User Accounts –> [User Account], then select Manage My Network Passwords under Related Tasks).

Well, my account should be enabled again now.  Hopefully I’ve taken care of all of the cached credentials that were giving me trouble.

Dave
Just because I can…
(Woo hoo!  I’m in!)


Tuesday, August 24, 2010 #

You’ve heard good things about Visual Studio 2010 but you still need to answer the question “What’s in it for me?”  You need to justify the cost of the new software as well as the not insignificant cost of migration.  The Entity Framework is a powerful tool for creating a conceptual model of your data store and abstracting away the details of data access.  Exposing these conceptual entities over the wire to client applications means one of two things: ADO.NET Data Services (now called WCF Data Services) or hand-rolling your own web services.

An ADO.NET Data Service is simple to implement, but consuming one requires a fairly deep understanding of how the entity tracking works internally, forcing you to manually manage changes to relationships between objects.  The choice of REST (REpresentational State Transfer) for addressing and the Atom Publishing Protocol as the wire format led to some serious performance limitations for all but the simplest of interactions.  Other issues are revealed later in this post.

Rolling your own custom web services is “non-trivial” at best.  Implementing an advanced pattern such as self-tracking entities is not for the faint of heart and probably not worth the effort.  The reasons for this are described below.

I’ve put together this information to try and distill out the key benefits of the new features in EF 4 that make implementing web services much easier.  Most of the available information gets rather technical so I’ve tried to point out the sections that are most relevant to the decision process.  You can then have your technical staff delve deeper if they so desire.  As you’ll see, the new tools and features available in VS2010 and Entity Framework 4 offer a much better alternative and plenty of flexibility.

First up is an article describing the various n-tier patterns that are in general use.

http://msdn.microsoft.com/en-us/magazine/ee321569.aspx#id0090078

The article describes four different patterns: Change Set, DTOs (Data Transfer Objects), Simple Entities, and Self-Tracking Entities.  Self-Tracking Entities provide the best balance of architectural goodness and ease of implementation; however, simple entities are the only realistically viable choice for use with the .NET 3.5 SP1 version of Entity Framework.  This is described in the section entitled "Implementing N-Tier with the Entity Framework" but the key phrase is:

"The EF can be used to implement any of the four patterns described earlier, but various limitations in the first release of the Framework (shipped as part of Visual Studio 2008 SP1/.NET 3.5 SP1) make patterns other than the simple entities pattern very difficult to implement."

And from "Patterns Other Than Simple Entities in .NET 3.5 SP1":

"Unfortunately, self-tracking entities is the hardest pattern to implement in the SP1 release for two reasons. First, the EF in .NET 3.5 SP1 does not support POCO, so any self-tracking entities that you implement have a dependency on the 3.5 SP1 version of .NET, and the serialization format will not be as suitable for interoperability. You can address this by hand writing proxies for the client, but they will be tricky to implement correctly. Second, one of the nice features of self-tracking entities is that you can create a single graph of related entities with a mix of operations—some entities can be modified, others new, and still others marked for deletion—but implementing a method on the mid-tier to handle such a mixed graph is quite difficult. If you call the Attach method, it will walk the whole graph, attaching everything it can reach. Similarly, if you call the AddObject method, it will walk the whole graph and add everything it can reach. After either of those operations occurs, you will encounter cases in which you cannot easily transition some entities to their intended final state because the state manager allows only certain state transitions. You can move an entity from unchanged to modified, for instance, but you cannot move it from unchanged to added. To attach a mixed graph to the context, you need to shred your graph into individual entities, add or attach each one separately, and then reconnect the relationships. This code is very difficult. "

Skipping ahead to the last section entitled "API Improvements in .NET 4" reveals the following key statement (emphasis mine):

"In the upcoming release of the EF, which will ship with Visual Studio 2010 and .NET 4, we have made a number of improvements to ease the pain of implementing n-tier patterns—especially self-tracking entities. I'll touch on some of the most important features in the following paragraphs."

The next article focuses on "Building N-Tier Apps with EF4"

http://msdn.microsoft.com/en-us/magazine/ee335715.aspx

First up is a list of key new features that support N-Tier development:

  1. New framework methods that support disconnected operations, such as ChangeObjectState and ChangeRelationshipState, which change an entity or relationship to a new state (added or modified, for example); ApplyOriginalValues, which lets you set the original values for an entity; and the new ObjectMaterialized event, which fires whenever an entity is created by the framework.
  2. Support for Plain Old CLR Objects (POCO) and foreign key values on entities. These features let you create entity classes that can be shared between the mid-tier service implementation and other tiers, which may not have the same version of the Entity Framework (.NET 2.0 or Silverlight, for example). POCO objects with foreign keys also have a straightforward serialization format that simplifies interoperability with platforms like Java. The use of foreign keys also enables a much simpler concurrency model for relationships.
  3. T4 templates to customize code generation. These templates provide a way to generate classes implementing the Self-Tracking Entities or DTOs patterns.

And here's a big benefit in the first sentence after that list:

"The Entity Framework team has used these features to implement the Self-Tracking Entities pattern in a template, making that pattern a lot more accessible,...:

They've done pretty much all of the work for you in a code generation template!
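And if you do end up touching the new API directly, the disconnected-update flow looks roughly like this (a sketch only; the OrderEntities context, Customers set, and customer instance are hypothetical names):

    // Sketch of the EF4 ObjectContext API; context and entity set names are hypothetical.
    using (var context = new OrderEntities())
    {
        // "customer" arrived from a client tier in a detached state.
        context.Customers.Attach(customer);

        // Tell the state manager to treat the attached entity as modified...
        context.ObjectStateManager.ChangeObjectState(customer, System.Data.EntityState.Modified);

        // ...and persist the change.
        context.SaveChanges();
    }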

Figure 1 in the article shows a graphic comparison of the various patterns for N-Tier architectures.  Higher up on the y-axis means more architectural goodness and further right on the x-axis means easier to implement.  You can see from the graph that self-tracking entities offer the best balance.

The rest of the article just goes into more details of the actual implementation.

As you can see from this information, moving to VS 2010/.NET 4 can provide enormous value to a project.  I'd guess that roughly 90% of the code you'd have to write (hundreds and likely thousands of lines) gets automatically generated for you and the resulting objects will work with any version of .NET from 2.0 on.

 

Dave
Just because I can…


Friday, July 2, 2010 #

Just started working with the Entity Designer Database Generation Power Pack and I must say it is a very solid first offering. Model-first design is something new in EF4/VS 2010 and it's great to see a tool available that has evolved alongside VS 2010.

I was very happy to see that it comes with the ability to synchronize with a database project right out of the box. In my opinion, this is the most valuable feature. In the past, the actual production database was considered to be the "truth". As our tools have evolved, this has shifted to where the model, however stored, is the "truth".

The conceptual entity model that you create is an abstraction over the actual physical storage of the data. As such, there is no way to model such things as indexes, UDFs, permissions, users, and other strictly database level objects nor should there be. We do not have the whole truth and nothing but the truth (couldn't resist that one) if we rely solely on our conceptual model. Tracking these additional types of objects is the domain of the database and server projects in Visual Studio.

This is the reason I feel that the database project synchronization is such an important feature. We can model our conceptual entities in the EF designer and then synchronize those entities with the rest of the database objects and then keep the whole thing in source control to protect the truth.

I have only one issue at the moment and that is the use of the entity set name when generating the SSDL and DDL for the database.  Database entities should have singular names, and by using the entity set name they become pluralized.  In fact, due to the logic built into the designer that automatically pluralizes entity set names, you get some interesting outcomes, such as a Person entity becoming a People table.

Overall, this is an excellent V1 product and I look forward to its enhancement.

Dave
Just because I can…


Thursday, April 22, 2010 #

Instead of just letting your application crash, you can attach a handler to the Application.DispatcherUnhandledException event and one to AppDomain.CurrentDomain.UnhandledException.  You wire these up in the code-behind of your application, which by default is App.xaml.cs.  You can log these errors or throw up a message box and tell the user what happened.  Then you shut down the app gracefully.  You shut it down because something bad happened that you weren’t expecting and at this point there is no guarantee as to the state of the stack or memory or anything really.  All bets are off.
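Wiring them up looks roughly like this (a sketch; Logger is a hypothetical stand-in for whatever logging your app actually uses):

    // In App.xaml.cs.  Sketch only; Logger is a hypothetical stand-in.
    using System;
    using System.Windows;

    public partial class App : Application
    {
        protected override void OnStartup(StartupEventArgs e)
        {
            base.OnStartup(e);

            // Exceptions thrown on the UI (Dispatcher) thread.
            DispatcherUnhandledException += (sender, args) =>
            {
                Logger.Error(args.Exception);
                MessageBox.Show("Something went wrong. The application will now close.");
                args.Handled = true;  // keep WPF from rethrowing...
                Shutdown(1);          // ...but still exit; the app state can't be trusted.
            };

            // Exceptions from any other thread; the process is coming down regardless.
            AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
                Logger.Error(args.ExceptionObject as Exception);
        }
    }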

If, on the other hand, the handler for UnhandledException is empty, and the handler for DispatcherUnhandledException ends up in a call to a method called LogError(), and the LogError() method is FUCKING EMPTY, and you just swallow the exceptions and keep on running, then, not so much.  I spent nearly a day trying to track down a bug that would have been obvious had something been logged or had the app just crashed.

It’s my own fault I suppose.  I knew these were hooked up.  I just never suspected that there wouldn’t be any implementation at all.  Live and learn.

Customs Man at Heathrow: Anything to declare, Sir?
Jekyll and Hyde: Man has not evolved an inch from the slime that spawned him.
Customs Man at Heathrow: Very Good, Sir.

I tend to agree.

Dave
Just because I can…


Wednesday, April 21, 2010 #

I'm looking at an xml document that gets passed to a COM object (yes, I said the "C" word) to save a new record.  You can tell by the "new|" at the top of the file, before the xml declaration.  If we were updating an existing record, there would be "edit|" at the top instead.  Couldn't you just have a root element with something like:

<myRootElement mode="new">

Ah, here's why that won't work...

There's no single root element but that's ok because next we find that this document is actually several documents.  <?xml version="1.0"?> appears several times.  The final document opens with <myElementStart> and closes with <myElementEnd> so it's not even well-formed.

This isn't a style thing.  This is broken.  I mean, basic well-formed XML only has two rules; three if you count the xml declaration, but a document works fine for DTO purposes without one.

  1. One root element.
  2. Close all elements with a matching tag.

As a result, both ends of this conversation need to speak the same dialect of broken XML in order to communicate.  To join the conversation, you must also learn pidgin XML.

How can you start out so right - XML being the obvious choice in this instance - and then go so horribly wrong?

Dave
Just because I can…


Monday, April 5, 2010 #

Exploits to jailbreak the iPhone are well known.  The iPad runs on the iPhone 3.2 firmware.  What this means is that the iPad was shipped with known security vulnerabilities that would allow someone to gain root access to the device.

Nice.

It’s not like these are security vulnerabilities that are known but have no exploits.  The exploits are numerous and freely available.

Of course, if you fit the demographic, you probably have nothing to worry about.

Magical and Revolutionary?  Hardly.

Dave
Just because I can…


Wednesday, March 24, 2010 #

I mean seriously.  Let’s imagine for a moment that, by some stroke of luck or genius or cosmic accident, you come to be the owner of sex.com.  You’d think you had won the lottery.  That would be like having a license to print money.  I mean really.  Sex is the most searched term on the entire Internet.  Even without any SEO you’d think that your site would show up on the first page of results on Google.

You would think that; and you’d be wrong.  At least in the case of the current owners of that domain name anyways.  The details can be found here but suffice it to say that Escom LLC has managed to fuck it up.  They’ve been forced into bankruptcy by their creditors. 

Something doesn’t smell quite right with the whole thing.  Some guy named Mike Mann (please God, don’t let it be this Mike Mann) is an investor in all three creditors.  WTF?

Seriously.  How hard can it be?

Dave
Just because I can…


Tuesday, March 23, 2010 #

I have a headache and it’s not even 9AM yet.  Well, ok, it’s nearly ten here now in GMT –5 but it’s before nine somewhere still.

Sometimes people will miss the point of something so utterly and completely that one is left wondering how such a person can even dress themselves.

Writing an application using WPF and the Composite Application Library (Prism) means that one must learn the various programming idioms common to these frameworks.  The Windows Forms event-driven model simply will not suffice.  You need to come to grips with the idea of a very loosely coupled application.  Concepts that must be absorbed and internalized include Data Binding, Control and Data Templates, Commands, Dependency Injection, and Inversion of Control, as well as the Supervising Controller, Presentation Model, and Model-View-ViewModel patterns.

It is as simple as that.  Not to embrace these concepts is to invite pain.  It is to invite noodles; and not the holy kind.

Someone actually said to me that “just because it’s not WPF, doesn’t mean it’s wrong.”  And he’s right.  Unless, of course, you are writing a WPF application and especially if you are using the Composite Application Library.

In simple terms, then: YOU’RE DOING IT WRONG!

 

Dave
Just because I can…