Freestyle Coding

Programming in the Real World


Friday, October 24, 2014 #

Well, it's crunch time. In less than 24 hours, I will begin my 24 hour journey into the world of Extra Life.

Sure, there will be good times. I mean, let's face it. I'm going to be playing games for 24 hours. However, when was the last time you did one thing for 24 hours?

Imagine the last road trip you took, especially if you were the driver. Sitting in one place for hours on end, with only the occasional rest break or meal stop to break up the monotony. Imagine that you did a "full day" of 8 hours of driving. Remember how tired you were by the end: the restlessness combined with the physical fatigue, the mental fatigue of focusing on one task for hours upon hours. Even with other people in the car, the overriding task of driving the vehicle wears on you.

Now, imagine doing that for 24 hours.

Now, imagine the children and families that do that for months on end in Children's of Alabama. Only, this time, you're not the one in control of the vehicle. You don't have the luxury of taking a break from the situation. You're not sure when the journey will end, and, in some cases, the end of the journey may contain one of the most unimaginable horrors you could conceive.

For Children's Miracle Network hospitals, 62 families begin this journey every minute. That means 4.5 times the population of the Huntsville-Decatur-Albertville Combined Statistical Area enters treatment every month.

I know budgets are tight right now. I know we are all feeling the pressures of reality. However, my reality is much better than the reality of the 3,000 families that started their journey in the time it took me to craft this blog post. I use my time to speak for these children. I spend hours upon hours preparing for this event. I've made press releases. I've created teams. I've secured a public venue. I've become a voice for those too weak to speak.

I've found room in my budget to chip in what I can. The savings from eating one meal at home, with your family, instead of eating out may not sound like much, but a community of people each making a small donation can make a huge difference.

I hope you can find five minutes to make a contribution to these families. Perhaps you can even find some time tomorrow to jump online and play a game with me. If you're local to the north Alabama area, you can come downtown to 125 Northside Square, Suite 200, to join me and my team for a game or two.

Don't do it for me, do it for the kids.

http://www.extra-life.org/participant/cgardner


Sunday, September 28, 2014 #

I woke up this morning with a very interesting tweet waiting for me. I was asked what the best path for Microsoft certification would be for a Visual Basic programmer. I was forced to reply with "that was not a 140 character answer." This post aims to offer guidance for that process.

Microsoft Learning removed the VB paths to certification with the .NET 4 MCPD. The current batch of tests requires a language test. You must pass either 70-480 to prove you are an HTML5 specialist or 70-483 to prove you are a C# specialist. Now, if you currently have an MCPD in .NET 4.0 for Windows development, you can skip the language specialist test if you update to MCSD: Windows Store Apps in C#.

This leads to the first issue to address. You must pass a language test before you can continue. To do this, you have to ask yourself where you are more comfortable. C# is closer to VB, but HTML is, arguably, an easier and more ubiquitous language. Unfortunately, I can't really suggest one path over the other. I passed both tests because of my high degree of familiarity with both languages. My instincts tell me that it may be easier to transition to C# from VB. You can prepare for the test by converting your existing code. Most of the .NET-ness will stay the same. This will allow you to focus on the different syntax.

If you don't have any sufficiently small projects on hand to try to convert, begin by creating a small, trivial app in VB. It doesn't have to be anything too fancy. Once you have created the app, convert it to C#. This will get you thinking about the language without worrying about the actual problem you're trying to solve. After you do this, go back to the original project and add a new little feature. Then, convert it.
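
As an illustration, here is a minimal sketch of the kind of conversion I mean. The class is a made-up example, not something from a real project; the VB original is shown in comments above the C# version:

// VB original (hypothetical example):
//
// Public Class Greeter
//     Public Property Name As String
//
//     Public Function Greet() As String
//         Return String.Format( "Hello, {0}!", Name )
//     End Function
// End Class

// The same class, converted to C#:
public class Greeter {
    public string Name { get; set; }

    public string Greet() {
        return string.Format( "Hello, {0}!", Name );
    }
}

Notice that the BCL call, String.Format, is identical in both versions; only the declaration syntax changes.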

At some point, add a few more features without starting in the VB project. You'll still be thinking in VB, but you'll get familiar with the syntax. Once you're comfortable with the basic syntax, you should be ready to start addressing the exam objectives. Just start playing with the features that are being tested on the exam. You can supplement this with some Microsoft Virtual Academy videos.

Once you pass the language test, you're golden. You are going to have to take the following tests in whatever language you chose for the specialist test. However, you can use a very similar approach to study for the remaining tests. The MSDN documentation generally covers every .NET language. You can begin by studying the feature using VB. As you start to understand the concept, you can apply it in whichever language you want. The important part is to understand the concept. Most .NET code will look familiar when crossing languages. Remember, all .NET is compiled into IL and executed by the CLR. It is in the best interest of the language designer to keep everything as close to the target as possible.

The path to certification from VB is surely not the most direct. However, if you address this difference, you can learn a new language and advance your VB skills at the same time.


Wednesday, August 20, 2014 #

Processing Kinect v2 Color Streams in Parallel

I've really been enjoying being a part of the Kinect for Windows Developer's Preview. The new hardware has some really impressive capabilities. However, with great power comes great system specs. Unfortunately, my little laptop that could is not 100% up to the task; I've had to get a little creative.

The most disappointing thing I've run into is that I can't always cleanly display the color camera stream in managed code. I managed to strip the code down to what I believe is the bare minimum:

using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) {
    if( null == _ColorFrame ) return;

    BitmapToDisplay.Lock();
    _ColorFrame.CopyConvertedFrameDataToIntPtr(
        BitmapToDisplay.BackBuffer,
        Convert.ToUInt32( BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight ),
        ColorImageFormat.Bgra );
    BitmapToDisplay.AddDirtyRect(
        new Int32Rect(
            0,
            0,
            _ColorFrame.FrameDescription.Width,
            _ColorFrame.FrameDescription.Height ) );
    BitmapToDisplay.Unlock();
}
With this snippet, I'm placing the converted Bgra32 color stream directly on the BackBuffer of the WriteableBitmap. This gives me pretty smooth playback, but I still get the occasional freeze for half a second.

After a bit of profiling, I discovered there were a few problems. The first problem is the size of the buffer, along with the conversion performed on it. At this time, the raw image format of the data from the Kinect is Yuy2. This is great for direct video processing. It would be ideal if I had a WriteableVideo object in WPF. However, this is not the case.

Further digging led me to the real problem. It appears that the SDK is converting the input serially. Let's think about this for a second. The color camera is a 1080p camera. As we should all know, this gives us a native resolution of 1920 x 1080. This produces 2,073,600 pixels. Yuy2 uses 4 bytes per 2 pixels, for a buffer size of 4,147,200 bytes. Bgra32 uses 4 bytes per pixel, for a buffer size of 8,294,400 bytes. The SDK appears to be doing this on one thread.
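
If you want to sanity check those numbers, the math is simple enough to drop into a scratch project. This is just the arithmetic from the paragraph above, not anything pulled from the SDK:

const int Width = 1920;
const int Height = 1080;

int _PixelCount = Width * Height;        // 2,073,600 pixels
int _Yuy2Bytes = _PixelCount * 4 / 2;    // 4 bytes per 2 pixels = 4,147,200 bytes
int _Bgra32Bytes = _PixelCount * 4;      // 4 bytes per pixel    = 8,294,400 bytes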

I started wondering if I could do this better myself. I mean, I have 8 cores in my system. Why can't I use them all?

The first problem is converting a Yuy2 frame into a Bgra32 frame. It is NOT trivial. I spent a day of research on just how to do this. In the end, I didn't even produce the best algorithm possible, but it did work.

After I managed to get that to work, I knew my next step was to get the conversion operation off the UI Thread. This was a simple process of throwing the work into a Task. Of course, this meant I had to marshal the final write to the WriteableBitmap back to the UI thread.

Finally, I needed to vectorize the operation so I could run it safely in parallel. This was, mercifully, not quite as hard as I thought it would be. I had my loop return an index to a pair of pixels. From there, I had to tell the loop to do everything for this pair of pixels. If you're wondering why I did it for pairs of pixels, look back above at the specification for the Yuy2 format. I won't go into full detail on why each 4 bytes contains 2 pixels of information, but rest assured that there is a reason why the format is described in that way.

The first working attempt at this algorithm successfully turned my poor laptop into a space heater. I very quickly brought all 8 cores up to about 97% usage and kept them there. That's when I remembered that obscure option in the Task Parallel Library that lets you limit the amount of parallelism used. After a little trial and error, I discovered 4 parallel tasks was enough for most cases. This yielded the following code:

private byte ClipToByte( int p_ValueToClip ) {
    return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue : ( ( p_ValueToClip > byte.MaxValue ) ? byte.MaxValue : p_ValueToClip ) );
}

private void ColorFrameArrived( object sender, ColorFrameArrivedEventArgs e ) {
    if( null == e.FrameReference ) return;

    // If you do not dispose of the frame, you never get another one...
    using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) {
        if( null == _ColorFrame ) return;

        byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel];
        byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight];
        _ColorFrame.CopyRawFrameDataToArray( _InputImage );

        Task.Factory.StartNew( () => {
            ParallelOptions _ParallelOptions = new ParallelOptions();
            _ParallelOptions.MaxDegreeOfParallelism = 4;

            Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => {
                int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16;
                int _U = _InputImage[( _Index << 2 ) + 1] - 128;
                int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16;
                int _V = _InputImage[( _Index << 2 ) + 3] - 128;

                byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 );
                byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 );
                byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );

                _OutputImage[( _Index << 3 ) + 0] = _B;
                _OutputImage[( _Index << 3 ) + 1] = _G;
                _OutputImage[( _Index << 3 ) + 2] = _R;
                _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A

                _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 );
                _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 );
                _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );

                _OutputImage[( _Index << 3 ) + 4] = _B;
                _OutputImage[( _Index << 3 ) + 5] = _G;
                _OutputImage[( _Index << 3 ) + 6] = _R;
                _OutputImage[( _Index << 3 ) + 7] = 0xFF;
            } );

            Application.Current.Dispatcher.Invoke( () => {
                BitmapToDisplay.WritePixels(
                    new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ),
                    _OutputImage,
                    BitmapToDisplay.BackBufferStride,
                    0 );
            } );
        } );
    }
}
This seemed to yield the results I wanted, but there was still the occasional stutter. This led to what I realized was the second problem. There is a race condition between the UI Thread and me locking the WriteableBitmap so I can write the next frame. Again, I'm writing approximately 8MB to the back buffer.

Then, I started thinking I could cheat. The Kinect is running at 30 frames per second. The WPF UI Thread runs at 60 frames per second. This made me not feel bad about exploiting the Composition Thread. I moved the bulk of the code from the FrameArrived handler into CompositionTarget.Rendering. Once I was in there, I polled for a frame and rendered it if it existed. Since, in theory, I'm only killing the Composition Thread every other hit, I decided I was ok with this for cases where silky smooth video performance REALLY mattered. That code looked like this:

private byte ClipToByte( int p_ValueToClip ) {
    return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue : ( ( p_ValueToClip > byte.MaxValue ) ? byte.MaxValue : p_ValueToClip ) );
}

void CompositionTarget_Rendering( object sender, EventArgs e ) {
    using( ColorFrame _ColorFrame = FrameReader.AcquireLatestFrame() ) {
        if( null == _ColorFrame )
            return;

        byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel];
        byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight];
        _ColorFrame.CopyRawFrameDataToArray( _InputImage );

        ParallelOptions _ParallelOptions = new ParallelOptions();
        _ParallelOptions.MaxDegreeOfParallelism = 4;

        Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => {
            int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16;
            int _U = _InputImage[( _Index << 2 ) + 1] - 128;
            int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16;
            int _V = _InputImage[( _Index << 2 ) + 3] - 128;

            byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 );
            byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 );
            byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );

            _OutputImage[( _Index << 3 ) + 0] = _B;
            _OutputImage[( _Index << 3 ) + 1] = _G;
            _OutputImage[( _Index << 3 ) + 2] = _R;
            _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A

            _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 );
            _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 );
            _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );

            _OutputImage[( _Index << 3 ) + 4] = _B;
            _OutputImage[( _Index << 3 ) + 5] = _G;
            _OutputImage[( _Index << 3 ) + 6] = _R;
            _OutputImage[( _Index << 3 ) + 7] = 0xFF;
        } );

        BitmapToDisplay.WritePixels(
            new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ),
            _OutputImage,
            BitmapToDisplay.BackBufferStride,
            0 );
    }
}

Wednesday, May 28, 2014 #

As some of you may know, I recently accepted a position to teach an undergraduate course at my alma mater. Yesterday, I had my first day in an academic classroom. I immediately noticed a difference in how the students interact. They don't act like students in a professional training session or conference talk. I wanted to use this opportunity to enumerate some of those differences.

The immediate thing I noticed was the lack of an open environment. This is not to say the class was hostile towards me. I am used to entering the room, bantering with the audience, loosening everyone up a bit, and flowing into the discussion.

A purely academic audience does not banter. At least, they do not banter on day one.

I think I can attribute this to two factors. The first is a greater perception of authority. In a training or conference environment, I am an equal with the audience. This is true even when I am acting as a subject matter expert. We're all professionals. We're all there to learn from each other, share our stories, and enjoy the journey. In the academic classroom, there was a distinct class difference. I had forgotten about this distinction; I had gained a professional familiarity with the staff by the time I completed my master's.

This leads to the other distinction. There was an expectation of performance. At conferences and professional training, there is generally no (immediate) grading. The class may be preparation for a certification exam, but I'm not the one responsible for delivering the exam. This was not the case in the academic classroom. These students are battling for points, and I am the sole arbiter. They are less likely to let the material wash over them, applying it to their past experiences. They were heads down, taking notes.

I don't want to leave the impression that there was no interaction in the classroom. I spent a good deal of time doing problems with the class on the whiteboard. I tried to get the class to help me work out the steps. This opened up a few of them.

After every conference or training class, I always get a few people that will email me afterward to continue the conversation. I am very curious to see if anybody comes to my office hours tomorrow.

However, that is a curiosity that will have to wait until tomorrow.


Friday, May 23, 2014 #

Now that TechEd has come and gone, I thought I would use this opportunity to do a little post-mortem on The Krewe app. It is one thing to test the app at home. It is a completely different animal to see how it responds in the environment TechEd creates.

At a future time, I will list all the things that I would like to change with the app. At that point, I will find a good way to get community feedback.

I want to break all this down screen by screen. We'll start with the screens I got right. The first of these is the events calendar. This is the one screen that, to you guys, just worked. However, there was an issue here. When I wrote v1 for last year, I was lazy and placed everything in CST. This caused problems with the achievements, which I will explain later. Furthermore, the event locations were not check-in locations. This created another problem with the achievements.

Next, we get to the Twitter page. For what this page does, it works great. For those that don't know, I have an Azure Worker Role that polls Twitter pretty close to the rate limit. I cache those results in my database and serve them upon request. This gives me great control over the content. I just have to remember to flush past tweets after a period, to limit database growth.
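
The actual worker role code isn't part of this post, but the general shape of it is roughly the following minimal sketch. RoleEntryPoint is the standard Azure worker role base class; PollTwitter, CacheTweets, and PurgeOldTweets are hypothetical helpers standing in for the real implementation, and the polling interval is a guess:

public class TwitterPollerRole : RoleEntryPoint {
    public override void Run() {
        while( true ) {
            // Poll close to the rate limit, cache the results for the app to read,
            // and trim old tweets to keep database growth in check.
            var _Tweets = PollTwitter();
            CacheTweets( _Tweets );
            PurgeOldTweets();

            Thread.Sleep( TimeSpan.FromSeconds( 60 ) );
        }
    }
}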

The next screen is the check-in screen. This screen has been the bane of my existence since I first created the thing. Last year, I used a background task to check people out of locations after they traveled. This year, I removed the background task in favor of a Foursquare-style model: you are checked out after 3 hours or when you check in to some other location. This seemed to work well, until those pesky achievements came into the mix. Again, more on this later.
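
In code, that rule boils down to something like this minimal sketch. The CheckIn type and its property names are my own invention for illustration, not what is actually in the app:

private static readonly TimeSpan CheckInLifetime = TimeSpan.FromHours( 3 );

private bool IsStillCheckedIn( CheckIn p_CheckIn, DateTime p_UtcNow ) {
    // A check-in expires after 3 hours, or as soon as the user checks in somewhere else.
    if( p_UtcNow - p_CheckIn.CheckedInAtUtc > CheckInLifetime ) return false;
    return !p_CheckIn.HasLaterCheckIn;
}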

Next, I want to address the Connect and Connections screens together. I wanted to use some of the capabilities of the phone, and NFC seemed a natural choice. From this, I came up with the gamification aspects of the app. Since we are, fundamentally, a networking organization, I wanted to encourage people to actually network. Users could make and share a profile, similar to a virtual business card.

I just had to figure out how to get people to use the feature. Why not just give someone a business card?

Thus, the achievements were born. This was such a good idea. It would have been a great idea, if I had come up with it about two months earlier...

When I came up with these ideas, I had about 2 weeks to implement them. Version 1 of the app was, basically, a pure consumption app. We provided data and centralized it. With version 2, the app became a much more interactive experience. The API was not ready for this change in such a short period of time.

Most of this became apparent when I started implementing the achievements. The achievements based on counts and specific people were fairly easy. The problem came with tying them to locations and events. This took some true SQL kung fu. This also showed me the rookie mistake of putting CST, not UTC, in the database.

Once I got all of that cleaned up, I had to find a way to get the achievement system to talk to the phone. I knew I needed to be able to dynamically add achievements. I wouldn't know the precise location of some things until I got to Houston. I wanted the server to approve the achievements. This, unfortunately, required a decent data connection. Some achievements required GPS levels of location accuracy in areas that only offered network triangulation.

All of this became a huge nightmare. My flagship feature was based on some silly assumptions. Still, I managed to get 31 people to earn the first achievement (Make 1 Connection). Quite a few of those managed to get to the higher levels.

Soon, I will post a list of the features and changes that need to happen to the API. This includes things like proper objects for communication, geo-fencing, and caching. However, that is for another day.


Tuesday, May 20, 2014 #

It appears my good buddies in The Krewe have created The Krewe Summer Blogging Challenge. The challenge is to write at least two blog posts a week for 12 weeks over the summer.

Consider this challenge accepted.

So, what can we expect coming up?

  • I still have the Kinect v2 Alpha kit. Some of you may have seen me use it in talks.
  • I need to make some major API changes in The Krewe WP8 App. Plus, I may have Xamarin on board to help with getting the app to the other platforms.
  • I am determined to learn F#, and I'm taking all of you with me.
  • I am teaching a college course this summer. I want to post some commentary on that side of training.
  • I am sure some biometric stuff will come up.
  • Anything else you guys may want.

I have created tasks on my schedule to get a new blog post up no later than every Tuesday and Friday. We'll see how that goes.

Wish me luck.


Tuesday, May 6, 2014 #

I recently published version 2.0 of The Krewe Windows Phone app. The app is meant to facilitate social interactions at conferences, primarily TechEd North America.

Version 1 of the app, published for last year, was primarily meant as a consumption app. We provided a list of events, a view of a Twitter feed, and the ability to view where other users are located. The location view used a "check-in" mechanism, similar to Foursquare, where people could check into a location. The location would then display the number of people currently checked into it.

As with any app, version 1 had a few issues. However, the app was functional and, overall, well received. Most of the issues were addressed with the version 1.1 and 1.2 maintenance releases.

Slightly before 1.2 was published in mid-April, I came up with the idea to gamify the app. The entire essence of The Krewe is networking and community. I wanted to find a way to embrace and encourage these principles. To this end, I wanted to give people the opportunity to easily swap information.

Currently, the app targets Windows Phone 8(.1). With this current requirement, I knew that almost every device would have NFC capabilities. If our members are connecting, why don't we allow them to REALLY connect?

This gave me the idea to allow users the option to create a personal profile. This profile included a display name, email address, and generic job role. There were also optional fields for a Twitter handle and a personal message.

Once you have created your profile, you can use NFC to swap your profile with other members. Once you have their profile, I add the date, time, and location you connected. Any time you view their profile, you can see the last place they checked in. This can help you connect with people at a later time.
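
A rough sketch of the data involved looks something like the following. These property names are my own guesses for illustration, not the app's actual model:

public class MemberProfile {
    public string DisplayName { get; set; }
    public string EmailAddress { get; set; }
    public string JobRole { get; set; }
    public string TwitterHandle { get; set; }    // optional
    public string PersonalMessage { get; set; }  // optional
}

public class Connection {
    public MemberProfile Profile { get; set; }
    public DateTime ConnectedAtUtc { get; set; }     // when the profiles were swapped
    public string ConnectedLocation { get; set; }    // where the connection was made
    public string LastCheckInLocation { get; set; }  // surfaced when you view the profile later
}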

Now that we have this interactive behavior, I wanted to add some fun to the process. This is where the achievements come into play. The achievements come in 5 basic categories. The first category is "other." This category has no relation to connections. As such, I will stop talking about them.

The easiest set of achievements to acquire are the connection count achievements. As you connect with people, I check the length of your list of connections. As you reach certain thresholds, you will unlock the achievement.

The next set of achievements to acquire are tied to locations and events. Anytime you are at a location or event, you need to check in to the venue. If you are checked into the venue and make a connection, the achievement will unlock. For locations, you just have to be checked into the location. For events, you have to be checked into the location during the event.
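
Conceptually, the check is something like the following sketch. Again, the types and names are hypothetical, and the real implementation lives in SQL on the server:

private bool QualifiesForLocationAchievement( CheckIn p_CheckIn, Achievement p_Achievement ) {
    // Location achievements only require being checked into the right venue.
    return p_CheckIn.LocationId == p_Achievement.LocationId;
}

private bool QualifiesForEventAchievement( CheckIn p_CheckIn, Achievement p_Achievement, DateTime p_ConnectedAtUtc ) {
    // Event achievements also require the connection to happen during the event window.
    return p_CheckIn.LocationId == p_Achievement.LocationId
        && p_ConnectedAtUtc >= p_Achievement.EventStartUtc
        && p_ConnectedAtUtc <= p_Achievement.EventEndUtc;
}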

If you already have the app, you have not yet seen any of the location achievements. I will create these on Saturday when I get to Houston. These are tied to areas in the convention center. Since I don't yet know the layout of these locations, such as the Hands-On Lab or TLG, I can't create the achievements.

The final set of achievements are tied to specific people. All you have to do is find the person and connect with them. However, if an achievement is tied to a person, there is generally an ulterior motive. For example, connecting with me will get you an achievement. When you do this, I will ask for feedback on the app.

Some of you may be wondering how I will add these achievements. All the achievements are in the Azure database. The list of achievements is synced at app startup. As such, I can add or modify any achievements on the server.

There are a few quirks that I should mention. Unlocking achievements only happens when you make a connection. The app will request an unlock when it detects you may qualify. The final decision goes to the server. If some piece of information is missing from the server, such as when you did not have a data connection while making a previous connection, the server will not see the correct count. I'm trying to find a balance between the amount of data sent and keeping things up to date. There may be a maintenance release between now and TechEd to ensure you get everything.

Finally, you may be asking why you should bother. We are working hard, without sponsors, to get some special swag for people. I do not have the full details on this at this time. All I know is that we want to reward the people that embrace networking and community.

If you have any questions about the app that I have not addressed here, feel free to drop me a line. I created the app to help people get the most out of events. The app is going to grow over time. We are going to help facilitate more events. We are (eventually) going to expand to other devices.

Over the next couple of months, I am going to be making changes to the app and API to facilitate these changes. As I do this, I am going to keep updates posted here on the blog. One of the first major updates will be entirely to the API. As I mentioned before, the original app was created for consumption. As the app became more interactive, the previous API design showed its flaws.

After the API is fixed, I need to update the UI for the app. The pivot control used for the original app was great for the 3 views that were used. As the app expanded, the pivot control became cumbersome to use. This is true for me both as a developer and as a user. I, too, feel your pain over the number of swipes it takes to get to some of the information.

Finally, the great port to other devices will begin. This part both excites and frightens me. I know the Windows Phone SDK well. However, I have never owned an iPhone. I haven't owned an Android device since Android 2.3.

This project started as a way for me to help the community. I have worked hard, on my free time, to provide an app that is completely free for people. My company, T&W Operations, Inc., has graciously allowed me to host the cloud services on our Azure account. I am determined to keep the app free of advertisements and clutter. However, things like licenses to Xamarin and publisher accounts for app stores cost money. Please be patient with some of the great ideas that will cost me money. I promise I will get to all of these ideas as the means become available.

Again, I hope everyone enjoys the updated app. I look forward to meeting all of you next week in Houston.


Wednesday, April 16, 2014 #

I, unfortunately and unwittingly, started a minor Twitter flame war earlier today. Of course, there is only one way that could turn out.

The subject was on the importance of JavaScript. The original tweet was:

“By 2017, JavaScript will be the most in-demand language skill in application development (AD).”

— Forrester Research, 2014

I will leave out who provided the tweet. Yes, I will forward them a link to my article. If they post a reply, I will update this post with a link.

Before I begin, let me say a few things. First, I did not read the referenced article. This is because a) I couldn't find it, and b) every report on Forrester's site seems to cost 499 USD. Second, I do actually like JavaScript and have been using it for a VERY long time. Finally, as longtime readers will know, I am a proponent of learning your language, not your extension framework.

My response to the comment was "if this comes true, we have failed our customers and users..." This spawned an epic, 3 hour conversation thread that I'm not going to fully recount. I will make sure all of their points are addressed.

My response was fueled by two primary things. The first was a couple "interactions" I've had with Douglas Crockford. The most important of these was the DevLink 2012 closing panel. During this panel, Crockford replied to a question about the longevity of JavaScript with the following response:

God, I hope not. If it turns out that JavaScript is the last programming language, that would be really sad. But unfortunately, because of its dominance in the web, it is now moving into virtually every place else. It has become a tragically important language, and we’re going to be stuck with it for a time...

This has mirrored, and still does mirror, my feelings. This took place at about the same time WinJS was starting to come into fashion. At that time, I had some informal numbers telling me that WinJS was not as prevalent as Microsoft was letting on. Unfortunately, I do not have current numbers, as Microsoft doesn't publish this type of information.

That leads to my second point. I do NOT like the fact that the quote says "application development." Yes, web apps are apps. However, I am afraid that is not the intent of the statement. I fear that this statement is referring to JavaScript as a golden hammer. There are cases where JavaScript is the right tool for the job. However, JavaScript is not the only tool for the job.

I think back to the early 2000s. Perl was ALL the rage. It was used for everything, including webpages. The problem was that Perl was originally written for a specific purpose. It was written to be a VERY efficient string parser to enable Linux admins to automate command line tasks that involved scrubbing log files. To this day, it's still widely in use doing just that.

Soon, however, people said "PERL ALL THE THINGS!" We began to see webpages that were Perl scripts constructing a page in real-time. We began to see command-line apps that were Perl scripts that just automated calls to other command-line apps.

In the long run, Perl had this dirty little secret. It was made to be very easy to use, to be marginally forgiving, and to offer great flexibility. Larry Wall, the father of Perl, described a good programmer as being lazy. In the end, this led to bad habits, unmaintainable code bases, and large amounts of effort (and money) being spent to go back and do it right.

If, when reading that last paragraph, you didn't mentally swap Perl for JavaScript, go back and try again.

During all this talk, people professed the joy of all things JavaScript and how I was categorically WRONG. The biggest, recurring theme was "but what about node.js?" Let me sum this up in 3 parts. First, it says .js. Thus, it's just JavaScript on the client. Second, the server side is a WHOLE bunch of C++. Third, it's not like it has a dependency on OpenSSL or anything... (Too soon?)

As a bonus fourth, this argument would have been about Ember last year. Knockout the year before that. jQuery the year before that... If my timeline is wrong on the previous next-big-things, that's because I never used any of them. I wrote JavaScript, using the language as it was defined in the ECMAScript spec, interacting with the DOM in the manner outlined in the DOM spec, writing carefully engineered client code to do only what I needed without fragile, outside dependencies.

Guess what: that code has generally worked on all browsers, of all types, without side effects. I say generally because you do have to add CSS into the mix, and no browser implements it completely correctly.

Given all that, I would never dream of writing an application in JavaScript. I consider it a tool that I can exploit in the correct circumstances. People say, “but, it’s ubiquitous. Why not use it?” Of course, their bosses said that two decades ago about Java.

Technology changes faster than we realize. I can’t tell you what language I will be using in 3 years. I can’t tell you what languages will still be around. People still make a fortune maintaining COBOL. What I can tell you is that technology is run by neither merit nor artistry. There is a little of both, but something else at play. Once the true engineers get a hold of JavaScript and make people do it right, the lemmings will find the next easiest path.

My only hope is that this happens sooner, rather than later.


Thursday, April 10, 2014 #

As I was working on updates to The Krewe Windows Phone App, I ran into a very aggravating situation. I needed to blow away my Azure deployment project (more on that at a later date) and recreate it. When I did, the new project was named the ever-so-helpful "TheKreweAPI.Azure2". I know, pure poetry...

After this happened, I took the time to rename everything to something useful. I manually edited solution and project files. I scrubbed output directories. I made everything look pretty.

In my quest for glory, I removed something I didn't mean to remove. When the project was created, VS12 automatically added the project to source control. I had the little + next to the project name. After I renamed everything, my little + went away.

That made me a sad panda.

After a full day of shouting obscenities, I finally fixed it. Here is what I had to do:

  • Undo the "add" changes
  • Remove the project from the solution
  • Rename the original directory
  • Manually create the directory in the Source Control Explorer
  • Copy the files into the new, correctly named directory
  • Re-add the project to the solution
  • Add the files to source control
  • Check everything in
  • Party like it's 1999
I don't know if you realized this, but that's a LOT of steps for a simple rename. Hopefully, this will keep you from the same spew of obscenities that I was forced to use.


Monday, April 7, 2014 #

Greetings. I have returned from the brink of madness. Of course, by madness, I mean academia...

It's time to kickstart this thing back into full swing. I have tasked myself to keep this thing updated on a semi-regular basis. What does semi-regular mean? Well, I'm hoping to hit at least one post about every 2 weeks.

Of course, the trick to this is that my role at work has significantly changed. I'm still coding and doing research and development. However, I am now in a much larger supervisory role. As such, topics may expand into these areas. Of course, I have always posted some soft skills topics. Those just seem to be the ones that nobody reads.

Anywho, it feels good to be posting something into a public forum again. I hope to hear, read, or see you all soon.