Most New Landing Pages Look the Same
I’ve been hitting the web more than usual the past few months as part of R&D
for various needs. I’ve come to realize that the winds of web fashion have taken
a considerable turn in the past few years. It feels almost cyclical, in my opinion.
Take a look at these examples:
It doesn’t take too long to start identifying some new patterns.
General Page Layout
I hit these pages and I feel like I’m at the impulse buy section
of the grocery store. So clearly, it works!
Big Poofy Text
Crank up the font size on your home screen if you want to match
today’s trends. And you’re bound to get bullied in the website owners’ locker room
if you’re caught using a font with serifs.
Fat ‘n Fluffy Buttons
Thanks to ubiquitous libraries like jQuery UI,
those basic, blocky HTML buttons are a thing of the past! The buttons
keep getting larger and larger, and they give a satisfying *fwump!*
feeling when you click them.
It’s not just the frames surrounding our content anymore; you may notice that even
the textboxes these days have clean rounded corners.
Punchy 3D Effects
Flat layouts are falling out of favor. Shadows and other tricks
to make the webpage elements pop out are becoming more popular.
Out with the New, In with the Old
I find these trends intriguing. In my opinion, it’s a phase we’ve hit after our
rebellion against our awkward GeoCities adolescence. To clarify,
our evolution followed these stages:
- Early Web Man – We created shamelessly crude sites with childish
fonts (Comic Sans), obnoxious
repeating star backgrounds, marquees, MIDI players, frames, and big open spaces.
- Rebellion – Embarrassed by the tactlessness of his elders, Web Man
created webpage layouts with tightly-wound text, no borders, minuscule fonts, and
frightfully small radio buttons to click on.
- Renaissance – Web Man today.
Page layouts have expanded, padding and margins have returned, and font sizes are increasing
again. This is all a good thing, in my opinion. Remember how our webpages had tedious,
pointless blurbs just to fill space? “Welcome to the GeneralTech home page! On
the left you’ll see links to click on! If you click on them, you’ll go to other pages!”
So, it looks like the statute of limitations on our shameful early web days has finally
expired, and we can begin to harvest some of the more natural ideas that sprang from
them. Comic Sans, however, had better stay dead.
Johnny, the Endangered Keyboard-Driven Windows User
Some of my proudest, obscure Windows tricks are losing their relevance. I know I’m not alone.
Keyboard shortcuts are going the way of the dodo. I used to induce fearful awe by slapping Ctrl+Shift+Esc in front of lowly, pedestrian Windows users. No Windows key on the keyboard? No problem: Ctrl+Esc. No menu key on the keyboard? Shift+F10. I am also firmly planted in the habit of closing windows with the Alt+Space menu (Alt+Space, C), and I harbor a brooding, slow-growing list of programs that fail to support this correctly (that means you, Paint.NET).
Every time a new version of Windows comes out, the support for some of these minor time-saving habits gets pared away. Will I complain publicly? Nope; I know my old ways should be axed to conserve precious design energy. In fact, I disapprove of fierce unintuitiveness for the sake of alleged productivity. Take vim, for example: if you approach a program after being away for five years, having to recall encyclopedic knowledge is a flaw. The RTFM disciples have lost.
Anyway, some of the items in my arsenal of goofy time-saving tricks are still relevant today. I wanted to draw attention to one that’s stood the test of time.
Remember Batch Files?
Yes, it’s true, batch files are fading faster than the world of print. But they're not dead yet.
I still run into some situations where I opt to use batch files. They are still relevant for build processes or various development workflow tools. Sure, there’s PowerShell, but there’s that stupid Set-ExecutionPolicy speed bump standing in your way. Can you really spare the time to (a) hunt down that setting on all affected machines, and/or (b) make futile efforts to convince your coworkers/boss that the hassle was worth it?
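For the record, the speed bump in question is a one-time command like the following, run from an elevated PowerShell prompt (RemoteSigned is one common policy choice, not the only one):

```powershell
# Allow locally-authored scripts to run; downloaded scripts must be signed
Set-ExecutionPolicy RemoteSigned
```

Simple enough for one machine; it's the "on every affected machine" part that stings.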
When possible, I prefer the batch file wild card. And whenever I return to batch files, I end up researching some of the unintuitive aspects such as parameters, quote handling, and ERRORLEVEL. But I never have to remember to use “REM” for comment lines, because there’s a cleaner way to do them!
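As a refresher on those unintuitive aspects, here is a small sketch from memory (the file name and search string are made up for illustration; double-check the specifics before relying on them):

```batch
@ECHO OFF
REM %1 is the first parameter as passed; %~1 strips any surrounding quotes
ECHO First parameter, unquoted: %~1

REM ERRORLEVEL holds the exit code of the last command
FIND "needle" "%~1" > NUL
IF ERRORLEVEL 1 ECHO FIND exited with code 1 or higher (string not found)
```

Note that IF ERRORLEVEL 1 means "1 or greater", another of those details I re-learn every time.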
Double Colon For Eye-Friendly Comments
Here is a very simple batch file, with pretty much minimal content:
REM This is a comment
ECHO This batch file doesn’t do much
If you code on a daily basis, this may be more suitable to your eyes:
:: This is a comment
ECHO This batch file doesn’t do much
Works great! I imagine I find it preferable due to its similarity to comment markers in other languages: //, ;, or #
I often make visual pseudo-line-breaks in my code, and this colon-based syntax works wonders:
:: Do stuff
ECHO Doing Stuff
:: Do more stuff
ECHO This batch file doesn’t do much
Not only is it more readable, but there’s a slight performance benefit. The batch file engine sees this as an invalid line label and immediately reads the following line. Use that fact to your advantage if this trick leads you into heated nerd debate.
Two Pitfalls to Avoid
Be aware that there are a couple of situations where this hack will fail you. It most likely won’t be a problem unless you’re getting really sophisticated with your batch files.
Pitfall #1: Inline comments
IF EXIST C:\SomeFile.txt GOTO END ::This will fail
Unfortunately, this fails. You can only have whitespace to the left of your comments.
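One workaround I’ve seen for inline comments (described on Rob van der Woude’s site, if memory serves) is to chain a REM onto the command with the & operator; the REM simply runs as a harmless second command:

```batch
COPY C:\SomeFile.txt D:\Backup\ & REM Inline comments work when chained with &
```

It’s not as pretty as the double colon, but it keeps the comment on the same line.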
Pitfall #2: Code Blocks
IF EXIST C:\SomeFile.txt (
    :: This will fail
    ECHO Found the file
)
Code blocks, such as IF statements and FOR loops, cannot contain these comments, because the entire code block is processed as a single line.
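If you need comments inside a code block, my understanding is that plain REM is still safe there; only the double-colon flavor chokes (with the caveat that the comment text itself shouldn’t contain unbalanced parentheses):

```batch
IF EXIST C:\SomeFile.txt (
    REM This comment is fine inside a block
    ECHO Found the file
)
```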
I originally learned this from Rob van der Woude’s site. He goes into more depth about the behavior of the pitfalls as well, if you are interested in further details.
I hope this trick earns you serious geek rep!
From a consumption point of view, tech blogging is a great resource for one-off articles on niche subjects. If you spend any time reading tech blogs, you may find yourself running into several common, useless types of posts that tech bloggers slip into. Some of these lame posts may just be natural results of common nerd psychology, and others are probably due to lame, lemming-like laziness.
I’m sure I’ll do my fair share of fitting the mold, but I quickly get bored when I happen upon posts that hit these patterns without any real purpose or personal touches.
1. The Content Regurgitation Posts
This is a common pattern fueled by the starving panhandlers in the web traffic economy. These are posts that are terse opinions on, or addendums to, an existing post. They commonly involve huge block quotes from the linked article, which almost always make up over 50% of the post itself.
I’ve accidentally landed on these posts when I was only interested in the source material. Web links degrade, too; if the source link is broken, then, well, I’m pretty steamed.
I see this occur with simple opinions on technologies, Stack Overflow solutions, or various tech news, like posts from Microsoft. It’s not uncommon to go to the linked article and see the author announce that he “added a blog post” as a response to or summary of the topic.
This is just rude, but those who do it are probably aware of this. It’s a matter of winning that sweet, juicy web traffic. I doubt this leeching is fooling anybody these days.
I would like to rally human dignity and urge people to avoid these types of posts, and just leave a comment on the source material.
2. The “Sorry I Haven’t Posted In A While” Posts
This one is far too common. You’ll most likely see this quote somewhere in the body of the offending post:
I have been really busy.
If the poster is especially guilt-ridden, you’ll see a few volleys of excuses. Here are some common reasons I’ve seen, which I’ll list from least to most painfully awkward.
- Out of town
- Vague allusions to personal health problems (these typically include phrases like “sick”, “treatment”, and “all better now!”)
- “Personal issues” (which I usually read as “divorce”)
- Graphic or specific personal health problems (maximum awkwardness potential is achieved if you see links to charity fund websites)
I can’t help but try to over-analyze why this occurs. Personally, I see it as an amalgamation of three plain factors:
- Life happens
- We nerds are duty-driven, and driven to guilt by personal inefficiencies
- Tech blogs can become personal journals
I don’t think we can do much about the first two, but on the third I think we could certainly contain our urges. I’m a pretty boring guy and, whether I like it or not, I have an unspoken duty to protect the world from hearing about my unremarkable existence. Nobody cares what kind of sandwich I’m eating. Similarly, if I disappear for a while, it’s unlikely that anybody who happens upon my blog would care why.
Rest assured, if I stop posting for a while due to a vasectomy, you will be the first to know.
3. The “At A Conference”, or “Conference Review” Posts
I don’t know if I’m like everyone else on this one, but I have never managed to be interested in these posts. It even sounds like a good idea: if I can’t make it to a particular conference (like the KCDC this year), wouldn’t I be interested in a concentrated summary of events?
Apparently, no! Within this realm, I’ve never read a post by a blogger that held my interest. What really baffles me is that, for whatever reason, I am genuinely engaged and interested when talking to someone in person regarding the same topic.
I have noticed the same phenomenon when hearing about others’ vacations. If someone sends me an email about their vacation, I gloss over it and forget about it quickly. In contrast, if I’m speaking to that individual in person about their vacation, I’m actually interested.
I’m unsure why the written medium eradicates the intrigue. I was raised by a roaming pack of friendly wild video games, so that may be a factor.
4. The “Top X Number of Y’s That Z” Posts
I’ve seen this one crop up a lot more in the past few years. Here are some fabricated examples:
- 5 Easy Ways to Improve Your Code
- Top 7 Good Habits Programmers Learn From Experience
- The 8 Things to Consider When Giving Estimates
- Top 4 Lame Tech Blogging Posts
These are attention-grabbing headlines, and I’d assume they rack up hits. In fact, I enjoy a good number of these. But I’ve been drawn to articles like this only to find an endless list of identically formatted posts in the blog’s archive sidebar. Oftentimes these posts have overlapping topics, too.
These types of posts give the impression that the author prioritized and organized the points after comprehensively considering the topic. Did the author really weigh all the possibilities when identifying the “Top 4 Lame Tech Blogging Patterns”? Unfortunately, probably not. What a tool.
To reiterate, I still enjoy the format, but I feel it is abused. Nowadays, I’m pretty skeptical when approaching posts in this format. If these trends continue, my brain will filter these blog posts out just as effectively as it ignores the encroaching “do xxx with this one trick” advertisements.
To active blog readers, I hope my guide has saved you precious time by helping you identify lame blog posts at a glance. Save time and energy by skipping over the chaff of the internet!
And if you author a blog, perhaps my insight will help you to avoid the occasional urge to produce these needless filler posts.
For any of those in the Kansas City area, I recommend Coders For Charities as a great once-a-year event! It’s a weekend code-a-thon in which small, quickly-assembled teams of software engineers construct as much as possible for a charity in need. Contributors include anybody with relevant experience such as software developers and graphic designers. The projects seem to typically involve creating a website.
I personally was only available for about half of the total event, but I contributed to a new website for Truman Neurological Center: http://www.tnccommunity.com/
The goal of the project was a full-service website creation, including:
- Identifying technologies to use, depending on the need (our team identified WordPress as a viable solution)
- Registering for a web host fitting the requirements (in this case, free hosting donated from DiscountAsp.net)
- Theme selection and customization
- Website configuration and setup
- Assistance with content creation
- Retirement of previously-existing website
Achieving this in one weekend is quite the feat! Everyone did quite well to manage themselves and prioritize their time to get the most bang for their buck, which, of course, is typically the same mentality that makes us valuable in our day jobs.
There were some technical considerations we identified resulting from our selection of Wordpress, which I would recommend weighing if you find yourself involved in a similar project:
- WordPress updates are a large driving factor. These include updates to WordPress core, themes, and plugins. It is important to ensure that updates go smoothly for your new website owner.
- To update/customize a base theme, use child themes. Do not modify a theme directly, or your theme changes will be lost at update time.
- Use plugins sparingly. Beyond the obvious round-peg-square-hole technical considerations, some plugins are not updated frequently. A plugin abandoned by its author can be broken by a core WordPress update; it may still run on an upgraded core, but with no guarantee of functioning correctly. Be sure to consider these factors.
- Limit the technical know-how required of your new website owner for content updates. For example, the owner will have access to the full flexibility of the HTML editor, but that should be a last resort. The human-friendly WYSIWYG editor should be capable of handling the vast majority of changes. Try to hammer out your site CSS such that all DOM elements of a typical content update are automatically styled to the previously-identified preferences; do not require the user to specify CSS classes or style attributes.
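To illustrate the child-theme point above: a minimal child theme is just a folder containing a style.css whose header names the parent theme’s folder. The theme names below are hypothetical, and the @import approach reflects common WordPress practice of the time:

```css
/* wp-content/themes/twentyeleven-child/style.css */
/*
Theme Name: Twenty Eleven Child
Template:   twentyeleven
*/

/* Pull in the parent theme's styles, then override selectively below */
@import url("../twentyeleven/style.css");

h1 { font-size: 2.5em; }
```

Updates to the parent theme then leave the child theme’s overrides untouched.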
So not only did I get to help the good folks at Truman Neurological Center, but I also gained a stronger understanding of proper website development. And the free soda was the icing on the cake! I will definitely return for Coders For Charities 2013!
I got a compilation error in my ASP.NET MVC3 project that tested my sanity today. (As always, names are changed to protect the innocent)
The type or namespace name 'FishViewModel' does not exist in the namespace 'Company.Product.Application.Models' (are you missing an assembly reference?)
Sure looks easy! There must be something in the project referring to a FishViewModel.
The Confusing Part
The first thing I noticed was that the error was occurring in a folder clearly not in my project, and in files that I definitely had not created:
%SystemRoot%\Microsoft.NET\Framework\(versionNumber)\Temporary ASP.NET Files\
I also ascertained these facts, each of which made me more confused than the last:
- Rebuild and Clean had no effect.
- No controllers in the project ever returned a ViewResult using FishViewModel.
- No views in the project defined that they use FishViewModel.
- Searching across all files included in the project for “FishViewModel” provided no results.
- The build server did not report a problem.
The problem stemmed from a file that was not included in the project but still present on the file system.
(By the way, if you don’t know this trick already: there is a toolbar button in the Solution Explorer window, “Show All Files”, which allows you to see all files in the file system.)
In my situation, I was working on the mission-critical Fish view before abandoning the feature. Instead of deleting the file, I excluded it from the project.
However, this was a bad move: it caused the build failure, and to fix the error, the file had to be deleted.
By the way, this file was not in source control, so the build server did not have it. This explains why my build server did not report a problem for me.
So, what’s going on? This file isn’t even a part of the project, so why is it failing the build?
This is a behavior of ASP.NET dynamic compilation, the same process that occurs when deploying a web application: ASP.NET compiles the web application’s code. When this occurs on a production server, it has to do so without the .csproj file (which isn’t usually deployed, if you’ve taken the time to do a clean deployment). This process has only the file system available to identify what to compile.
So, back in the world of developing the webpage in Visual Studio on my developer box, I ran into this situation because the same process occurs there, even though I have more files on my machine than will actually get deployed.
I can’t help but think that this error could be attributed back to the real culprit file (Fish.cshtml, rather than the temporary files) with some work, but at least the error had enough information in it to narrow it down.
I had previously been accustomed to the idea that for C# projects, the .csproj file always “defines” the build behavior. This investigation has taught me that I’ll need to shift my thinking a bit and remember that the file system has the final say when it comes to web applications, even on the developer’s machine!
Occasionally I run into a problem that is difficult to reproduce, in that it takes a laborious amount of tinkering. Sometimes I’ve gone through the work of plugging away at the user interface, or manually moving the current statement around if statements. In these situations, I become very attached to my debugging session (no pun intended).
I am secure enough in my nerdiness to admit that there have been instances where I changed code for testing purposes and forgot to revert those changes before check-in time. Because of this, I’ve developed an aversion to making such changes to the code. I recently discovered one additional tool in my arsenal for futzing with running code without changing it.
Abuse a breakpoint condition!
You would usually use a condition to specify when the breakpoint should hit, but you can enter any valid code as the condition. In my case, I used this:
(addressState = "AL") == null
The null check was just there to ensure that the breakpoint would not actually stop in the debugger. You could use “!= null” if you still wanted the breakpoint to hit.
Ultimately this provides the same benefit you get from setting values in the Locals or Watch windows. Using this trick, you can “automate” those actions if you need to.
I ran into a visual issue using DirectX (in C# via SlimDX) in which the texture filtering was not aligning properly. This is difficult to describe in words, so read the full explanation below. The solution is pretty simple, so I decided to post about it in hopes that it will be of use to another DirectX neophyte in the future.
Like most normal people, in my spare time I work on an open-source emulator for the long-lost 3DO multiplayer video game system (see fourdo.com for all your 3DO emulation needs!).
The emulator’s user interface uses Windows Forms (WinForms), and the first revisions of the emulator were just doing a simple blit to the screen using ((Graphics)g).DrawImage. This was an unnecessary drain on the CPU, which is the key resource that makes emulators thrive! Thus, at some point I enabled DirectX rendering to draw the bitmaps to the screen instead, pushing that drawing operation onto the GPU’s shoulders. This was done using two triangles via a triangle strip, and a single texture for the screen’s image.
Worked great! However, when using nearest-point sampling (a.k.a. Nearest Neighbor, or “None”) for image stretching, there was an oddity down the diagonal / hypotenuse of the polygons. To clarify, I have some sample images.
[screenshot: a game character rendered with the lower half visibly shifted along the polygon seam]
What happened to this man? Well, nearest-point filtering has shifted the bottom portion of his body, causing a collapsed lung, paralysis, and slight indigestion.
This is due to floating-point inaccuracies in rasterization and texture mapping. Linear interpolation (smooth scaling) is not subject to these inaccuracies. Microsoft has a good explanation of the cause and pushes you in the right direction.
The most obvious solution is to just not use nearest-point sampling. Generally everything uses at least linear interpolation; nobody would play a game these days without it. However, to retro gaming goons like myself, these problems of olde bubble back to the surface. Sometimes we genuinely want those ugly sharp-edged pixels!
The previously-linked article suggests the following:
When you must use it, it is recommended that you offset texture coordinates slightly from the boundary positions to avoid artifacts.
To accomplish this, I first split my full-screen triangle strip into two literal triangles, and shifted the texture mapping in the second triangle. I was careful to make the shift amount less than what would shift my image by a full pixel. I decided to shift by about 10% of a pixel, and my native image is always 1024 x 512: 1 / 1024 / 10 ≈ 0.0001.
new TexturedVertex(new Vector3(-1.0f, 1.0f, 0.0f),
new Vector2(0.0f, 0.0f))
,new TexturedVertex(new Vector3( 1.0f,-1.0f, 0.0f),
new Vector2(maximumX/2, maximumY/2))
,new TexturedVertex(new Vector3(-1.0f,-1.0f, 0.0f),
new Vector2(0.0f, maximumY/2))
,new TexturedVertex(new Vector3( 1.0f,-1.0f, 0.0f),
new Vector2(maximumX/2 - .0001f, maximumY/2 + .0001f))
,new TexturedVertex(new Vector3(-1.0f, 1.0f, 0.0f),
new Vector2(0.0f - .0001f, 0 + .0001f))
,new TexturedVertex(new Vector3( 1.0f, 1.0f, 0.0f),
new Vector2(maximumX/2 - .0001f, 0 + .0001f))
This was enough to ensure that the texture mapping in the second polygon was not misaligned.
The key characters
You may very well run into the same issue I did if you are using the following technologies:
- Team Foundation Server (TFS) with Team Foundation Build (TFB) - in my case, 2010
- Visual Studio Unit Tests - also 2010, in my case
- Reliance on loading assemblies at runtime: Assembly.Load()
How might someone run into #3, you might ask? It's pretty common these days, with code teeming with reflection and other fancy run-time logic. In my case, it was Enterprise Library.
I was seeing an issue in which the code was successfully building and passing tests on all development machines, while building but failing tests on the server.
In the unit test logs, the server was claiming that it could not find "Microsoft.Practices.EnterpriseLibrary.Logging.dll".
Whenever dealing with assembly load failures, I like to enable more verbose logging on the issue. There is a handy trick of adding a value in the registry: under the key HKLM\Software\Microsoft\Fusion, add a DWORD value named EnableLog, set to 1. As a result, I found where the unit tests were searching for the missing assembly, and therefore where they were running:
C:\Builds\8\TheProject\ContinousBuildForWeb\TestResults\TFSBUILD01$_TFSBUILD01 2012-02-17 09_09_33_Any CPU_Debug\Out
In this directory, it was plain to see that the assembly in question (Microsoft.Practices.EnterpriseLibrary.Logging.dll) was missing.
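For reference, the registry value described above can be added from an elevated command prompt like so (remember to set it back to 0 afterward, since the extra logging isn't free):

```batch
reg add HKLM\Software\Microsoft\Fusion /v EnableLog /t REG_DWORD /d 1 /f
```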
Oddly enough, this assembly was added as a dependency in the unit test assembly's project, and CopyLocal was set to true. The missing assembly was correctly showing up in the bin\$(Configuration) directory on the build server. It seems that Team Foundation Build is not able to figure out that the unit tests depend on the missing assembly. The key to this oddity is that the assembly is a run-time dependency; I noticed that ILSpy is also unable to identify dependencies like these. I didn't figure this out on my own; other folks on the 'net have run into unit tests failing in Team Foundation Build due to missing assemblies: http://tempuri.org/tempuri.html
There are a couple of solutions I'm aware of, but I'm afraid they're both hacks.
Hack A:
One option is to force Team Foundation Build to copy these assemblies explicitly. One way to accomplish this is to make use of unit test settings. In these, you can define additional files and directories to deploy with your unit tests (look in the Deployment section of the configuration).
Hack B: (the one I chose)
Another way to ensure Team Foundation Build will copy these run-time dependencies is to turn them into hard, compile-time dependencies. And that's just what I did by adding a bogus unit test:
[TestMethod]
public void CopyAssemblyHackTest()
{
    Microsoft.Practices.EnterpriseLibrary.Logging.ContextItems nullItems = null;
}
Needless Philosophical Reflection
It would be easy to blame Team Foundation Build for this, which I ultimately do, of course. I'm unsure whether there would be a better way to handle this: given just the unit test assemblies, it genuinely may not be possible to determine these run-time dependencies. It could just copy everything out of the bin directories, but if someone is doing non-clean builds, this may copy more than necessary.