
My Automated NuGet Workflow

When we develop libraries (whether internal or public), it helps to be able to make changes rapidly and test them in a consuming application.


  • Set up the library with automatic versioning and a nuspec
    • Set the library assembly version to auto-increment build and revision
      • AssemblyInfo –> [assembly: AssemblyVersion("1.0.*")]
        • This auto-increments build and revision based on the time of the build
      • Major & Minor
        • Major should be changed when you have breaking changes
        • Minor should be changed once you have a solid new release
        • During development I don’t increment these
    • Create a nuspec, and version it with the code
      • nuspec - set version to <version>$version$</version>
      • This uses the assembly’s version, which is auto-incrementing
  • Make changes to code
  • Run automated build (ruby/rake)
    • run “rake nuget”
    • nuget task builds nuget package and copies it to a local nuget feed
      • I use an environment variable to point at this so I can change it on a machine level!
      • The nuget command below assumes a nuspec named Library.nuspec is checked in next to the csproj file
    • $projectSolution = 'src\\Library.sln'
      $nugetFeedPath = ENV["NuGetDevFeed"]

      msbuild :build => [:clean] do |msb|
        msb.properties :configuration => :Release
        msb.targets :Build
        msb.solution = $projectSolution
      end

      task :nuget => [:build] do
        sh "nuget pack src\\Library\\Library.csproj /OutputDirectory " + $nugetFeedPath
      end
  • Set up the local nuget feed as a nuget package source (this is only required once per machine)
  • Go to the consuming project
  • Update the package
    • Update-Package Library
    • or Install-Package
  • TLDR
    • change library code
    • run “rake nuget”
    • run “Update-Package Library” in the consuming application
    • build/test!
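For reference, the checked-in nuspec can be minimal. A sketch (the id, authors and description are placeholder values), where $version$ is replaced with the auto-incrementing assembly version at pack time:

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>Library</id>
    <version>$version$</version>
    <authors>Your Team</authors>
    <description>Shared library packaged from the local build.</description>
  </metadata>
</package>
```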

If you execute any part of this process manually, especially copying files, you will find developing the library a burden, you will dread it, and even worse, you will end up making changes downstream instead of updating the shared library for everyone’s sake.


  • Once you have a set of changes you want to release, consider versioning and increment the minor version if needed.
  • Pick the package out of your local feed and copy it to a public / shared feed!
    • I have a script for this; I can drop the package onto a batch file
    • Replace apikey with your nuget feed's apikey
    • Take out the confirm(s) if you don't want them
    • @ECHO off
      echo Upload %1?
      set /P anykey="Hit enter to continue "
      nuget push %1 apikey
      set /P anykey="Done "
  • Note: it helps to prune all the unnecessary versions created during testing from your local feed once you are done and ready to publish
  • TLDR
    • consider version number
    • run command to copy to public feed

Commit Review Questions

Note: in this article, when I refer to a commit, I mean the commit you plan to share with the rest of the team; if you have local commits that you plan to amend/combine, I am referring to the final result.

In time you will find these easier to do as you develop; however, all of these are valuable before checking in!  The pre-commit review is a nice time to polish what might have been several hours of intense work, during which these things were the last things on your mind!  If you are concerned about losing your work in the process of responding to these questions, first do a check-in and amend it as you go (assuming you are using a tool such as git that supports this), rolling the result into one nice commit for everyone else.
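With git, the check-in-and-amend approach looks roughly like this (the scratch repo, file names and messages are illustrative):

```shell
# Scratch repo purely for illustration
cd "$(mktemp -d)" && git init -q && git config user.email dev@example.com && git config user.name dev

# Commit early so the work is safe...
echo "first draft" > notes.txt
git add notes.txt && git commit -q -m "Add release notes"

# ...then fold review polish (spelling, formatting, naming) into the same commit
echo "polished draft" > notes.txt
git add notes.txt && git commit -q --amend --no-edit

git log --oneline   # still one tidy commit to share with the team
```

Mercurial users can do much the same with hg commit --amend.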

Did you review your commit, change by change, with a diff utility?

  • If not, this is a list of reasons why you might want to start!

Did you test your changes?

  • If the test is valuable enough to automate, did you automate it?
  • If it’s a manual testing scenario, did you at least try the basics manually?

Are the additions/changes formatted consistently with the rest of the project?

  • Lots of automated tools can help here.  Don’t try to manually format the code; that’s a waste of time, and as a human you will fail repeatedly.
  • Are these consistent: tabs versus spaces, indentation, spacing, braces, line breaks, etc?
  • ReSharper is a great example of a tool that can automate this for you (.NET)

Are naming conventions respected?

  • Did you accidentally use abbreviations without a good reason?
  • Does capitalization match the conventions in the project/language?

Are files partitioned?

  • Sometimes we add new code to existing files in a pinch; it’s a good idea to split these out if they don’t belong
  • i.e.: are new classes defined in new files, if that is something your project values?

Is there commented out code?

  • If you are removing an existing feature, get rid of it, that is why we have VCS
  • If it’s not done yet, then why are you checking it in?
    • Perhaps a stash commit (git)?

Did you leave debug or unnecessary changes?

Do you understand all of the changes?

Are there spelling mistakes?

  • Including your commit message!

Is your commit message concise?

Is there follow up work?

  • Are there tasks you didn’t write down that you need to follow up with?
  • Are readability or reorganization changes needed?
  • This might be amended into the final commit, or it might be future work that needs to be added to the backlog.

Are there other things your team values that you should review?

Programming doesn’t have to be Magic

Computer Locke

In the show LOST, the Swan Station had a button that “had to be pushed” every 100 minutes to avoid disaster.  Several characters in the show took it upon themselves to have faith and religiously push the button, resetting the clock and averting the unknown “disaster”.  There are striking similarities in this story to the code we write every day.  Here are some common ones that I encounter:

  • “I don’t know what it does but the application doesn’t work without it”
  • “I added that code because I saw it in other similar places, I didn’t understand it, but thought it was necessary.” (for consistency, or to make things “work”)
  • “An error message recommended it”
  • “I copied that code” (and didn’t look at what it was doing)
  • “It was suggested in a forum online and it fixed my problem so I left it”

In all of these cases we haven’t done our due diligence to understand what the code we are writing is actually doing.  In the rush to get things done it seems like we’re willing to push any button (add any line of code) just to get our desired result and move on.  All of the above explanations are common, and they are valid ways to work through a problem, but when we find a solution to a task we are working on (whether a bug or a feature), we should take a moment to reflect on what we don’t understand.  Remove what isn’t necessary; comprehend and simplify what is.

Why is it detrimental to commit code we don’t understand?

  • Perpetuates unnecessary code
    • If you copy code that isn’t necessary, someone else is more likely to do so, especially peers
  • Perpetuates tech debt
    • Adding unnecessary code leads to extra code that must be understood, maintained and eventually cleaned up in longer lived projects
    • Tech debt begets tech debt as other developers copy or use this code as guidelines in similar situations
  • Increases maintenance
    • How do we know the code is simplified if we don’t understand it?
  • Perpetuates a lack of ownership
    • Makes it seem ok to commit anything so long as it “gets the job done”
  • Perpetuates the notion that programming is magic
    • If we don’t take the time to understand every line of code we add, then we are contributing to the notion that it is simply enough to make the code work, regardless of how.


Don’t commit code that you don’t understand; take the time to understand it, simplify it, and then commit it!


How we’re going to use NuGet

This is a quick introduction to moving from our own internal assembly repository to using NuGet.  It’s terse for a reason; it’s just a note to self and those I work with:

  1. What we had
    1. One source of builds external to our code repository
      1. Didn’t check in external builds
      2. Used DVCS to share single feed
    2. Rake task to update and copy the latest builds to a local checkout (this location is excluded from VCS)
    3. Automatic updates if changed in central repository and version not incremented
      1. Caused problems when changes were breaking
      2. Sometimes people updated old versions, not realizing there were new versions
    4. Simple source of existing builds (easier to bring external libraries into a project)
  2. Why move to NuGet?
    1. So we don’t have to
      1. Maintain this tool
      2. Educate others to use it (easier to work with other teams)
      3. Maintain open source / 3rd party builds
    2. Easily add/remove/update external dependencies
      1. Keep up to date with open source / 3rd party builds (easier)
      2. Automatic versioning for every release (it reads the assembly version when building packages)
        1. Avoid automatic updates that break apps even for small changes
    3. Automatic dependency conflict resolution (if possible)
      1. We can update a base dependency with a non breaking change and not need to recompile all child dependencies of it.
    4. Multiple sources/feeds of packages
      1. Can segment internal feed versus customer specific feeds
      2. Decentralize feeds!
      3. Composite feeds, we can override what is in one feed with what is in another!
    5. Update checks whether a package is already up to date, instead of copying every time; this saves time on builds
    6. Distribute content as well as binaries
      1. Images
      2. Css
      3. Javascript
      4. etc
    7. Compressed builds (zip) will decrease the size of our internal feed(s)
    8. Easier to publish builds of our own packages to the open source community
    9. Simplified updating of all references to a dependency in one PowerShell command
  3. How
    1. Setup
      1. Links
      2. Get the 1.4 build of the NuGet command line (fixes multiple feeds for the CLI installer)
      3. Install NuGet Package Manager in Visual Studio (Extension Manager)
      4. Install Package Explorer (optional) – view & edit packages
      5. (optional) – “update all”, a feature added in 1.4, I believe
    2. Configuration
      1. Setup any local feeds via the Extension Manager in Visual Studio
    3. Rake tasks
      1. rake dep
        1. Updates dependencies if missing (so we don’t have to check them in)
        2. Run before builds
        3. Example:

          desc "Setup dependencies for nuget packages"
          task :dep do
              package_folder = File.expand_path('Packages')
              packages = FileList["**/packages.config"].map{|f| File.expand_path(f)}
              packages.each do |file|
                  sh %Q{nuget install #{file} /OutputDirectory #{package_folder}}
              end
          end

      2. rake nuget
        1. Used to build and deploy a package to a feed
        2. Use env variable to point to a dev feed source NuGetDevFeed
          1. All builds go here and can be copied to other official sources once verified
        3. Example:

          task :nuget => [:build] do
              sh "nuget pack src\\BclExtensionMethods\\BclExtensionMethods.csproj /OutputDirectory " + ENV['NuGetDevFeed']
          end

    4. Package Manager Console

      1. Use this to find, install, update and manage packages.  It works very rapidly to update references in a project as needed, and it helps modify config files for assembly bindings.

  4. Problems I encountered

    1. NuGet CLI is buggy with relative paths for package sources, use absolute paths

    2. Local feeds used to be able to segment different packages by folder, but the latest version doesn’t work with nested folders in a local (disk) feed.  So for now, just dump the output right in the same folder as all the other packages.

    3. The Reactive team decided to pull a newer version of Rx-Main because they were building experimental and stable branches under the same package name.  This caused a bit of a headache.  To fix it, I had to manually edit packages.config, remove the reference to Rx-Main, and reinstall the older version.

      1. Also, had to nuke the local copy of the Rx-Main newer version from the packages folder as it would check there first.

      2. In the future it might be better to just copy the local version to my own NuGet local feed but in this case I decided I didn’t want the experimental code so I rolled back versions.

      3. Might be nice if NuGet had support to downgrade versions.
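For reference, the packages.config files that the rake dep task scans for look roughly like this (the package ids and versions are illustrative):

```xml
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="BclExtensionMethods" version="1.0.4123.5678" />
  <package id="Rx-Main" version="1.0.10425" />
</packages>
```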

Refactoring Part II - Tight rope walking / what can break and KISS

Like it or not, we humans make mistakes.  If we embrace the fact that we are going to make mistakes, we can direct our efforts to reduce mistakes in areas that are critical in exchange for potentially making more mistakes in areas that aren’t.  Gasp!  We need to get over the silly notion that our work can ever be 100% perfect and try to maximize in the areas that matter.

Does it really matter?

These are the things I’ve found that typically don’t matter as much in the grand scheme of development.  Start training yourself to identify areas that matter!

  • Infrequently used features
    • Especially if there’s an easy workaround during a failure
  • Administrative CRUD pages
    • Especially in smaller apps, the developer is usually the admin and can just hack at the DB during a failure
  • MVC Controllers
  • Logging
    • This just means debugging will be a bit harder and I’m sure we’ll fix it quick enough.
  • User management / authentication
    • Development typically involves logging in daily, so it’s likely we’ll catch the mistakes.
    • Please just use OpenId already or another common alternative to rolling your own.
    • If no one can login, deploying a fix won’t interrupt anyone!
  • Easily fixed/deployed features
    • Any feature that isn’t critical, that can easily be fixed and deployed.
  • CSS and images
    • How many of the things we do with CSS and images are just for aesthetic purposes and really don’t matter if the application still functions?
    • Do I really care if my bank sends me a statement and their logo shows up as a red X?
  • Reports versus entry
    • If we allow an invalid change (like a balance transfer) to occur, it’s probably a bigger problem than if we accidentally show an invalid balance total on a statement report.  This is highly subjective, but I’m more worried about data going into my systems than data coming out, except where data flowing out flows into other systems.
  • Features that are no longer used / should be removed
  • Areas where testing seems to slow down development
    • IMHO, testing typically slows down development in areas that don’t matter (stubbing controllers/services, duplicated/overlapping test cases, KISS code).  In areas that are important, we typically find complexity, and testing often helps avoid bugs in complexity faster than F5 debugging.


In the areas that don’t matter, we should strive for simplicity.  Readable and simplified code is less likely to contain mistakes.  Controllers can be thin instead of fat.  Reports can isolate complexity in a tested model or functional components of reuse (calculations). 

What does this mean?

So we know what doesn’t matter as much, what does that mean?  For me it means less testing:

  • Not writing automated tests (or very few) … integration or unit
  • Not double/triple checking my work
  • Sometimes compiling is good enough
  • Sometimes a quick run of a common scenario on a related page, but not all scenarios
  • Rarely, the occasional bat shit crazy refactoring with little regard to these areas.



Some of this may sound radical; if so, don’t adopt it.  I can refactor a lot faster if I know where I can run versus where I should crawl.  Always crawling (extra effort expended upfront) is as reckless as always running (extra effort expended after the fact); an optimum is usually found in balancing what can and what shouldn’t break.

Sadly, I’ve seen a lot of hugely beneficial refactoring passed up simply because it would be invasive to areas that ironically aren’t that important in the grand scheme of things.

Happy Coding!


Things I've noticed with DVCS


Things I encourage:

Frequent local commits

This way you don't have to be bothered by changes others are making to the central repository while working on a handful of related tasks.  It's a good idea to work on one task at a time and commit all changes at partitioned stopping points.  A local commit doesn't have to build, just FYI, so a stopping point is neither a build point nor a point that you can push centrally.  There should be several of these in any given day.  Going 2 hours without one is a good indicator that you might not be leveraging the power of frequent local commits.  Once you have verified that a set of changes works, save them away; otherwise you run the risk of introducing bugs into them when working on the next task.

The notion of a task

By task I mean a related set of changes that can be completed in a few hours or less.  By the same token, don’t make your tasks so small that critically related changes aren’t grouped together.  Use your intuition and the rest of these principles, and I think you will find what is comfortable for you.

Partial commits

Sometimes one task explodes or unknowingly encompasses other tasks.  At this point, try to get to a stopping point on part of the work you are doing and commit it, so you can get it out of the way and focus on the remainder.  This will often entail committing part of the work and continuing with the rest.
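With git, for example, committing the finished part and carrying on with the rest might look like this (the scratch repo and file names are illustrative):

```shell
# Scratch repo purely for illustration
cd "$(mktemp -d)" && git init -q && git config user.email dev@example.com && git config user.name dev

echo "rounding fix" > rounding.txt
echo "wip pagination" > pagination.txt

git add rounding.txt                 # stage only the finished task
git commit -q -m "Fix invoice rounding"

git status --short                   # pagination.txt remains as outstanding work
```

git add -p goes a step further, letting you stage individual hunks within a single file.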

Outstanding changes as a guide

If you don't commit often it might mean you are not leveraging your version control history to help guide your work.  It's a great way to see what has changed and might be causing problems.  The longer you wait, the more that has changed and the harder it is to test/debug what your changes are doing! This is a reason why I am so picky about my VCS tools on the client side and why I talk a lot about the quality of a diff tool and the ability to integrate that with a simple view of everything that has changed.  This is why I love using TortoiseHg and SmartGit: they show changed files, a diff (or two way diff with SmartGit) of the current selected file and a commit message all in one window that I keep maximized on one monitor at all times.

Throw away / stash commits

There is extreme value in being able to throw away a commit (or stash it) that is getting out of hand.  If you do not commit often you will have to isolate the work you want to commit from the work you want to throw away, which is wasted productivity and highly prone to errors.  I find myself doing this about once a week, especially when doing exploratory re-factoring.  It's much easier if I can just revert all outstanding changes.
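A git sketch of shelving (or throwing away) outstanding changes; the scratch repo and file names are illustrative:

```shell
# Scratch repo purely for illustration
cd "$(mktemp -d)" && git init -q && git config user.email dev@example.com && git config user.name dev
echo "stable" > model.txt && git add model.txt && git commit -q -m "Stable state"

# Exploratory refactoring got out of hand...
echo "experiment gone wrong" > model.txt

# ...shelve it; the working tree is instantly back to the last good commit
git stash push -q -m "exploratory refactoring, revisit later"
git status --short   # clean tree; the experiment is still recoverable from the stash
```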

Sync with the central repository daily

The rest of us depend on your changes.  Don't let them sit on your computer longer than they have to.  Waiting increases the chances of merge conflict which just decreases productivity.  It also prohibits us from doing deploys when people say they are done but have not merged centrally.  This should be done daily!  Find a way to partition the work you are doing so that you can sync at least once daily.

Things I discourage:

Lots of partial commits right at the end of a series of changes

If you notice lots of partial commits at the end of a set of changes, it's likely because you weren't frequently committing, nor were you watching for the size of the task expanding beyond a single commit.  Chances are this cost you productivity if you use your outstanding changes as a guide, since you would have an ever growing list of changes.

Committing single files

Committing single files means you waited too long and no longer understand all the changes involved.  It may mean there were overlapping changes in single files that cannot be isolated.  In either case, go back to the suggestions above to avoid this. 

Committing frequently does not mean committing frequently right at the end of a day's work.

It should be spaced out over the course of several tasks, not all at the end in a 5 minute window.

Refactoring Part 1 : Intuitive Investments

Fear is what turns maintaining applications into a nightmare.  Technology moves on, teams move on, someone is left to operate the application, and what was green is now perceived brown.  Eventually the business will evolve and changes will need to be made.  The approach to those changes often dictates the long-term viability of the application.  Fear of change, lack of passion and a lack of interest in understanding the domain often lead to a paranoia to do anything that doesn’t involve duct tape and baling twine.  Don’t get me wrong, those have a place in the short-term viability of a project, but they don’t have a place in the long term.  Add to that an “us versus them” attitude between the original team and those that maintain the application, internal politics and other factors, and you have a recipe for disaster.  This results in code that quickly becomes unmanageable.  Even the most clever of designs will eventually become suboptimal, and debt will amass that makes changes exponentially difficult.

This is where refactoring comes in, and it’s something I’m very passionate about.  Refactoring is about improving the process whereby we make change, it’s an exponential investment in the process of change. Without it we will incur exponential complexity that halts productivity. Investments, especially in the long term, require intuition and reflection. 

How can we tackle new development effectively via evolving the original design and paying off debt that has been incurred?

The longer we wait to ask and answer this question, the more it will cost us.  Small requests don’t warrant big changes, but realizing when changes now will pay off in the long term, and especially in the short term, is valuable.

I have done my fair share of maintaining applications and continuously refactoring as needed, but recently I’ve begun work on a project that hasn’t had much debt, if any, paid down in years.  This is the first in a series of blog posts to try to capture the process which is largely driven by intuition of smaller refactorings from other projects.

Signs that refactoring could help:

Testability


  • How can decreasing test time not pay dividends?
  • One of the first things I found was that a very important piece often takes 30+ minutes to test.  I can only imagine how much time this has cost historically, but more importantly the time it might cost in the coming weeks: I estimate at least 10-20 hours per person!  This is simply unacceptable for almost any situation.  As it turns out, after about 6 hours of working with this part of the application, I was able to cut the time down to under 30 seconds!  In less than the lost time of one week, I was able to fix the problem for all future weeks!
  • If we can’t test fast then we can’t change fast, nor with confidence.
  • Code is used by end users and it’s also used by developers, consider your own needs in terms of the code base.  Adding logic to enable/disable features during testing can help decouple parts of an application and lead to massive improvements.  What exactly is so wrong about test code in real code?  Often, these become features for operators and sometimes end users. 
  • If you cannot run an integration test within a test runner in your IDE, it’s time to refactor.

Readability


  • Are variables named meaningfully via a ubiquitous language?
  • Is the code segmented functionally or behaviorally so as to minimize the complexity of any one area?
  • Are aspects properly segmented to avoid confusion (security, logging, transactions, translations, dependency management, etc.)?
  • Is the code declarative (what) or imperative (how)?  What matters, not how.  LINQ is a great abstraction of the what, not how, of collection manipulation.  The Reactive framework is a great example of the what, not how, of managing streams of data.
  • Are constants abstracted and named, or are they just inline?
  • Do people constantly bitch about the code/design?
  • If the code is hard to understand, it will be hard to change with confidence.  It’s a large undertaking if the original designers didn’t pay much attention to readability, and as such it will never be done to “completion.”  Make sure not to go overboard; instead, use this as you change an application, not in lieu of changes (as with testability).

Simplicity


  • Simplicity will never be achieved; it’s highly subjective.  That said, a lot of code can be significantly simplified, so tidy it up as you go.
  • Refactoring will often converge upon a simplification step after enough time, keep an eye out for this.

Understanding the code


  • In the process of changing code, one often gains a better understanding of it.  Refactoring code is a good way to learn how it works.  However, it’s usually best in combination with other reasons, in effect killing two birds with one stone.  Often this is done when readability is poor, in which case understandability is usually poor as well.  In the large undertaking we are making with this legacy application, we will be replacing it.  Therefore, understanding all of its features is important and this refactoring technique will come in very handy.

Unused code

  • How can deleting things not help?
  • This is a freebie in refactoring; it’s very easy to detect with modern tools, especially in statically typed languages.  We have VCS for a reason: if in doubt, delete it out (ok, that was cheesy)!
  • If you don’t know where to start when refactoring, this is an excellent starting point!

Duplication


  • Do not pray and sacrifice to the anti-duplication gods; there are excellent examples where consolidated code is a horrible idea, usually with divergent domains.  That said, mediocre developers live by copy/paste.  Other times features converge and aren’t combined.  Tools for finding similar code are great for copy/paste problems.  Knowledge of the domain helps identify convergent concepts that often lead to convergent solutions, and will give intuition for where to look for conceptual repetition.

80/20 and the Boy Scouts

  • It’s often said that 80% of the time is spent in 20% of the application.  These tend to be the parts that are changed.  There are also parts of the code where 80% of the time is spent changing 20% (probably because of all the refactoring smells above).  I focus on these areas any time I make a change, and follow the philosophy of the Boy Scout in cleaning up more than I messed up.  If I spend 2 hours changing an application, in the 20%, I’ll always spend at least 15 minutes cleaning it or nearby areas.
  • This gives a huge productivity edge on developers that don’t.
  • Ironically after a short period of time the 20% shrinks enough that we don’t have to spend 80% of our time there and can move on to other areas.


Refactoring is highly subjective, never attempt to refactor to completion!  Learn to be comfortable with leaving one part of the application in a better state than others.  It’s an evolution, not a revolution.  These are some simple areas to look into when making changes and can help get one started in the process.  I’ve often found that refactoring is a convergent process towards simplicity that sometimes spans a few hours but often can lead to massive simplifications over the timespan of weeks and months of regular development.