Fervent Coder

Coding Towards Utopia...by Rob Reynolds

Saturday, January 16, 2016

Chocolatey Community Feed Update!

Average approval time for moderated packages is currently under 10 hours!
In my last post, I talked about things we were implementing or getting ready to implement to really help out with the process of moderation.  Those things are:
  • The validator - checks the quality of the package
  • The verifier - tests the package install/uninstall and provides logs
  • The cleaner - provides reminders and closes packages under review when they have gone stale.

The Cleanup Service

We've created a cleanup service, known as the cleaner, which went into production recently.
  • It looks for packages under review that have gone stale - defined as 20 or more days since the last review with no progress.
  • It sends a notice/reminder that the package is waiting on the maintainer to fix something, and that if another 15 days go by with no progress, the package will automatically be rejected.
  • If no progress is made in those 15 days, it automatically rejects the package with a nice message about how to pick things back up later when the maintainer is ready.
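Conceptually, the cleaner's rules fit in a few lines. This is only an illustrative outline based on the description above - the field names and the `process_package` helper are invented, not the actual service code:

```python
from datetime import timedelta

STALE_AFTER = timedelta(days=20)   # no review progress for 20+ days
REJECT_AFTER = timedelta(days=15)  # grace period after the reminder

def process_package(pkg, now):
    """Illustrative outline of the cleaner's rules described above.
    `pkg` is assumed to carry last_progress, reminder_sent_at and status."""
    if pkg.reminder_sent_at is None:
        if now - pkg.last_progress >= STALE_AFTER:
            pkg.reminder_sent_at = now
            return "reminder"    # notify: 15 more days, then auto-reject
    elif now - pkg.reminder_sent_at >= REJECT_AFTER:
        pkg.status = "rejected"  # reversible - a moderator can put it back
        return "rejected"
    return "waiting"
```

Note the rejection is not a dead end: as described below, a moderator can move a rejected package back to submitted at any time.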

Current Backlog

We've found that with all of this automation in place, the moderation backlog was quickly reduced and will continue to be manageable. A visual comparison:

December 18, 2015 – 1630 packages ready for a moderator

January 16, 2016 – 7 packages ready for a moderator

Note the improvements all around! The most important numbers to key in on are the first three; they represent a "waiting for reviewer" status. With the validator and verifier in place, moderation is much faster and more accurate, and the validator has increased package quality all around with its review!

The "waiting for maintainer" count (927 in the picture above) represents the bulk of the total number of packages currently under moderation. These are packages that require an action on the part of the maintainer to actively move the package to approved. This is also where the cleanup service comes in. The cleaner sent 800+ reminders two days ago. If there is no response by early February on those packages, the waiting-for-maintainer count will drop significantly as those packages are automatically rejected. Some of those packages have been waiting for maintainer action for over a year and are likely abandoned.

If you are a maintainer and you have not been getting emails from the site, you should log in now and make sure your email address is receiving emails and that the messages are not going to your spam folder. A rejected package version is reversible: the moderators can put it back to submitted at any time when a maintainer is ready to work on moving the package toward approval again.


This is where it really starts to get exciting. Some statistics:
  • Around 30 minutes after a package is submitted the validator runs.
  • Within 1-2 hours the verifier has finished testing the package and posts results.
  • Typical human review wait time after a package is deemed good is less than a day now.
We're starting to build statistics on average time to approval for packages that go through moderation, and they will be visible on the site. Running some statistics by hand: we've approved 236 packages created since January 1st, and the average time from final good package (meaning the last time someone submitted fixes to the package) to approval has been 15 hours. Some packages drove that up because we fixed some things in our verifier and reran the tests. If I look only at packages submitted since those fixes went in on the 10th, that is 104 packages with an average approval within 7 hours!
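For the curious, the hand-run statistic is just an average of final-submission-to-approval durations. A minimal sketch with made-up timestamps (the real data comes from the site's moderation logs):

```python
from datetime import datetime

# Hypothetical (submitted, approved) timestamp pairs; the real data uses
# the *last* fix submission as the starting point, as described above.
events = [
    (datetime(2016, 1, 11, 8, 0), datetime(2016, 1, 11, 14, 0)),  # 6 hours
    (datetime(2016, 1, 12, 9, 0), datetime(2016, 1, 12, 17, 0)),  # 8 hours
]

hours = [(approved - submitted).total_seconds() / 3600
         for submitted, approved in events]
average_hours = sum(hours) / len(hours)
print(f"average approval time: {average_hours:.1f} hours")  # 7.0 hours here
```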

Posted On Saturday, January 16, 2016 8:34 AM | Comments (0) | Filed Under [ Code chocolatey ]

Friday, December 18, 2015

Chocolatey Community Feed State of the Union

tl;dr: Everything on https://chocolatey.org/notice is coming to fruition! We've automatically tested over 6,500 packages, a validator service is coming up now to check quality and the unreviewed backlog has been reduced by 1,000 packages! We sincerely hope that the current maintainers who have been waiting weeks and months to get something reviewed can be understanding that we’ve dug ourselves into a moderation mess and are currently finding our way out of this situation.

Notice on Chocolatey.org

We’ve added a few things to Chocolatey.org (the community feed) to help speed up review times for package maintainers. A little over a year ago we introduced moderation for all new package versions (besides trusted packages), and from the user perspective it has been a fantastic addition. Usage has gone up by over 20 million packages installed in one year, versus just 5 million in the 3 years before it! It’s been an overwhelming response from the user community. Let me say that again for effect: Chocolatey’s usage of community packages has increased 400% in one year over the prior three years combined!

But let’s be honest, we’ve nearly failed in another area: keeping the moderation backlog low. We introduced moderation as a security measure for Chocolatey’s community feed because it was necessary, but we introduced it too early. We didn’t have the infrastructure automation in place to handle the sheer load of packages that was suddenly thrown at us. And once we put moderation in place, more folks wanted to use Chocolatey, so it suddenly became much more popular. And because we have automation surrounding updating and pushing packages (namely automatic packages), we had some folks who would submit 50+ packages at a time. With one particular maintainer submitting 200 packages automatically, and a review of each of them taking somewhere between 2-10 minutes, you don’t have to be a detective to understand how this was going to become a point of consternation. And from the backlog you can see it really hasn’t worked out well.

1597 submitted

The most important number to understand here is the number submitted (underlined). This is the number of packages that a moderator has not yet looked at. A goal is to keep this well under 100. We want the time from a high quality package being submitted to it being approved to be within 1-2 days.

Moderation has, up until recently, been a very manual process. Sometimes, which moderator looked at your package determined whether it would be held in review for various reasons. We’ve added moderators and we’ve added more guidance around moderation to help bring a more structured review process. But it’s not enough.

Some of you may not know this, but our moderators are volunteers and we currently lack full-time employees to help fix many of the underlying issues. Even considering that we’ve also needed to work towards Kickstarter delivery and the Chocolatey rewrite (making choco better for the long term), it’s still not the greatest news to know that it has taken a long time to fix moderation, but hopefully it brings some understanding. Our goal is to eventually bring on full-time employees but we are not there yet. The Kickstarter was a start, but it was just that. A kick start. A few members of the core team who are also moderators have focused on ensuring the Kickstarter turns into a model that can ensure the longevity of Chocolatey. It may have felt that we have been ignoring the needs of the community, but that has not been our intention at all. It’s just been really busy and we needed to address multiple areas surrounding Chocolatey with a small number of volunteers.


So What Have We Fixed?

All moderation review communication is done on the package page. Now all review is done on the website, which means there is no more email back and forth (the older process) that left what looked like one-sided communication on the site. This is a significant improvement.

Package review logging. Now you can see right from the discussion when and who submits a package, when statuses change, and where the conversation is.

package review logging

More moderators. A question that comes up quite a bit surrounds the number of moderators we have and adding more. We have added more moderators; we are up to 12 moderators for the site. Moderators are chosen based on trust, usually built by being extremely familiar with Chocolatey packaging and what is expected of approved packages. Learning what is expected usually comes through having a few of your own packages go through the approval process. We’ve written most of this up at https://github.com/chocolatey/choco/wiki/Moderation.

Maintainers can self-reject packages that no longer apply. Say your package downloads the software from a URL that is always the same. Older package versions that are no longer applicable can now be purged out of the queue.

The package validation service (the validator). The validator checks the quality of a package based on requirements, guidelines and suggestions for creating packages for Chocolatey’s community feed. Many of the validation items will automatically roll back into choco and will be displayed when packaging a package. We like to think of the validator as unit testing. It is validating that everything is as it should be and meets the minimum requirements for a package on the community feed.

validation results
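To make the unit-testing analogy concrete, here is a hypothetical sketch of validator-style rules over a package's metadata. The rule texts, severity levels, and dict fields are invented for illustration and are not the validator's actual code:

```python
# Hypothetical sketch of validator-style rules over a package's nuspec
# metadata, grouped by the requirement/guideline/suggestion severities
# mentioned above. The specific rules here are invented for illustration.
def validate(package):
    findings = []  # list of (severity, message) pairs
    description = package.get("description", "")
    if not description:
        findings.append(("requirement", "Description is required"))
    elif len(description) < 30:
        findings.append(("guideline", "Description should be more descriptive"))
    if "," in package.get("tags", ""):
        findings.append(("requirement", "Tags are space separated, not comma separated"))
    if not package.get("projectUrl"):
        findings.append(("suggestion", "Consider adding a projectUrl"))
    return findings
```

A package with no findings passes validation; requirements must be fixed before approval, while guidelines and suggestions are advisory.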

The package verifier service (the verifier). The verifier checks the correctness (that the package actually works), that it installs and uninstalls correctly, has the right dependencies to ensure it is installed properly and can be installed silently. The verifier runs against both submitted packages and existing packages (checking every two weeks that a package can still install and sending notice when it fails). We like to think of the verifier as integration testing. It’s testing all the parts and ensuring everything is good. On the site, you can see the current status of a package based on a little colored ball next to the title. If the ball is green or red, the ball is a link to the results (only on the package page, not in the list screen).

passed verification - green colored ball with link

  • Green means good. The ball is a link to the results.
  • Orange means still pending verification (it has not yet run).
  • Red means it failed verification for some reason. The ball is a link to the results.
  • Grey means unknown or excluded from verification (if excluded, a reason will be listed on the package page).
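The ball is effectively a small status-to-presentation mapping, something like this sketch (the status names are illustrative, not the site's actual values):

```python
# Hypothetical mapping from verification status to the colored ball;
# second element of each pair: does the ball link to the results?
BALL = {
    "passed":  ("green",  True),
    "pending": ("orange", False),
    "failed":  ("red",    True),
    "unknown": ("grey",   False),  # excluded packages land here too
}

def ball_for(status):
    return BALL.get(status, BALL["unknown"])
```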

Coming Soon - Moderators will automatically be assigned to backlog items. Once a package passes both validation and verification, a moderator is automatically assigned to review the package. Once the backlog is in a manageable state, this will be added.


What About Maintainer Drift?

Many maintainers come in to help out at different times in their lives, and they do it nearly always as volunteers. Sometimes it is about the tools they are using at the time, and sometimes it has to do with where they work. Over time, folks’ preferences and workplaces change, and so maintainers drift away from keeping packages up to date because they have no internal incentive to continue maintaining those packages. It’s a natural human response. I've been thinking about ways to reduce maintainer drift for the last three years, and I keep coming back to the idea that consumers of those packages could come along and provide a one-time or weekly tip to the maintainer(s) as a thank you for keeping the package(s) updated. We are talking to Gratipay now: https://github.com/gratipay/inside.gratipay.com/issues/441. This, in addition to a reputation system, will I feel go a long way toward reducing maintainer drift.


Final Thoughts

Package moderation review time is down to mere seconds as opposed to minutes like before. This will allow a moderator to review and approve package versions much more quickly and will reduce our backlog and keep it lower.

It’s already working! The number in the unreviewed backlog is down by 1,000 from the month prior. This is because a moderator no longer has to wait for a proper time when they can have a machine up, ready for testing and in the right state. Now packages can be reviewed faster. This is only with the verifier in place, purely testing package installs; the validator should cut review time down to near seconds. The total number of packages in the moderation backlog has also been reduced, but honestly I usually only pay attention to the unreviewed backlog number, as it is the most important metric for me.

The verifier has rolled through over 6,500 verifications to date! https://gist.github.com/choco-bot/

When chocobot hit 6500 packages verified

We sincerely hope that the current maintainers who have been waiting weeks and months to get something reviewed can be understanding that we’ve dug ourselves into a moderation mess and are currently finding our way out of it. We may have some required findings and will ask for those things to be fixed, but packages without required findings will be approved as we get to them.

Posted On Friday, December 18, 2015 9:32 AM | Comments (0) | Filed Under [ chocolatey ]

Friday, November 14, 2014

How Passion Saved Windows

“Don’t worry about people stealing your ideas. If your ideas are any good, you’ll have to ram them down people’s throats.” – Howard H. Aiken

Look around today. There is so much you can do on Windows with respect to automation that just wasn’t possible a few short years ago. It’s hard to see what has changed because our memories are sometimes so short about how it used to be, so let’s go back about four years, to 2010. PowerShell was still young, there was no Chocolatey, and things like Puppet and Chef didn’t work on Windows yet.

Folks were leaving Windows left and right once they got a taste of how easy automation was on other OS platforms. Well versed folks. Loud folks. Folks at the top of their game heading out of Windows. Many others considered it. You’ve all heard the “leaving .NET” stories from some of the best developers on the .NET platform. But what you may not have realized is that these folks were not just leaving .NET, they were leaving Windows entirely. Some of this was due in part to limitations they were finding that just were not there in other OSes. What you are also missing are all the folks who were silently leaving. For every one person speaking out about it, there were many more silent losses. The system admins, fed up with GUIs and a lack of automation, leaving for greener pastures. The developers who didn’t blog, leaving the platform.

But a change has occurred more recently that has slowed that process. I believe it is better tools and automation of the Windows platform. Some people have shown such a passion that they’ve saved Windows as a platform for future generations.

So What Saved Windows?

PowerShell – Arguably this could be seen as the catalyst that started it all. It came out in 2006, and while v1 was somewhat limited, v2 (Oct 2009) added huge improvements, including performance. PowerShell is prevalent now, but it had humble beginnings. When Jeffrey Snover saw a need for better automation in Windows, no one understood what he was trying to do. Most folks at Microsoft kept asking, why do you need that? But Jeffrey had such a passion for what was needed that he took a demotion to make it happen. And we are thankful for that, because it shaped the face of Windows automation for all. Jeffrey’s passion brought us PowerShell, and it is continuing to bring us more things that have come out of his original Monad Manifesto from 2002.

Chocolatey – In 2011 Chocolatey furthered the automation story of Windows with package management, something that other platforms had enjoyed for years. Rob Reynolds’ goals for Chocolatey in the beginning were simply to solve a need, but it has since grown into so much more and is now making improvements to become a true package manager. It wasn’t the first approach to package management on Windows and it is certainly not the last. But it did many things right: it didn’t try to achieve lofty goals. It started working at the point of the native installers and official distribution points, with a simple approach to packaging and very good helpers to achieve many abilities. When Rob first started working on it, most of his longtime technical friends questioned the relevance of it. Rob did not stop, because he had a vision, a passion for making things happen. As his vision has been realized by many, he is about to change the face of package management on Windows forever.

Puppet (and other CM tools) – In 2011 Puppet started working on Windows thanks to Josh Cooper. He single-handedly brought Puppet’s excellent desired state configuration management to Windows (Chef also brought Windows support in 2011). Josh saw a need and convinced folks to try an experiment. That experiment has grown and has brought the last bits of what was needed to save Windows as a platform. His passion for bringing Puppet to Windows has grown into so much more than what it originally started out to do. And now it is arguably the best CM tool for any platform; as the Puppet Labs CEO stated at PuppetConf 2014, Puppet is becoming the lingua franca of infrastructure configuration.

The Effects of Passion

All of this passion for automation has really changed Microsoft. They have adopted automation as a strategy. They are moving to a model of openness, recently announcing that the entire .NET platform is going to be open source. They are getting behind Chocolatey with OneGet and getting it built into Windows. They announced PowerShell DSC last year and have made huge improvements to it since then. From where we are sitting, it appears Microsoft now gets it. The effects of passion have really turned the company around and have saved Windows. Windows is becoming the platform we all hoped it would be; many folks now see it as a true platform for automation, and that makes Windows a formidable platform for the foreseeable future.

Posted On Friday, November 14, 2014 8:42 AM | Comments (2) |

Saturday, November 8, 2014

Herding Code On Chocolatey

Recently I talked to Herding Code about the kickstarter, package moderation, OneGet, and where we are going with Chocolatey.

Listen now - http://herdingcode.com/herding-code-199-rob-reynolds-on-the-chocolatey-kickstarter-chocolatey-growth-and-oneget/

Posted On Saturday, November 8, 2014 12:15 PM | Comments (0) | Filed Under [ chocolatey ]

Monday, October 27, 2014

Chocolatey Now has Package Moderation

Well, after just over three years of having https://chocolatey.org, we’ve finally implemented package moderation. It’s actually quite a huge step forward. This means that when packages are submitted, they will be reviewed and signed off by a moderator before they are allowed to show up and be used by the general public.

What This Means for You Package Consumers

  • Higher quality packages - we are working to ensure by the time a package is live, moderators have given feedback to maintainers and fixes have been added.
  • More appropriate packages - packages that are not really relevant to Chocolatey's community feed will not be approved.
  • More trust - packages are now reviewed for safety and completeness by a small set of trusted moderators before they are live.
  • Reviewing existing packages - All pre-existing packages will be reviewed and duplicates will be phased out.
  • Not Reviewed Warning - Pre-existing packages that have not been reviewed will have a warning on chocolatey.org. Since this is considered temporary while we are working through moderation of older packages, we didn't see a need to add a switch to existing choco.

Existing packages that have not been moderated yet will have a warning posted on the package page that looks like

This package was submitted prior to moderation and has not been approved. While it is likely safe for you, there is more risk involved.

Packages that have been moderated will have a nice message on the package page that looks like

This package was approved by moderator mwrock on 10/26/2014.

If the package is rejected, the maintainer will see a message, but no one else will see or be able to install the package.

You should also keep the following in mind:

  • We are not going to moderate prerelease versions of a package as they are not on the stable feed.
  • We are likely only moderating the current version of a package. If you feel older versions should be reviewed, please let us know through contact site admins on the package page.
  • Chocolatey itself is not going to give you any indication of whether a package is approved. We expect this to be temporary while we review all existing packages, so we didn’t see much benefit for the amount of work involved to bring it to the choco client in its current implementation.

What This Means for Package Maintainers

  • Guidelines - Please make sure you are following the package guidelines outlined at https://github.com/chocolatey/chocolatey/wiki/createpackages - this is how moderators will evaluate packages
  • Re-push same version - While a package is under review you can continually push up that same version with fixes
  • Email - Expect email communication for moderation. If your email is out of date or you never receive email from Chocolatey, ensure it is not going to the spam folder. We will give up to two weeks before we reject a package for non-responsive maintainers. It's likely we will then review every version of that package as well.
  • Learning about new features - during moderation you may learn about new things you haven't known before.
  • Pre-existing - We are going to be very generous with pre-existing packages. We will start by communicating things that need to be corrected the first time we accept a package; the second update will need to have those items corrected.
  • Push gives no indication of moderation - Choco vCurrent gives no indication that a package went under review. We are going to put out a point release with that message and a couple of small fixes.

Moderation Means a Long Term Future

We are making investments into the long term viability of Chocolatey. These improvements we are making are showing you that your support of the Chocolatey Kickstarter and the future of Chocolatey is a real thing. If you haven’t heard about the kickstarter yet, take a look at https://www.kickstarter.com/projects/ferventcoder/chocolatey-the-alternative-windows-store-like-yum.

Posted On Monday, October 27, 2014 12:30 AM | Comments (0) | Filed Under [ Personal chocolatey ]

Friday, October 17, 2014

Chocolatey Kickstarter–Help Me Take Chocolatey to the Next Level

I’m really excited to tell you about The Chocolatey Experience! We are taking Chocolatey to the next level and ensuring the longevity of the platform. But we can’t get there without your help! Please help me support Chocolatey and all of the improvements we need to make!



Posted On Friday, October 17, 2014 8:53 AM | Comments (0) | Filed Under [ chocolatey ]

Saturday, September 27, 2014

Chocolatey Newsletter

Chocolatey has some big changes coming in the next few months, so we’ve started a newsletter to keep everyone informed of what’s coming. The folks who are signed up for the newsletter will hear about the latest and greatest changes coming for Chocolatey first, plus they will know when the Kickstarter (Yes! Big changes are coming!) kicks off before anyone else. Sign up for the newsletter now to learn about all the exciting things coming down the pipe for Chocolatey!

Posted On Saturday, September 27, 2014 11:28 AM | Comments (0) | Filed Under [ chocolatey ]

Thursday, August 7, 2014

Puppet: Getting Started On Windows

Now that we’ve talked a little about Puppet, let’s see how easy it is to get started.

Install Puppet

Let’s get Puppet installed. There are two ways to do that:

  1. With Chocolatey: Open an administrative/elevated command shell and type:
    choco install puppet
  2. Download and install Puppet manually - http://puppetlabs.com/misc/download-options

Run Puppet

  • Let’s make pasting into a console window work with Control + V (like it should):
    choco install wincommandpaste
  • If you have a cmd.exe command shell open (and Chocolatey installed), type:
    refreshenv
  • The previous command will refresh your environment variables, ala Chocolatey v0.9.8.24+. If you were running PowerShell, there isn’t yet a refreshenv for you (one is coming though!).
  • If you had to restart your CLI (command line interface) session or you installed Puppet manually, open an administrative/elevated command shell and type:
    puppet resource user
  • Output should look similar to a few of these:
    user { 'Administrator':
      ensure  => 'present',
      comment => 'Built-in account for administering the computer/domain',
      groups  => ['Administrators'],
      uid     => 'S-1-5-21-some-numbers-yo-500',
    }
  • Let's create a user:
    puppet apply -e "user {'bobbytables_123': ensure => present, groups => ['Users'], }"
  • Relevant output should look like:
    Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: created
  • Run the 'puppet resource user' command again. Note the user we created is there!
  • Let’s clean up after ourselves and remove that user we just created:
    puppet apply -e "user {'bobbytables_123': ensure => absent, }"
  • Relevant output should look like:
    Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: removed
  • Run the 'puppet resource user' command one last time. Note we just removed a user!


You just did some configuration management/system administration. Welcome to the new world of awesome! Puppet is super easy to get started with. This is a taste so you can start seeing the power of automation and where you can go with it. We haven’t talked about resources, manifests (scripts), best practices and all of that yet.

Next we are going to start to get into more extensive things with Puppet. Next time we’ll walk through getting a Vagrant environment up and running. That way we can do some crazier stuff and when we are done, we can just clean it up quickly.

Posted On Thursday, August 7, 2014 10:39 AM | Comments (0) | Filed Under [ chocolatey Puppet ]

Puppet: Making Windows Awesome Since 2011

Puppet was one of the first configuration management (CM) tools to support Windows, way back in 2011. It has the heaviest investment in Windows infrastructure, with 1/3 of the platform client development staff being Windows folks. It appears that Microsoft believed an end state configuration tool like Puppet was the way forward, so much so that they cloned Puppet’s DSL (domain-specific language) in many ways and are calling it PowerShell DSC.

Puppet Labs is pushing the envelope on Windows. Here are several things to note:

It can be overwhelming learning a new tool like Puppet at first, but Puppet Labs has some resources to help you on that path. Take a look at the Learning VM, which has a quest-based learning tool. For real-time questions, feel free to drop onto #puppet on freenode.net (yes, some folks still use IRC) with questions, and #puppet-dev with thoughts/feedback on the language itself. You can subscribe to puppet-users / puppet-dev mailing lists. There is also ask.puppetlabs.com for questions and Server Fault if you want to go to a Stack Exchange site. There are books written on learning Puppet. There are even Puppet User Groups (PUGs) and other community resources!

Puppet does take some time to learn, but as with anything you need to learn, you have to weigh the benefits against the ramp-up time. I learned NHibernate once; it had a very high ramp-up time back then, but it was the only game in town. Puppet’s ramp-up time is considerably less than that. The advantage is that you are learning a DSL, and it can apply to multiple platforms (Linux, Windows, OS X, etc.) with the same Puppet resource constructs.

As you learn Puppet you may wonder why it has a DSL instead of just leveraging the language of Ruby (or maybe this is one of those things that keeps you up wondering at night). I like the DSL over a small layer on top of Ruby. It allows the Puppet language to be portable and go more places. It makes you think about the end state of what you want to achieve in a declarative sense instead of in an imperative sense.

You may also find that, right now, Puppet doesn’t run manifests (scripts) in the order the resources are specified. This is the number one learning point for most folks, and it has long been a point of consternation: manifest ordering simply was not possible in the past. In fact, it might be why some other CMs exist! As of 3.3.0, Puppet can do manifest ordering, and it will be the default in Puppet 4. http://puppetlabs.com/blog/introducing-manifest-ordered-resources

You may have caught earlier that I mentioned PowerShell DSC. But what about DSC? Shouldn’t that be what Windows users want to choose? Other CMs are integrating with DSC; will Puppet follow suit? The biggest concern I have with DSC is its lack of visibility into fine-grained reporting of changes (which Puppet has). The other is that it is a very young Microsoft product (pre version 3, you know what they say :) ). I tried getting it working in December and ran into some issues. I’m hoping newer releases actually work; it does have some promising capabilities, it just doesn’t quite come up to the standard of something that should be used in production. In contrast, Puppet is almost a ten-year-old language with an active community! It’s very stable, and when trusting your business to configuration management, you want something that has been around awhile and has been proven. Give DSC another couple of releases and you might see more folks integrating with it. That said, there may be a future with DSC integration. Portability and fine-grained reporting of configuration changes are reasons to take a closer look at Puppet on Windows.

Yes, Puppet on Windows is here to stay, and it’s continually getting better, folks.

Posted On Thursday, August 7, 2014 9:21 AM | Comments (0) | Filed Under [ Puppet ]

Wednesday, July 9, 2014

Puppet ACLs–Mask Specific

Access control lists and permissions can get inherently complex. We didn’t want to prevent a sufficiently advanced administrator/developer/etc. from getting to advanced scenarios with ACLs in Puppet’s ACL module. With the ACL module (soon to be) out in the wild, it may be helpful to explain one of its significantly advanced features: mask specific rights. I am going to use the term “acl” interchangeably to mean the module during the rest of this post (and not the access control list or discretionary access control list).

Say you need very granular rights, not just RX (read, execute), but also to read and write attributes. You get read attributes (FILE_READ_ATTRIBUTES) with read (FILE_GENERIC_READ); see http://msdn.microsoft.com/en-us/library/windows/desktop/aa364399(v=vs.85).aspx. ACL provides you with the ability to specify 'full', 'modify', 'write', 'read', 'execute' or 'mask_specific'. Mask specific is for when you can’t get the specific rights you need for an identity (trustee, group, etc.) and need to get more specific.

Let’s take a look at what mask specific looks like:

acl { 'c:/tempperms':
  permissions => [
   { identity => 'Administrators', rights => ['full'] }, #full is same as 2032127 aka 0x1f01ff but you should use 'full'
   { identity => 'SYSTEM', rights => ['modify'] }, #modify is same as 1245631 aka 0x1301bf but you should use 'modify'
   { identity => 'Users', rights => ['mask_specific'], mask => '1180073' }, #RX,WA aka 0x1201a9
   { identity => 'Administrator', rights => ['mask_specific'], mask => '1180032' }, #RA,S,WA,Rc aka 0x120180
  ],
  inherit_parent_permissions => 'false',
}

Note specifically that “rights=>[‘mask_specific’]” also comes with a mask integer specified as a string e.g. “mask => ‘1180032’”. Now where did that number come from? In this specific case you see it is RA,S,WA,Rc (Read Attributes, Synchronize, Write Attributes, Read Control). Let’s take a look at http://msdn.microsoft.com/en-us/library/aa394063(v=vs.85).aspx to see the Access Mask values (integer and hex).

Synchronize (S) = 1048576 (0x100000)

If we look here, 1048576 is the one we want. Let’s whip out our calculators. You knew that math in high school and college was going to be put to good use, right? Okay, calculators out, let’s add those numbers up.

S  = 1048576
Rc =  131072
RA =     128
WA =     256
------------
     1180032

That’s the same as the number we have above, so we are good. You know how to make mask_specific happen with the acl module should you ever need to. 
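The same arithmetic is easy to sanity-check in a few lines of code. This sketch uses the flag values from the MSDN access-mask table above; since none of these flags overlap, OR-ing them is equivalent to the addition we just did.

```python
# Flag values from the MSDN access-mask table (hex alongside decimal).
READ_CONTROL          = 0x20000   # Rc =  131072
SYNCHRONIZE           = 0x100000  # S  = 1048576
FILE_READ_ATTRIBUTES  = 0x80      # RA =     128
FILE_WRITE_ATTRIBUTES = 0x100     # WA =     256

# The flags don't overlap, so OR-ing gives the same result as adding.
mask = READ_CONTROL | SYNCHRONIZE | FILE_READ_ATTRIBUTES | FILE_WRITE_ATTRIBUTES
assert mask == 1180032            # the value passed as mask => '1180032'
print(hex(mask))                  # 0x120180
```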

Understanding Advanced Permissions

Oh, wait. I should explain a little more advanced scenario: RX, WA – like we started to talk about above. How do you get to that number; where is FILE_GENERIC_READ? Back to http://msdn.microsoft.com/en-us/library/windows/desktop/aa364399(v=vs.85).aspx, we can see that it includes FILE_READ_ATTRIBUTES, FILE_READ_DATA, FILE_READ_EA, STANDARD_RIGHTS_READ, and SYNCHRONIZE. FILE_GENERIC_EXECUTE contains FILE_EXECUTE, FILE_READ_ATTRIBUTES, STANDARD_RIGHTS_EXECUTE, and SYNCHRONIZE. Notice the overlap there? Each one of those flags only gets added ONCE. This is important. If you are following along and looking, you will have noticed STANDARD_RIGHTS_READ and STANDARD_RIGHTS_EXECUTE are not listed on the page with the rights. Where did those two come from? Take a look at http://msdn.microsoft.com/en-us/library/windows/desktop/aa374892(v=vs.85).aspx down in the C++ section. See if you notice anything? Wait, what?

STANDARD_RIGHTS_READ, STANDARD_RIGHTS_EXECUTE, and STANDARD_RIGHTS_WRITE are all synonyms for READ_CONTROL. What? Why not just call it read control? I don’t know, I’m not the guy that wrote the Access Masks. Anyway, now we know what we have so let’s get our calculators ready again.

RA    =     128
RD    =       1
REa   =       8
StdRd =  131072
S     = 1048576
FE    =      32
REa   =       8
StdEx =  131072
S     = 1048576

Let’s remove the duplicates (and the tricky READ_CONTROL duplicate).

RA  =     128
RD  =       1
REa =       8
Rc  =  131072
S   = 1048576
FE  =      32

That doesn’t quite work out to what we were thinking of, ‘1180073’. Did we forget something? Yes, we got so wrapped up in getting RX sorted out that we forgot about WA, which adds another 256 to the number.

RA  =     128
RD  =       1
REA =       8
Rc  =  131072
S   = 1048576
FE  =      32
WA  =     256
------------
     1180073
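Rather than carefully removing duplicates by hand, bitwise OR handles the overlap automatically. A quick check in Python, with the individual flag values taken from the MSDN pages linked above:

```python
# Individual file access right flags (values from the MSDN pages above).
FILE_READ_DATA        = 0x1
FILE_READ_EA          = 0x8
FILE_EXECUTE          = 0x20
FILE_READ_ATTRIBUTES  = 0x80
FILE_WRITE_ATTRIBUTES = 0x100
READ_CONTROL          = 0x20000   # == STANDARD_RIGHTS_READ == STANDARD_RIGHTS_EXECUTE
SYNCHRONIZE           = 0x100000

FILE_GENERIC_READ = (READ_CONTROL | FILE_READ_DATA | FILE_READ_ATTRIBUTES
                     | FILE_READ_EA | SYNCHRONIZE)
FILE_GENERIC_EXECUTE = (READ_CONTROL | FILE_READ_ATTRIBUTES | FILE_EXECUTE
                        | SYNCHRONIZE)

# OR-ing counts each overlapping flag (RA, Rc, S) only once.
mask = FILE_GENERIC_READ | FILE_GENERIC_EXECUTE | FILE_WRITE_ATTRIBUTES
assert mask == 1180073            # RX,WA - the mask from the manifest above
print(hex(mask))                  # 0x1201a9
```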



Parting Thoughts

While the ACL module has a simple interface you can definitely see that it packs some power with it. Having this kind of power is really helpful when you need to get fine-grained with your permissions.

Posted On Wednesday, July 9, 2014 1:04 PM | Comments (0) |

Tuesday, October 1, 2013

Hacking on Puppet in Windows: Running Puppet Commands Against the Source Code

This is not something one would normally do, but this is here for future reference for me.

First of all ensure puppet, facter and hiera source codes are all checked out from git and have the same top level directory.

Then you take the environment.bat file that is shipped with the puppet installer (in the bin directory), copy it somewhere that you have in the PATH and you edit the first two lines to change the PL_BASEDIR to your top level directory for all of those previous items.

SET PL_BASEDIR=C:\code\puppetlabs
REM Avoid the nasty \..\ littering the paths.

Then copy the puppet.bat file over to the same directory as your modified environment.bat file and you are money.

Don’t have those files? No problem, I’ve created a Gist for that.

Puppet for the Win[dows]!

Posted On Tuesday, October 1, 2013 3:35 PM | Comments (1) | Filed Under [ Puppet ]

Monday, August 26, 2013

PuppetConf 2013

I recently attended PuppetConf 2013 (the 3rd annual event) and all I can say coming away from that is wow. It was an amazing event with quite a few amazing speakers and sessions out there. There were over 100 speakers and more than 1200 attendees. And we had live streaming for quite a few sessions and keynotes that had a huge attendance (I don’t remember the number off the top of my head). With seven tracks going at a time, not including demos or hands on labs, it was quite an event.

Disclaimer: I work for Puppet Labs but my opinions are my own.

The venue was awesome (San Francisco at the Fairmont Hotel) and I wished that I had a little more time outside of the conference to go exploring. Being there as an attendee, speaker, employee, and volunteer, I saw all sides of the conference. Everything was well prepared and I saw no hiccups from any side. Walking around at some of the events I could hear a buzz in the air about Windows and I happened to overhear a few folks mention the word chocolatey, which was definitely cool considering the majority of folks that are at PuppetConf are mainly Linux with some mixing of environments. I’m hoping to see that start to tip next year.

There were 4 talks on Windows and I was able to make it to almost all of them (5 talks if you consider my hands-on lab a talk). Only two of those were given by Puppet Labs folks, so it was nice to see some talks at all, considering there were none last year (I need to verify this).

My Hands On Lab – Getting Chocolatey (Windows Package Provider) with Puppet

Link: http://puppetconf2013b.sched.org/event/ddd309df1b03712cf1ba39224ad5e852#.Uht-a2RgbVM

The hands on lab did not go so well. Apologies to the attendees of the lab, but there was an issue with the virtual machine that I had provided. It was corrupted somewhere between copying it from my box to all of the USB sticks that we gave to lab attendees. Since it was only a 40 minute lab, we had to switch to a quick demo.

I did promise those folks that I would get them a functional hands on lab and here it is: https://github.com/chocolatey/puppet-chocolatey-handsonlab (You can take advantage of it as well for free!).

My Talk – Puppet On Windows: Now You’re Getting Chocolatey!

Link: http://puppetconf2013b.sched.org/event/ecfda2ef5c398eca29b00ce756cd405d#.Uht_7GRgbVM

My talk went very smoothly. It was almost night and day having given a failing lab a little over an hour prior to a talk that had quite a bit of energy in the room. I enjoyed the feedback coming from the audience and the session went (I felt) very well. Sessions were recorded so be on the lookout for that to show up soon.  Until then you can check out the slides here: http://www.slideshare.net/ferventcoder/puppet-on-windows-now-youre-getting-chocolatey-puppetconf2013 – and if you came to the session, I’d appreciate feedback on how I did and where I can improve. You can do that here: http://speakerrate.com/talks/25271-puppet-on-windows-now-you-re-getting-chocolatey

Posted On Monday, August 26, 2013 11:40 AM | Comments (2) | Filed Under [ Personal chocolatey ]

Tuesday, June 25, 2013

Career-Defining Moments

Fear holds us back from many things. A little fear is healthy, but don’t let it overwhelm you into missing opportunities.

In every career there is a moment when you can either step forward and define yourself, or sit down and regret it later. Why do we hold back: is it fear, constraints, family concerns, or that we simply can't do it?

I think in many cases it comes to the unknown, and we are good at fearing the unknown. Some people hold back because they are fearful of what they don’t know. Some hold back because they are fearful of learning new things. Some hold back simply because to take on a new challenge it means they have to give something else up. The phrase sometimes used is “It’s the devil you know versus the one you don’t.” That fear sometimes allows us to miss great opportunities.

In many people’s case it is the opportunity to go into business for yourself, to start something that never existed. Most hold back here for fear of failing. We’ve all heard the phrase “What would you do if you knew you couldn’t fail?”, which is intended to get people to think about the opportunities they might create. A better phrasing I heard recently on the Ruby Rogues podcast was “What would be worth doing even if you knew you were going to fail?” I think that wording suits the intent better. If you knew (or thought) going in that you were going to fail and you didn’t care, it would open you up to the possibility of paying more attention to the journey and not the outcome.

In my case it is a fear of acceptance. I am fearful that I may not learn what I need to learn or may not do a good enough job to be accepted. At the same time that fear drives me and makes me want to leap forward. Some folks would define this as “The Flinch”. I’m learning Ruby and Puppet right now. I have limited experience with both, limited to the degree it scares me some that I don’t know much about either. Okay, it scares me quite a bit!

Some people’s defining moment might be going to work for Microsoft. All of you who know me know that I am in love with automation, from low-tech to high-tech automation. So for me, my “mecca” is a little different in that regard.

Awhile back I sat down and defined where I wanted my career to go and it had to do more with DevOps, defined as applying developer practices to system administration operations (I could not find this definition when I searched). It’s an area that interests me and why I really want to expand chocolatey into something more awesome. I want to see Windows be as automatable and awesome as other operating systems that are out there.

Back to the career-defining moment. Sometimes these moments only come once in a lifetime. The key is to recognize when you are in one of these moments and step back to evaluate it before choosing to dive in head first. So I am about to embark on what I define as one of these “moments.”  On July 1st I will be joining Puppet Labs and working to help make the Windows automation experience rock solid! I’m both scared and excited about the opportunity!

Posted On Tuesday, June 25, 2013 4:49 PM | Comments (3) | Filed Under [ Personal ]

Saturday, June 1, 2013

Chocolatey official public feed now has 1,000 stable packages


Chocolatey has reached a milestone at 1K unique stable packages! When I started chocolatey a little over two years ago I didn't know there would be such a tremendous community uptake. I am blessed that you have found value in chocolatey and have contributed code, packages, bugs and ideas to making chocolatey better.

To celebrate this we should look at who contributed the package that put us over the top. It was Justin Dearing with SqlKerberosConfigMgr (http://chocolatey.org/packages/SqlKerberosConfigMgr). And I'm giving Justin a $50 gift card for Amazon as a small token of my appreciation. It's not much but we appreciate the contributions! This was unannounced because we want to focus on quality, not quantity.

Now, while this is a significant milestone, we are not very far in the bigger scheme of offerings for Windows. There is no hurry to get there, we prefer quality packages over quantity of packages. We will eventually grow much bigger and as we add additional sources, it increases the amount of packages we can offer.

Thanks so much to all of you for all of your work, we wouldn't be where we are today without the community!

Posted On Saturday, June 1, 2013 11:10 AM | Comments (0) | Filed Under [ chocolatey ]

Thursday, January 3, 2013

Chocolatey Automatic Packages

I updated three packages this morning. I didn’t even notice until the tweets came in from @chocolateynuget.

How is this possible? It’s simple. I love automation. I built chocolatey to take advantage of automation, so it makes sense that we could automate checking for package updates and publishing those updated packages. These are known as automatic packages. Automatic packages are what set Chocolatey apart from other package managers, and I daresay they could make chocolatey one of the most up-to-date package managers on Windows.

Automatic Packages You Say?

You’ve followed the instructions for creating a GitHub (or really any source control) repository with your packages. All you need to do now is introduce two new utilities to your personal library: Ketarin and Chocolatey Package Updater (chocopkgup for short).


Ketarin is a small application which automatically updates setup packages. As opposed to other tools, Ketarin is not meant to keep your system up-to-date, but rather maintain a compilation of all important setup packages which can be burned to disc or put on a USB stick.

There are some good articles out there that talk about how to create jobs with Ketarin so I am not going to go into that.

Ketarin does a fantastic job of checking sites for updates and has hooks to run custom commands before and after it has downloaded the latest version of an app/tool.

Chocolatey Package Updater

Chocolatey Package Updater aka chocopkgup takes the information given out from Ketarin about a tool/app update and translates it into a chocolatey package that it builds and pushes to chocolatey.org. It does this so you don't even have to think about updating a package or keeping it up to date. It just happens. Automatically, in the background, and even faster than you could make it happen. It's almost as if you were the application/tool author.

How To

Prerequisites And Setup:

  1. Optional (strongly recommended) - Ensure you are using a source control repository and file system for keeping packages. A good example is here.
  2. Optional (strongly recommended) - Make sure you have installed the chocolatey package templates. If you’ve installed the chocolatey templates (ReadMe has instructions), then all you need to do is take a look at the chocolateyauto and chocolateyauto3. You will note this looks almost exactly like the regular chocolatey template, except this has some specially named token values.
    #Items that could be replaced based on what you call chocopkgup.exe with
    #{{PackageName}} - Package Name (should be same as nuspec file and folder) | /p
    #{{PackageVersion}} - The updated version | /v
    #{{DownloadUrl}} - The url for the native file | /u
    #{{PackageFilePath}} - Downloaded file if including it in package | /pp
    #{{PackageGuid}} - This will be used later | /pg
    #{{DownloadUrlx64}} - The 64bit url for the native file | /u64
  3. These are the tokens that chocopkgup will replace when it generates an instance of a package.
  4. Install chocopkgup (which will install ketarin and nuget.commandline). cinst chocolateypackageupdater.
  5. Check the config in C:\tools\ChocolateyPackageUpdater\chocopkgup.exe.config  (or chocolatey_bin_root/ChocolateyPackageUpdater). The PackagesFolder key should point to where your repository is located.
  6. Create a scheduled task (in windows). This is the command (edit the path to cmd.exe accordingly): C:\Windows\System32\cmd.exe /c c:\tools\chocolateypackageupdater\ketarinupdate.cmd
  7. Choose a schedule for the task. I run mine once a day but you can set it to run more often. Choose a time when the computer is not that busy.
  8. Save this Ketarin template somewhere: https://github.com/ferventcoder/chocolateyautomaticpackages/blob/master/_template/KetarinChocolateyTemplate.xml
  9. Open Ketarin. Choose File –> Settings.
  10. On the General Tab we are going to add the Version Column for all jobs. Click Add…, then put Version in Column name and {version} in Column value. 
       Create a Custom Field (Ketarin)
  11. Click [OK]. This should add it to the list of Custom Columns.
  12. Click on the Commands Tab and set Edit command for event to “Before updating an application”. 
    Ketarin settings - Commands Tab - Before updating an application
  13. Add the following text:
    chocopkgup /p {appname} /v {version} /u "{preupdate-url}" /u64 "{url64}" /pp "{file}" 
    REM /disablepush
  14. Check the bottom of this section to be sure it is set to Command
    Command selected
  15. Click Okay.
  16. Note the commented out /disablepush. This is so you can create a few packages and test that everything is working well before actually pushing those packages up to chocolatey. You may want to add that switch to the main command above it.

This gets Ketarin all set up with a global command for all packages we create. If you want to use Ketarin outside of chocolatey, all you need to do is remove the global setting for Before updating an application and instead apply it to every job that pertains to chocolatey update.

Create an Automatic Package:

Preferably you are taking an existing package that you have tested and converting it to an automatic package.

  1. Open Ketarin. Choose File –> Import… 
  2. Choose the template you just saved earlier (KetarinChocolateyTemplate.xml).
  3. Answer the questions. This will create a new job for Ketarin to check.
  4. One important thing to keep in mind is that the Application name needs to match the name of the package folder exactly.
  5. Right click on that new job and select Edit. Take a look at the following:
    Ketarin Job Notes
  6. Set the URL appropriately. I would shy away from FileHippo for now; the URL has been known to change, and if you upload that as the download url in a chocolatey package, it won’t work very well.
  7. Click on Variables on the right of URL.
  8. On the left side you should see a variable for version and one for url64. Click on version.
  9. Choose the appropriate method for you. Here I’ve chosen Content from URL (start/end).
  10. Enter the URL for versioning information.
    Ketarin Variable Details
  11. In the contents itself, highlight enough good information before a version to be able to select it uniquely during updates (but not so much it doesn’t work every time as the page changes). Click on Use selection as start.
  12. Now observe that it didn’t jump back too far.
  13. Do the same with the ending part, keeping in mind that this side doesn’t need to be too much because it is found AFTER the start. Once selected click on Use selection as end.
  14. It should look somewhat similar to what is presented in the picture above.
  15. If you have a 64bit Url you want to get, do the same for the url64 variable.
  16. When all of this is good, click OK.
  17. Click OK again.

Testing Ketarin/ChocoPkgUp:

  1. We need to get a good idea of whether this will work or not.
  2. We’ve set /disablepush in Ketarin global so that it only goes as far as creating packages.
  3. Navigate to C:\ProgramData\chocolateypackageupdater.
  4. Open Ketarin, find your job, and right click and select Update. If everything is set up correctly, in moments you will have a chocolatey package in the chocopkgup folder. 
  5. Inspect the resulting chocolatey package(s) for any issues.
  6. You should also test that the scheduled task works appropriately.


  • Ketarin comes with a logging facility so you can see what it is doing. It’s under View –> Show Log.
  • In the top level folder for chocopkgup (in program data), we log what we receive from Ketarin as well and the process of putting together a package.
  • The name of the application in Ketarin must exactly match that of the folder in the automatic packages folder.
  • Every once in awhile you want to look in Ketarin to see what jobs might be failing. Then figure out why.
  • Every once in awhile you will want to inspect the chocopkgup folder to see if there are any packages that did not make it up for some reason or another and then upload them.


Automatic chocolatey packages are a great way to grow the number of packages you maintain without any significant jump in maintenance cost. I’ve been working with and using automatic packages for over six months. Is it perfect? No, it has issues from time to time (getting a good version read or actually publishing the packages in some rare cases). But it works pretty well. Over the coming months more features will be added to chocopkgup, such as being able to run its own PowerShell script (for downloading components to include in the package, etc.) that would not end up in the final chocolatey package.

With full automation, instead of having packages that are out of date or no longer valid, you run a small chance that something changed in the install script or that something no longer works. The chances of this are much, much lower than having packages that are out of date or no longer valid.

It takes just a few minutes longer when creating packages to convert them to automatic packages but well worth it when you see that you are keeping applications and tools up to date on chocolatey without any additional effort on your part. Automatic packages are awesome!

Posted On Thursday, January 3, 2013 1:15 AM | Comments (0) | Filed Under [ ApplicationsToysOther gems NuGet chocolatey ]

Wednesday, December 19, 2012

this.Log– Source, NuGet Package & Performance

Recently I mentioned this.Log. Given the number of folks that were interested in this.Log, I decided to pull the source out and make a NuGet package (well, several packages).


The source is now located at https://github.com/ferventcoder/this.log. Please feel free to send pull requests (with tests of course). When you clone it, if you open Visual Studio prior to running build.bat, you will notice build errors. Don’t send me a pull request fixing this; I want it to work the way it does now. Use build.bat appropriately.

To try to cut down on the version number being listed everywhere, I created a SharedAssembly.cs (and a SharedAssembly.vb for the VB.NET samples). That helped, but it didn’t solve the problem where it was in the nuspecs as dependencies. So I took it a step further and created a file named VERSION. When you run the build, it updates all the files that contain version information. Having one place to handle the version is nice.
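The build script itself isn't shown here, but the single-source version idea is simple enough to sketch. The `stamp_version` helper below is hypothetical, not the project's actual code; it just illustrates rewriting the `<version>` element of a nuspec from one VERSION value at build time:

```python
import re

def stamp_version(nuspec_text, version):
    # Replace the contents of the <version> element with the build version.
    return re.sub(r"<version>[^<]*</version>",
                  "<version>{0}</version>".format(version), nuspec_text)

nuspec = "<package><metadata><version>0.0.0</version></metadata></package>"
stamped = stamp_version(nuspec, "1.0.1")
assert "<version>1.0.1</version>" in stamped
```

A real build would read the VERSION file once and run something like this over every file that carries version information (AssemblyInfo, nuspecs, etc.).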


When moving this.Log to a NuGet package (or in this case 9 NuGet packages), I was able to play with some features of NuGet I had not used previously: symbol servers and packing a csproj. With csproj packing, I was able to quickly (well, mostly) set up the build to package up every project into NuGet packages.

All packages can be found by searching for this.log on NuGet.org.

NOTE: If you’ve installed any of these prior to this post, you will want to uninstall and reinstall them (there was a particular issue with the Rhino Mocks version). I’ve fixed and updated quite a bit on them from version to version.


Performance testing with log4net showed this only has an overhead of 42 ticks tested over 100,000 iterations. That’s a pretty good start given that it has a reflection hit on every call.

Posted On Wednesday, December 19, 2012 11:21 PM | Comments (0) | Filed Under [ Code NuGet ]

Saturday, December 15, 2012

Introducing this.Log

One of my favorite creations over the past year has been this.Log(). It works everywhere including static methods and in razor views. Everything about how to create it and set it up is in this gist.

How it looks

public class SomeClass {
  public void SomeMethod() {
    this.Log().Info(() => "Here is a log message with params which can be in Razor Views as well: '{0}'".FormatWith(typeof(SomeClass).Name));

    this.Log().Debug("I don't have to be delayed execution or have parameters either");
  }

  public static void StaticMethod() {
    "SomeClass".Log().Error("This is crazy, right?!");
  }
}

Why It’s Awesome

  • It does no logging if you don’t have a logging engine set up.
  • It works everywhere in your code base (where you can write C#). This means in your razor views as well!
  • It uses deferred execution, which means you don’t have to mock it to use it with testing (your tests won’t fail on logging lines).
  • You can mock it easily and use that as a means of testing.
  • You have no references to your actual logging engine anywhere in your codebase, so swapping it out (or upgrading) becomes a localized event to one class where you provide the adapter.
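The deferred-execution point is the subtle one. A minimal sketch (in Python rather than C#, purely to illustrate the concept) of why passing a lambda instead of a built string matters:

```python
class NullLog:
    """No logging engine configured: messages are never evaluated."""
    def info(self, message):
        pass

class ConsoleLog:
    """A real engine: evaluates deferred messages only when it logs them."""
    def info(self, message):
        # message may be a zero-arg callable (deferred) or a plain string
        text = message() if callable(message) else message
        print(text)

evaluations = []
def expensive_message():
    evaluations.append(1)          # track whether the message was ever built
    return "result of an expensive format call"

NullLog().info(expensive_message)     # deferred: never evaluated
assert evaluations == []
ConsoleLog().info(expensive_message)  # evaluated exactly once, when logged
assert evaluations == [1]
```

With no engine set up, the expensive formatting never runs, and tests never fail on logging lines because nothing is evaluated.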

Some Internals

This uses the awesome static logging gateway that JP Boodhoo showed me a long time ago at a developer bootcamp, except it takes the concept further. One thing that always bothered me about the static logging gateway is that it would construct an object EVERY time you called the logger if you were using anything but log4net or NLog. Internally it likely continued to reuse the same object, but at the codebase level it appeared as though that was not so.

/// <summary>
/// Logger type initialization
/// </summary>
public static class Log
{
    private static Type _logType = typeof(NullLog);
    private static ILog _logger;

    /// <summary>
    /// Sets up logging to be with a certain type
    /// </summary>
    /// <typeparam name="T">The type of ILog for the application to use</typeparam>
    public static void InitializeWith<T>() where T : ILog, new()
    {
        _logType = typeof(T);
    }

    /// <summary>
    /// Sets up logging to be with a certain instance. The other method is preferred.
    /// </summary>
    /// <param name="loggerType">Type of the logger.</param>
    /// <remarks>This is mostly geared towards testing</remarks>
    public static void InitializeWith(ILog loggerType)
    {
        _logType = loggerType.GetType();
        _logger = loggerType;
    }

    /// <summary>
    /// Initializes a new instance of a logger for an object.
    /// This should be done only once per object name.
    /// </summary>
    /// <param name="objectName">Name of the object.</param>
    /// <returns>ILog instance for an object if log type has been initialized; otherwise null</returns>
    public static ILog GetLoggerFor(string objectName)
    {
        var logger = _logger;

        if (_logger == null)
        {
            logger = Activator.CreateInstance(_logType) as ILog;
            if (logger != null) logger.InitializeFor(objectName);
        }

        return logger;
    }
}

You can see that when it calls InitializeFor, that’s when you get something like the following in the actual implemented method:

_logger = LogManager.GetLogger(loggerName);

So we take the idea a step further by implementing the following in the root namespace of our project:

/// <summary>
/// Extensions to help make logging awesome
/// </summary>
public static class LogExtensions
{
    /// <summary>
    /// Concurrent dictionary that ensures only one instance of a logger for a type.
    /// </summary>
    private static readonly Lazy<ConcurrentDictionary<string, ILog>> _dictionary =
        new Lazy<ConcurrentDictionary<string, ILog>>(() => new ConcurrentDictionary<string, ILog>());

    /// <summary>
    /// Gets the logger for <see cref="T"/>.
    /// </summary>
    /// <typeparam name="T"></typeparam>
    /// <param name="type">The type to get the logger for.</param>
    /// <returns>Instance of a logger for the object.</returns>
    public static ILog Log<T>(this T type)
    {
        string objectName = typeof(T).FullName;
        return Log(objectName);
    }

    /// <summary>
    /// Gets the logger for the specified object name.
    /// </summary>
    /// <param name="objectName">Either use the fully qualified object name or the short. If used with Log&lt;T&gt;() you must use the fully qualified object name</param>
    /// <returns>Instance of a logger for the object.</returns>
    public static ILog Log(this string objectName)
    {
        return _dictionary.Value.GetOrAdd(objectName, Infrastructure.Logging.Log.GetLoggerFor);
    }
}

You can see I’m using a concurrent dictionary which really speeds up the operation of going and getting a logger. I get the initial performance hit the first time I add the object, but from there it’s really fast. I do take a hit with a reflection call every time, but this is acceptable for me since I’ve been doing that with most logging engines for awhile.
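The GetOrAdd pattern is easy to model in any language. Here is a minimal Python sketch (an illustration, not the C# code above) of the one-instance-per-name cache that makes repeated lookups cheap:

```python
import threading

# Cache of loggers, one per object name (a stand-in for ConcurrentDictionary.GetOrAdd).
_loggers = {}
_lock = threading.Lock()

def get_logger_for(object_name, create):
    """Return the logger cached for object_name, creating it once on first use."""
    logger = _loggers.get(object_name)
    if logger is None:
        with _lock:
            # setdefault keeps the first instance if another thread won the race
            logger = _loggers.setdefault(object_name, create(object_name))
    return logger

created = []
def make_logger(name):
    created.append(name)
    return {"name": name}

first = get_logger_for("App.SomeClass", make_logger)
second = get_logger_for("App.SomeClass", make_logger)
assert first is second               # same cached instance every call
assert created == ["App.SomeClass"]  # factory ran only once
```

The creation cost is paid once per name; every later call is a plain dictionary lookup, which mirrors the "initial performance hit the first time" behavior described above.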


If you are interested in the details, see this gist.

Extensions are awesome if used sparingly. Is this.Log perfect? Probably not, but it does have a lot of benefits in use. Feel free to take my work and make it better. Find a way to get me away from the reflection call every time. I’ve been using it for almost a year now and have improved it a little here and there.

If there is enough interest, I can create a NuGet package with this as well.

Posted On Saturday, December 15, 2012 9:22 AM | Comments (2) |

Friday, December 14, 2012

Super D to the B to the A – AKA Script for reducing the size of a database

The following is a script that I used to help me clean up a database and reduce the size of it from 95MB down to 3MB so we could use it for a development backup. I will note that we also removed some of the data out. I shared this with a friend recently and he used this to go from 70GB to 7GB!

UPDATE: Special Note

Please don’t run this against something that is live or performance critical. You want to do this where you are the only person connected to the database, like a restored backup of the critical database. Doing it against something live will most definitely cause issues. I can in no way be responsible for the use of this script. You should understand what you are doing before you execute these scripts.

So what does it do?

  • It gives you a report of what tables are taking up the most space.
  • It allows you to specify those tables for cleaning.
  • Gives you that same report of space used up by tables after the clean.
  • It rebuilds and reorganizes all indexes with reports before and after.
  • It runs shrink file on the physical files (potentially unnecessary due to the next thing it does, but hey, couldn’t hurt right?!).
  • It runs shrink database on the database.

The Script

Provided it shows up correctly, here is the gist:

/* Scripts to remove data you don't need here */

/* Now let's clean that DB up! */

DECLARE @DBName VarChar(25)
SET @DBName = 'DBName'

/* Start with DBCC CLEANTABLE on the biggest offenders */

PRINT 'Looking at the largest tables in the database.'
SELECT
 t.NAME AS TableName,
 i.name AS indexName,
 SUM(p.rows) AS RowCounts,
 SUM(a.total_pages) AS TotalPages, 
 SUM(a.used_pages) AS UsedPages, 
 SUM(a.data_pages) AS DataPages,
 (SUM(a.total_pages) * 8) / 1024 AS TotalSpaceMB, 
 (SUM(a.used_pages) * 8) / 1024 AS UsedSpaceMB, 
 (SUM(a.data_pages) * 8) / 1024 AS DataSpaceMB
FROM sys.tables t
INNER JOIN sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN sys.allocation_units a ON p.partition_id = a.container_id
WHERE
 i.OBJECT_ID > 255 AND  
 i.index_id <= 1
GROUP BY
 t.NAME, i.object_id, i.index_id, i.name 

PRINT 'Cleaning the biggest offenders'
DBCC CLEANTABLE(@DBName, 'dbo.Table1')
DBCC CLEANTABLE(@DBName, 'dbo.Table2')

SELECT
 t.NAME AS TableName,
 i.name AS indexName,
 SUM(p.rows) AS RowCounts,
 SUM(a.total_pages) AS TotalPages, 
 SUM(a.used_pages) AS UsedPages, 
 SUM(a.data_pages) AS DataPages,
 (SUM(a.total_pages) * 8) / 1024 AS TotalSpaceMB, 
 (SUM(a.used_pages) * 8) / 1024 AS UsedSpaceMB, 
 (SUM(a.data_pages) * 8) / 1024 AS DataSpaceMB
FROM sys.tables t
INNER JOIN sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN sys.allocation_units a ON p.partition_id = a.container_id
WHERE
 i.OBJECT_ID > 255 AND  
 i.index_id <= 1
GROUP BY
 t.NAME, i.object_id, i.index_id, i.name 

/* Fix the Index Fragmentation and reduce the number of pages you are using (Let's rebuild and reorg those indexes) */

PRINT 'Selecting Index Fragmentation in ' + @DBName + '.'
SELECT DPS.avg_fragmentation_in_percent
 ,SI.NAME AS IndexName
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) DPS --N'LIMITED') DPS
INNER JOIN sysindexes SI ON DPS.object_id = SI.id AND DPS.index_id = SI.indid
ORDER BY DPS.avg_fragmentation_in_percent DESC

PRINT 'Rebuilding indexes on every table.'
EXEC sp_MSforeachtable @command1="print 'Rebuilding indexes for ?' ALTER INDEX ALL ON ? REBUILD WITH (FILLFACTOR = 90)"
PRINT 'Reorganizing indexes on every table.'
EXEC sp_MSforeachtable @command1="print 'Reorganizing indexes for ?' ALTER INDEX ALL ON ? REORGANIZE"
--EXEC sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?', ' ', 80)"
PRINT 'Updating statistics'
EXEC sp_updatestats

SELECT DPS.avg_fragmentation_in_percent
 ,SI.NAME AS IndexName
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) DPS --N'LIMITED') DPS
INNER JOIN sysindexes SI ON DPS.object_id = SI.id AND DPS.index_id = SI.indid
ORDER BY DPS.avg_fragmentation_in_percent DESC

-- Now to really compact it down. It's likely that SHRINKDATABASE will do the work of SHRINKFILE, rendering it unnecessary, but it can't hurt, right? Am I right?!

GO

DECLARE @DBName VarChar(25), @DBFileName VarChar(25), @DBLogFileName VarChar(25)
SET @DBName = 'DBName'
SET @DBFileName = @DBName
SET @DBLogFileName = @DBFileName + '_Log'

-- Shrink the data and log files, then the database itself
DBCC SHRINKFILE (@DBFileName)
DBCC SHRINKFILE (@DBLogFileName)
DBCC SHRINKDATABASE (@DBName)

Here are some of the references in the gist:

Posted On Friday, December 14, 2012 9:43 AM | Comments (1) | Filed Under [ Code ]

Wednesday, December 12, 2012

Refresh Database–Speed up Your Development Cycles

Refresh database is a workflow that allows you to develop with a migrations framework but deploy with SQL files. It's more than that: it allows you to rapidly make changes to your environment and sync up with other teammates. When I talk about environment, I mean your local development environment: your code base and the local database back end you are hitting.

Refresh database comes in two flavors, one for NHibernate and one for Entity Framework. I’m going to show you an example of the one for Entity Framework, which you can find in the repository for rh-ef on github.  One note before we get started: This could work with any migrations framework that will output SQL files.

What is this? Why should I use this?

How long do you spend updating source code and then getting your database up to snuff afterward so you can keep moving forward quickly? Do you work with teammates? Do you have multiple workstations that you might work from and want to quickly sync up your work?

It’s a pain most of us don’t see and an idea that was originally incubated by Dru Sellers. He wanted a fast way of keeping his local stuff up to date right from Visual Studio. Out of that was born Refresh Database. We are talking a simple right click and debug to a synced up database.

Others have talked in the past about how you want to use the same migration algorithm and test it all the way up to production. Refresh DB allows you to test that migration from a local development environment many times a day. So by the time you hand over the SQL files for production (or use RoundhousE), there is no guesswork about whether it is going to work. You have the security of knowing you are good to go.

It’s definitely something that can really speed up your team so you never hear “I got latest and now I’m trying to sync up all the changes to the database.” This should be easy. This should be automatic.

You should never again hear “I made some domain changes but now I’m working to get them into the database.” This should be easy. This should be automatic.

Whether you decide to look further into this or not, it doesn't matter to me. It just means my teams will get to market and keep updated faster than yours (given the same technologies ;)).

How does this work?

This is the simple part. Convincing you to look at it in the first place is the hard part. I have put together a short video to show you exactly how it works. You will see that it is super simple.


Refresh Database has been around for over two years. It's definitely something that has paid for itself time and again. It's something you might consider looking at if you have never heard of it.

If you don’t do something with migrations and source control for your database yet, please start now. This will save you countless hours in the future. I’ve walked into more than one company that was hurting in the area of database development b/c they didn’t treat the database scripts as source code in the same way that they did the rest of the code. It’s a must anymore. I also see teams doing shared development database development. This is a huge no no (except in certain considerations) due to the amount of lost time it causes. That however, is a discussion for another day.

Posted On Wednesday, December 12, 2012 5:08 PM | Comments (2) | Filed Under [ Code RoundhousE chucknorris NuGet ]

HowTo: Use .NET Code on a Network Share From Windows

If you use VMWare/VirtualPC and you want to offload your source code repositories to your host OS and code from it inside the VM, you need to do a few things to fully trust the share.

I’ve found that I keep heading out and searching on this every time I need it so I thought I would write it down this time to save myself the trouble next time.

CasPol Changes

Save the following as caspol.bat:

%WINDIR%\Microsoft.NET\Framework\v2.0.50727\caspol -q -machine -ag 1.2 -url file://e:/* FullTrust
%WINDIR%\Microsoft.NET\Framework\v4.0.30319\caspol -q -machine -ag 1.2 -url file://e:/* FullTrust
%WINDIR%\Microsoft.NET\Framework64\v2.0.50727\caspol -q -machine -ag 1.2 -url file://e:/* FullTrust
%WINDIR%\Microsoft.NET\Framework64\v4.0.30319\caspol -q -machine -ag 1.2 -url file://e:/* FullTrust

%WINDIR%\Microsoft.NET\Framework\v2.0.50727\caspol -q -machine -ag 1.2 -url file://\\vmware-host\* FullTrust
%WINDIR%\Microsoft.NET\Framework\v4.0.30319\caspol -q -machine -ag 1.2 -url file://\\vmware-host\* FullTrust
%WINDIR%\Microsoft.NET\Framework64\v2.0.50727\caspol -q -machine -ag 1.2 -url file://\\vmware-host\* FullTrust
%WINDIR%\Microsoft.NET\Framework64\v4.0.30319\caspol -q -machine -ag 1.2 -url file://\\vmware-host\* FullTrust

Make sure you replace the file locations appropriately. Then run it as an administrator.

This will do the first part of allowing your code to execute without security exceptions. Credit to Chris Sells for the most comprehensive article on this: http://www.sellsbrothers.com/Posts/Details/1519 

Make VMWare Share Part of the Local Intranet

This is one I’ve found to get stuff to build that I didn’t find anywhere else. Even after running caspol I still couldn’t run executables on the share. That is, until I made the share part of the Local Intranet.

  • Open Internet Explorer, then open Internet Options.
  • Find the Security Tab
  • Open Local Intranet by selecting Local Intranet and pushing the Sites button
  • Click Advanced
  • Now add file://vmware-host to the list of sites
  • Click Close when completed
  • There is a picture below for reference

Setting Local Intranet


This will allow executables to start working, all except the ones built and run from Visual Studio.

.NET Built Executables/Services No Longer Work

It may be a while before you run into this one. Say you have a console application you are building. You will notice that once you move over to the share, you start getting errors when the runtime loads your assemblies. What you need to do is add a small configuration value to the config files.

Add the following to your config files, inside the <runtime> element:

  <configuration>
    <runtime>
      <loadFromRemoteSources enabled="true" />
    </runtime>
  </configuration>

This grants assemblies loaded from remote sources (like a network share) full trust so they can be loaded into memory; otherwise they will not run from a network share.

Caveats to Network Share

Caveats to think about when developing against a share:

  • Visual Studio has trouble noticing updates to files if you update them outside of Visual Studio.
  • If you run the local built-in web server for web development, don't expect it to catch file updates automatically.
  • If you do any kind of restoring a database from a backup, you may want to consider copying that database to a local drive first.

Posted On Wednesday, December 12, 2012 1:08 AM | Comments (0) | Filed Under [ Code ]

Wednesday, September 12, 2012

Chocolatey featured on LifeHacker!

Chocolatey was just featured on LifeHacker! http://lifehacker.com/5942417/chocolatey-brings-lightning-quick-linux+style-package-management-to-windows

I was ecstatic to hear about this. Of course, now I need to write an actual comparison between Chocolatey and other Windows package managers.

Comments on Reddit: http://www.reddit.com/r/commandline/comments/zqnj6/chocolatey_brings_lightning_quick_linuxstyle/

Posted On Wednesday, September 12, 2012 9:30 AM | Comments (0) | Filed Under [ Personal ApplicationsToysOther chocolatey ]

Wednesday, August 15, 2012

How To: Improve Skype Quality

I always forget this until I need it the next time, but there is a great post that talks about how to improve your Skype quality: http://pauloflaherty.com/2008/03/26/improve-skype-quality-with-these-tips/

1. In Skype, go to Tools > Options > Connection. Select the option to use ports 80 and 443. In the “Incoming Connections” box you can choose any port between 1024 and 65535.

2. Reconfirm that your firewall is correctly configured. Follow the simple visual guide here:


3. Quit any file sharing applications or high-bandwidth usage applications.

4. For more detailed security setup on a network:http://www.skype.com/security/guide-for-network-admins.pdf (does not work anymore)

5. If these suggestions do not improve call quality, follow these steps:

* Quit Skype

* Locate the shared.xml file found in
C:\Documents and settings\Your Windows Username\Application data\Skype\shared.xml

* Delete the file called shared.xml

* Restart Skype ( shared.xml will be recreated )
Note: Showing hidden files and folders must be turned on. To enable it, navigate to:

In XP – My Computer > Tools (Menu) > Folder Options > View.

In Windows 7 – Open Explorer >Organize (Menu)> Folder And Search Options > View.

Once there, please make sure that the option “Show Hidden Files and Folders” is enabled.

6. Disable Quality of Service packet scheduling. Go to Start -> Control Panel -> Network Connections.  Right click on the connection you are using. Select Properties. Untick the “QoS Packet Scheduler” option.

I do steps 1-3, 5, and 6. For step 2, please make sure your firewall is port forwarding your skype port to the proper computer. That is where you get the best performance.

Step 5 is a maintenance task that you will find yourself doing from time to time when things start to slow down. Instead of deleting shared.xml, I just append the date to the end of the file name in YYYYMMDD format.

Hope this helps someone that is trying to improve conversations with Skype.

Posted On Wednesday, August 15, 2012 3:08 PM | Comments (0) | Filed Under [ ApplicationsToysOther ]

Entity Framework and Stored Procedures Issue - Unable to determine a valid ordering for dependent operations. Dependencies may exist due to foreign key constraints, model requirements, or store-generated values

When working with EF Database First (don’t ask) and mapping stored procedures you may run into this issue.

Julie Lerman has written a great story on how to do the mappings and has some code to download to inspect how to set up the mappings for insert, update, and delete appropriately for use with stored procedures (http://msdn.microsoft.com/en-us/data/gg699321.aspx).

You may have searched everywhere else and have not been able to find a satisfactory answer. In some cases your model has a circular dependency and there are multiple search results that will help you with that out there.

In my case the problem came down to using a “Manage” type sproc that would handle both insert and update. As you can imagine, you would pass the primary key field to the sproc no matter what.

Entity Framework believes this is an association (possibly to a foreign key) so it gives the error above. When you convert it to using separate Insert and Update stored procedures where the insert does not pass in the PK, everything works appropriately.

So if you are getting the above error, make sure you are not mapping the PK in the insert procedure.
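To make that concrete, here is a hedged T-SQL sketch of the split (the table, column, and procedure names are hypothetical, not from the original model):

```sql
-- Insert procedure: the PK is NOT a parameter; it is generated
-- by the identity column and returned so EF can pick up the
-- store-generated value.
CREATE PROCEDURE dbo.Customer_Insert
  @Name VARCHAR(50)
AS
BEGIN
  INSERT INTO dbo.Customer (Name) VALUES (@Name);
  SELECT CAST(SCOPE_IDENTITY() AS INT) AS CustomerId;
END
GO

-- Update procedure: the PK appears only here, to locate the row.
CREATE PROCEDURE dbo.Customer_Update
  @CustomerId INT,
  @Name VARCHAR(50)
AS
BEGIN
  UPDATE dbo.Customer SET Name = @Name WHERE CustomerId = @CustomerId;
END
GO
```

Map these (plus a delete procedure) in the mapping details, leaving the PK parameter out of the insert mapping entirely.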

Hope this helps some poor soul who falls upon this issue.

Posted On Wednesday, August 15, 2012 1:36 PM | Comments (0) |

Saturday, July 28, 2012

Remote Work: Placeshift and Stay Highly Collaborative Part 2–Focus on YOU

Companies want to hire the type of person who is cut out to be a remote worker, because that same type of person excels at their work, and that is what companies are always looking for.

In the first part of this series we talked about what remote work is and how a business benefits from remote workers. In this article we are going to focus on you. What does it take to be a remote worker? Is remote work possible in your job? How do you work from home when there are distractions?

NOTE: The following is not a definitive list and not true for every situation. Some of this represents what works well for me in my experiences over the last few years.

Can YOU be a Remote Worker?

Are you the remote worker type? This is always an interesting question. I don’t believe this is a type you are either born into or not; these are behaviors you can learn. I think this is something you can become if you just know how. So what are the key behaviors of a remote worker? Surprisingly, they are strikingly similar to what companies prefer in their best workforce:

  • Self-Sufficient
  • Self-starting
  • Disciplined / Focused
  • Motivated

With that in mind, it’s not a huge leap that companies would actually want to hire the type of worker who is cut out to be a remote worker. “Wait a minute, didn’t you say this was learned?” Yes, many of these are learned behaviors. Let’s take a look at each in more detail.

Self-Sufficient

You must know how to do your job well enough that you don’t need someone helping you through the work (until you can stand on your own). This doesn’t mean you need zero guidance; we all need help from time to time. I won’t put a number on what determines self-sufficiency; most of us know whether we truly know our jobs. If you are not sure, ask your peers whether they feel you are self-sufficient.

If you are paying attention, you may have just realized that this means someone “junior” (or just starting out in a new industry) should not be a remote worker. Why? To be able to work effectively without high amounts of guidance usually comes when you have good knowledge of how to do your job and how to do it well.

Folks new to an industry should probably shy away from trying remote work until they are more comfortable in their roles, responsibilities and simply put, skills and abilities. The biggest reason a “junior” worker should shy away from remote work is that the most important career objective for them is to learn and that is harder to do when they are remote. As a junior worker you should want to pair with others to learn how to do things better. The best type of learning is always face to face. It’s hard enough to teach someone face to face, doing it remotely compounds all of the issues that come along with paying attention to the non-verbal cues of whether someone is catching on or not.

Self-Starting

When you are not physically around others working towards the same goals, it can sometimes be unclear what you should be doing. Keep in mind it is not the company’s job to make sure you have something to do. It is your responsibility. To be actively engaged, you need to take an active role in making sure you have work to do. Being a self-starter is of high value to a company because they know you are not just going to sit around and wait for something to do. You are going to ask when you need something to do. That means you are producing something for the company to offset your cost to the company. You want to make the company more money than the opportunity cost of employing you. This makes you valuable to the company, which is especially important when you are not physically present.

Disciplined / Focused

Discipline and focus mean you can work when there are distractions around you. Does that mean you don’t work to eliminate distractions? No; in fact, eliminating distractions is extremely important for many of us. Being able to concentrate amid distractions can be very difficult and stressful in the long term, so I would highly recommend removing them. How do you do that? We’ll get to that when we talk about how to help your home support remote work.

Discipline is a learned behavior. How do I know? Like many folks I know, I’ve been in the military. I have seen first hand how people become disciplined. It’s really a matter of habit. You do the same thing over and over until it just becomes a habit. So if you want to be disciplined you just practice discipline for some amount of time (some say 21 days straight) and from then on it will become a habit.

Focus is a little harder to achieve. I believe with discipline comes focus. If you are distracted by the twitters and you have the discipline to only turn it on at breaks, you can achieve focus on what you are working on when you don’t have it on. You gain focus by removing distractions until all there is in front of you is what you need to accomplish.

Motivated

I left this one for last because motivation is a weird animal. People are motivated for different reasons. It’s really about learning what motivates you. To be motivated in this sense really just means to accomplish the goals of the company.

When I was in high school I remember listening to a Tony Robbins lesson on how you can categorize folks into two types of motivation: positive and negative. I can’t find the source of this, but it boils down to people being motivated in two ways: by pain and by pleasure. You cannot motivate a negatively motivated person with positive reinforcement, and you would offend a positively motivated person with negative reinforcement. But I digress. The point I’m trying to make here is to find what motivates you and adapt it if needed so that it aligns with the goals of the company you work with.

I believe that motivation can be a learned behavior with the proper conditioning. At the root of all of everything about being a remote worker is motivation. You need to be motivated to succeed at remote work. You need to be motivated to try remote work. You need to be motivated to possibly pursue a new home with a good setup for working remotely. You need to be motivated to learn new ways to enhance your communication with those surrounding you at work.


Now that we’ve talked about you, let me say that some or all of these qualities will lend well to remote work. Does that mean that this is true in all situations? Absolutely not. Each situation is unique and what lends well for one situation may not lend well for another or even make sense. If you are motivated to make remote work work for you, you will find a way to make it happen. And this list may not even describe you at all.

Next up we are going to talk about jobs that lend well to remote work.

Posted On Saturday, July 28, 2012 6:41 AM | Comments (0) | Filed Under [ Personal ProjectManagement ]

Saturday, March 10, 2012

Remote Work: Placeshift and Stay Highly Collaborative Part 1

The biggest complaint most remote workers have in regards to working on a team? Feeling disconnected. The biggest complaint an office has about remote workers? They forget the remote workers are there and don’t always trust what they are doing. Want to learn how to get past both issues?

Hi, my name is Rob and I have a confession to make. I’m a remote worker four days a week. I’m a placeshift remote worker, and yet I am still highly collaborative with my team. “Placeshifting?” you say. “Highly collaborative?” you say. Over the next series of articles I am going to show you how this can be done.

If you are a business and you have not seriously looked into a technology known as Embodied Social Proxies, you are paying opportunity costs. You are losing money. More on that below. This series is for you so pay attention. I will highlight both business benefits and worker benefits.

If you are a worker and you have considered working from home (or just remotely) but you are not quite sure how you would make it work, this series is for you. Or you are already doing remote work and want to learn how to collaborate better.

Two Types of Remote Work

Timeshift – This is when you perform work at different intervals than the mainstream office may perform the work. Many folks have done this kind of work in one respect or another, even when working a regular full time job. If you ever went home and continued working in the evening, you have done what some might consider timeshift remote work. This series is not geared to this type of remote work.

Placeshift – Placeshifting is when you perform work at the same time as everyone else, but at a different location. This is what most people think of when they hear the term remote workers. If you ever have work from home days, you know what it is like to placeshift. This series is geared to this type of remote work.

The terms placeshifting and timeshifting are borrowed from the media industry (television, music, etc.) with respect to devices like DVRs. Not quite clear? When you record a TV show and watch it later, you are timeshifting the show. Timeshifting dates back to the 1970s with VCRs and Betamax, while placeshifting media is a newer concept made possible by devices like the Slingbox. When you use a Slingbox to watch a show from a device like your phone at the same time the show is playing, you are placeshifting. The difference should be clear when you think of placeshifting as same time, different location and timeshifting as different time, location irrelevant.

This same terminology can be applied to remote work. Although I was hoping to coin the remote work types terminology, Anybots and GigaOm beat me to print with their recent article (How and why robots are placeshifting remote workers). At least this means the terminology is sound.

Bottom Line

Placeshifting remote work is not for everyone and not for every type of business work either. Some jobs have physical requirements or security requirements that negate the ability for remote work. Not every person is able to be productive in a setting outside the office (and the converse is also true). The world is not fair, okay? Get over it. If you are someone who can work by yourself and do so well without being easily distracted (read: there are ways to remove distractions in a work from home situation – I’ll touch on those), then it’s possible you have what it takes to be a remote worker.

Business: We Tried Remote Workers Before, It Didn’t Work

This is the argument I hear the most. The biggest problem with this argument is that it is subjective. Remote work itself is subjective/situational. No two remote workers are going to be alike; no two situations are going to be the same. It’s possible you tried remote work with an individual who was not able to work remotely effectively. It’s highly possible you had an employee who moved away and you wanted to keep them, so you allowed them to work remotely. But you may not have set yourself (and the individual) up for success. How much planning and research did you do prior to these remote work situations? How much did you do to enable your remote worker? Did you attempt to manage your remote worker the same way as the centrally located folks? Have you even heard of Embodied Social Proxies prior to reading this?

The awareness I am trying to raise with you is that there are situations for businesses to make it work. And you can benefit hugely from remote workers if you do the proper planning, research and understand guidelines for making it work in your situation.

How Do I Benefit as a Business?

Talent Pool

Here’s a hard pill to swallow – you are limited by your talent pool. If you require people to be onsite for work, you are limited by the area in which you do business. I hate to be the one to inform you, but you are not the most awesome place to work. I’m sorry. No matter how awesome you are there is somewhere else that is more awesome and does x better. It’s a losing battle. Get over it already.

In this day and age, fewer and fewer people will move just to work for you. If you expect the most talented folks in your industry to relocate for you, I have to tell you that 1990 called. I’m sorry to inform you it’s not going to happen in every case. And if it does, it’s borrowed time, because someone else is going to attract them away.

It’s likely the most talented people in your industry will never work for you if you don’t have a remote option available. There are many reasons, but it boils down to where you expect your talent to live.

Happy Workers Are Superfans (and Productive Workers)

This is so huge I can’t even begin to give it the proper amount of attention. You want your workers to be happy. Tom Preston-Werner, cofounder of GitHub, speaks to this in a presentation called Optimizing For Happiness. Please go watch it now. The bottom line is that if you keep your workers happy, they are much less likely to leave your organization. Turnover costs are huge to a company. If you are not making your employees happy, they are talking to others about not working for you. They have their ears open to new opportunities. They are likely looking for other jobs as you read this.

If you think you are making your employees happy, I would ask what metric you use for evaluation. I’ll be the first to tell you that you are not doing enough to keep your employees happy. If you give out raises once a year and they are around 3-5% across the board, you might be doing it wrong. Not every employee is created equal, not every employee performs at the same level. Why would you pay them the same? Why would you give them the same raises?

I’m going to make a bold statement here: Your best people outperform your middle of the line folks by ten times. If you are not paying them ten times as much or even five times as much, you might want to re-evaluate how truly happy you are making your employees. If you are not challenging your employees, you are boring them and they will find something more exciting. If you are not doing x you are likely not making your employees happy. You need better metrics into what makes for happy workers.

Facility Costs

Your facility costs are significantly cheaper when it comes to remote workers. A remote or semi-remote worker takes up a lot less space than a full-time onsite worker. If they come into the office once or twice a week, they will take up some space during that time, but the rest of the week that space could be used by other remote workers when they come into the office. Think of this as space sharing.

Remote workers don’t bring/keep a lot of items in the office. Seriously. Get up and walk around your office. Take a look. Notice how much stuff each worker has surrounding their areas. Notice how much space they take up. Go ask how much it costs for the space of each worker you have in the office per month. If you don’t have this number on hand, you won’t understand what it costs for that worker.

This actually isn’t that hard to calculate if you don’t have it. Just find out the costs of your office space on a monthly basis. Electricity, rent, etc. Now take that number and divide by the number of workers you have on site. This will give you a rough estimate. There are ways to get more accurate estimates, but this is a good start.

For the space of that one onsite worker, you might be able to put 5-10 remote workers in there (if you build and use embodied social proxies which are highly recommended and will be discussed during this series). Imagine that. 5-10 remote workers in that same space. That means for every 10 remote workers you hire, you can only hire one onsite person. Kind of sounds weird to hear it like that, right?

Bigger Staff – More Work In The Pipeline

This is probably the most overlooked opportunity cost when it comes to remote workers. The number of folks you have limits what you can accomplish. When you open up to remote work, you also open up to the fact that you can take on more work. More work, in some terms, means more revenue for your business. This is huge.

Final Thoughts For Businesses

Remote work is not without its challenges, but I can tell you that the benefits far outweigh them. If you’ve tried remote work in the past and it didn’t work out, don’t let that stop you from trying again. If Thomas Edison had quit the first time he failed, he might not have been credited with the invention of the light bulb as we know it! Failure is a step on the road to success. Food for thought.

Remote Work Series

  • Next up I’ll talk about what individuals need to be successful remote workers.
  • Building an Embodied Social Proxy, aka, the Remote Portal for a practical cost
  • Possibly other follow ups to come

Posted On Saturday, March 10, 2012 8:39 AM | Comments (2) |

Powered by: