Making a Case For The Command Line

I have had an idea percolating in the back of my mind for over a year now that I’ve just recently started to implement. This idea relates to building out “internal tools” to ease the maintenance and ongoing support of a software system. The system that I currently work on is (mostly) web-based, so traditionally we have built these internal tools in the form of pages within the app that are only accessible by our developers and support personnel. These pages allow us to perform tasks within the system that, for one reason or another, we don’t want to let our end users perform (e.g. mass create/update/delete operations on data, flipping switches that turn paid modules of the system on or off, etc.). When we try to build new tools like this we often struggle with the level of effort required to build them.

Effort Required

Creating a whole new page in an existing web application can be a fairly large undertaking. You need to create the page and ensure it has a layout that is consistent with the other pages in the app. You need to decide what types of input controls need to go onto the page. You need to ensure that everything uses the same style as the rest of the site. You need to figure out what the text on the page should say. Then, when you figure out that you forgot about an input that should really be present, you might have to go back and re-work the entire thing. Oh, and in addition to all of that, you still have to, you know, write the code that actually performs the task. Everything other than the code that performs the task at hand is just overhead. We don’t need a fancy date picker control in a nicely styled page for the vast majority of our internal tools. We don’t even really need a page, for that matter. We just need a way to issue a command to the application and have it, in turn, execute the code that we’ve written to accomplish a given task. All we really need is a simple console application!

Plumbing Problems

A former co-worker of mine, John Sonmez, always advocated the Unix philosophy for building internal tools: start with something that runs at the command line, and then build a UI on top of that if you need to. John’s idea has a lot of merit, and we tried building out some internal tools as simple Console applications. Unfortunately, this was often easier said than done. Doing a “File –> New Project” to build out a tool for a mature system can be pretty daunting because that new project is totally empty. In our case, the web application code had a lot of “plumbing” built in: it managed authentication and authorization, it handled database connection management for our multi-tenanted architecture, and it managed all of the context that needs to follow a user around the application, such as their timezone and regional/language settings. In addition, the configuration file for the web application (a web.config in our case because this is an ASP .NET application) is large and would need to be reproduced into a similar configuration file for a Console application. While most of these problems could be solved pretty easily with some refactoring of the codebase, building Console applications for internal tools still potentially suffers from one pretty big drawback: you’d have to execute them on a machine with network access to all of the needed resources. Obviously, our web servers can easily communicate with the database servers and can publish messages to our service bus, but the same is not true for all of our developer and support personnel workstations. We could have everyone run these tools remotely via RDP or SSH, but that’s a bit cumbersome and certainly a lot less convenient than having the tools built into the web application that is so easily accessible.

Mix and Match

So we need a way to build tools that are easily accessible via the web application but also don’t require the overhead of creating a user interface. This is where my idea comes into play: why not just build a command line interface into the web application? If it’s part of the web application we get all of the plumbing that comes along with that code, and we’re executing everything on the web servers which means we’ll have access to any external resources that we might need. Rather than having to incur the overhead of creating a brand new page for each tool that we want to build, we can create one new page that simply accepts a command in text form and executes it as a request on the web server. In this way, we can focus on writing the code to accomplish the task. If the tool ends up being heavily used, then (and only then) should we consider spending the time to build a better user experience around it. To be clear, I’m not trying to downplay the importance of building great user experiences into your system; we should all strive to provide the best UX possible to our end users. I’m only advocating this sort of bare-bones interface for internal consumption by the technical staff that builds and supports the software. This command line interface should be the “back end” to a highly polished and eye-pleasing public face.


As I mentioned at the beginning of this post, this is an idea that I’ve had for a while but have only recently started building out. I’ve outlined some general guidelines and design goals for this effort as follows:

  1. Text in, text out: In the interest of keeping things as simple as possible, I want this interface to be purely text-based. Users will submit commands as plain text, and the application will provide responses in plain text. Obviously this text will be “wrapped” within the context of HTTP requests and responses, but I don’t want to have to think about HTML or CSS when taking input from the user or displaying responses back to the user.
  2. Task-oriented code only: After building the initial “harness” for this interface, the only code that should need to be written to create a new internal tool should be code that is expressly needed to accomplish the task that the tool is intended to support. If we want to encourage and enable ourselves to build good tooling, we need to lower the barriers to entry as much as possible.
  3. Built-in documentation: One of the great things about most command line utilities is the ‘help’ switch that provides usage guidelines and details about the arguments that the utility accepts. Our web-based command line utility should allow us to build the documentation for these tools directly into the code of the tools themselves.
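To make that third guideline a bit more concrete, here’s a rough reflection-based sketch of how help text could live in attributes on the tool methods themselves. This is just my own illustration of the idea, not any particular library’s API; the `ToolHelp` attribute and `HelpGenerator` names are hypothetical:

```csharp
using System;
using System.Linq;
using System.Reflection;
using System.Text;

// Hypothetical attribute for describing a tool method directly in code.
[AttributeUsage(AttributeTargets.Method)]
public class ToolHelpAttribute : Attribute
{
    public string Description { get; private set; }
    public ToolHelpAttribute(string description) { Description = description; }
}

public class CustomerTools
{
    [ToolHelp("Updates the first and last name of a customer.")]
    public void UpdateName(int customerId, string firstName, string lastName) { }
}

public static class HelpGenerator
{
    // Scan a tool class and emit a plain-text usage listing: one line per
    // documented method showing its name, parameters, and description.
    public static string Describe(Type toolType)
    {
        var sb = new StringBuilder();
        foreach (MethodInfo method in toolType.GetMethods(
            BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly))
        {
            var help = method.GetCustomAttribute<ToolHelpAttribute>();
            if (help == null) continue;
            string parameters = string.Join(" ",
                method.GetParameters().Select(p => "-" + p.Name));
            sb.AppendLine(method.Name + " " + parameters + " : " + help.Description);
        }
        return sb.ToString();
    }
}
```

With something like this in place, a ‘help’ command in the web interface could just call `HelpGenerator.Describe(typeof(CustomerTools))` and return the resulting text, so the documentation can never drift away from the code.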

I finally started trying to implement this idea when I heard about a fantastic open-source library called CLAP (Command Line Auto Parser) that lets me meet the guidelines outlined above. CLAP lets you define classes with public methods that can be easily invoked from the command line. Here’s a quick example of the code that would be needed to create a new tool to do something within your system:

public class CustomerTools
{
  [Verb]
  public void UpdateName(int customerId, string firstName, string lastName)
  {
     //invoke internal services/domain objects/whatever to perform the update
  }
}

This is just a regular class with a single public method (though you could have as many methods as you want). The method is decorated with the ‘Verb’ attribute that tells the CLAP library that it is a method that can be invoked from the command line. Here is how you would invoke that code:

Parser.Run(args, new CustomerTools());

Note that ‘args’ is just a string[] that would normally be passed in from the static Main method of a Console application. Also, CLAP allows you to pass in multiple classes that define [Verb] methods, so you can organize the code that CLAP will invoke in any way that you like. You can invoke this code from a command line application like this:

SomeExe UpdateName -customerId:123 -firstName:Jesse -lastName:Taber

‘SomeExe’ in this example just represents the name of the .exe that would be created from our Console application. CLAP then interprets the arguments passed in to find the method that should be invoked and automatically parses out the parameters that need to be passed in.

After a quick spike, I’ve found that invoking the ‘Parser’ class can be done from within the context of a web application just as easily as it can from within the ‘Main’ method entry point of a Console application. There are, however, a few sticking points that I’m working around:

  1. Splitting arguments into the ‘args’ array like the command line: When you invoke a standard .NET console application you get the arguments that were passed in by the user split into a handy array (this is the ‘args’ parameter referenced above). Generally speaking, arguments are split on whitespace, but the runtime is also clever enough to treat a phrase surrounded by quotes as a single argument, even if it contains whitespace. We’ll need to re-create this logic within our web application so that we can give the ‘args’ value to CLAP just like a console application would.
  2. Providing a response to the user: If you were writing a console application, you might just use Console.WriteLine to provide responses to the user as to the progress and eventual outcome of the command. We can’t use Console.WriteLine within a web application, so I’ll need to find another way to provide feedback to the user. Preferably this approach would allow me to use the same handler classes from both a Console application and a web application, so some kind of strategy pattern will likely emerge from this effort.
  3. Submitting files: Often an internal tool needs to support doing some kind of operation in bulk, and the easiest way to submit the data needed to support the bulk operation is in a file. Getting the file uploaded and available to the CLAP handler classes will take a little bit of effort.
  4. Mimicking the console experience: This isn’t really a requirement so much as a “nice to have”. To start out, the command-line interface in the web application will probably be a single ‘textarea’ control with a button to submit the contents to a handler that will pass it along to CLAP to be parsed and run. I think it would be interesting to use some JavaScript and CSS trickery to change that page into something with more of a “shell” interface look and feel.
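As a rough sketch of the first sticking point above, the quote-aware splitting can be approximated in a few lines. The `CommandTokenizer` name is my own, and this deliberately ignores edge cases like escaped quotes:

```csharp
using System.Collections.Generic;
using System.Text;

public static class CommandTokenizer
{
    // Splits a raw command string into arguments the way the console would:
    // whitespace separates tokens, but double quotes group a phrase
    // (including any spaces inside it) into a single token.
    public static string[] Split(string commandText)
    {
        var args = new List<string>();
        var current = new StringBuilder();
        bool inQuotes = false;

        foreach (char c in commandText)
        {
            if (c == '"')
            {
                inQuotes = !inQuotes; // toggle quoted mode; quotes themselves are stripped
            }
            else if (char.IsWhiteSpace(c) && !inQuotes)
            {
                if (current.Length > 0)
                {
                    args.Add(current.ToString());
                    current.Clear();
                }
            }
            else
            {
                current.Append(c);
            }
        }

        if (current.Length > 0)
            args.Add(current.ToString());

        return args.ToArray();
    }
}
```

With this in place, the text submitted from the web page can be turned into the same ‘args’ array a Console application would receive, e.g. `CommandTokenizer.Split("UpdateName -customerId:123 -lastName:\"Van Damme\"")` yields three tokens with the quoted phrase kept intact.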
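For the second sticking point, the kind of strategy I have in mind looks something like this. The interface and class names here are hypothetical sketches, not finished code: the tool code writes to an abstraction, and each host supplies its own implementation.

```csharp
using System;
using System.Text;

// A minimal output abstraction so the same tool code can report progress
// whether it is hosted in a Console application or a web application.
public interface IOutputWriter
{
    void WriteLine(string message);
}

// Console host: delegate straight to Console.WriteLine.
public class ConsoleOutputWriter : IOutputWriter
{
    public void WriteLine(string message)
    {
        Console.WriteLine(message);
    }
}

// Web host: buffer everything so it can be returned as the plain-text
// body of the HTTP response once the command finishes.
public class BufferedOutputWriter : IOutputWriter
{
    private readonly StringBuilder _buffer = new StringBuilder();

    public void WriteLine(string message)
    {
        _buffer.AppendLine(message);
    }

    public override string ToString()
    {
        return _buffer.ToString();
    }
}
```

The handler classes would take an `IOutputWriter` rather than calling Console.WriteLine directly, which keeps them reusable from both hosts.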

I’ll be blogging more about this effort in the future and will include some code snippets (or maybe even a full blown example app) as I progress. I also think that I’ll probably end up either submitting some pull requests to the CLAP project or possibly forking/wrapping it into a more web-friendly package and open sourcing that.

posted on Sunday, June 30, 2013 4:03 PM
# re: Making a Case For The Command Line
Vladimir Kocjancic
7/1/2013 1:38 PM
Why don't you create a WCF service for administration on the server, and deploy a client that connects to it to the development team?
# re: Making a Case For The Command Line
7/2/2013 4:45 AM
Funny, I have a different outlook. I prefer to build all my services as libraries. I frequently have small console apps to test these, but I can also have test suites, server processes and GUI apps all utilise the same library.

I believe most software should be constructed this way. That way if you want to support a new UI/service model, the core component should be available for reuse.
# re: Making a Case For The Command Line
7/2/2013 6:12 AM
So instead of using an existing tool (like SSH) for piping commands one way and their results back the other way, you've written a web page to do it?

A better alternative might be to provide a page with some sort of bootstrapper link which fires up a pre-authenticated SSH session. I bet such a tool already exists.
# re: Making a Case For The Command Line
Jeff Williams
7/2/2013 8:01 AM
I would agree with RobG. Make your service as a library. Then wrap that in a command line client to test or make a functional command line interface. Then you can wrap it in a UI or web based frontend, knowing that the core functionality will work and is reusable.
# re: Making a Case For The Command Line
7/2/2013 9:32 AM
@Vladimir: You could certainly create a WCF service to host and run the code needed for these internal tools, but I think that's still more overhead and work than a simple command-line based approach. Also, letting users execute the text commands from within a web application that they already are very familiar with is convenient.
# re: Making a Case For The Command Line
7/2/2013 9:37 AM
@RobG and Jeff: You have a good point that I think shows an aspect of this approach that I didn't call out very well in the post. What I'm advocating really isn't all that far off from what you guys are saying.
There's no reason that the code needed to actually perform the tasks that these internal tools support can't live in a library that is separate from the web application. In fact, that's exactly how I plan on building this. The library could be referenced by the web application and expose a single point through which you submit the commands to be parsed and executed. You could end up putting any kind of UI over the top of that separate library. To start out, I'm just using a simple page in a web application because it's convenient and all of the plumbing that I need is already there.
# re: Making a Case For The Command Line
7/2/2013 10:10 AM
I have been working on a similar approach. I did write my own command line parser/dispatcher server side. I was also able to use the same sub system client side to provide various useful services. Here are a few notes/thoughts:

- I am using webapi/MVC. This allows my Javascript calls to map to the same code that my CLI is using.
- There is an open source web based CLI on GitHub. Needs work but was a great starting point. I was actually able to port my C# dispatching arch to the web and run a CLI from a web page. This CLI can not only send commands back to the server, but can also be used to interact with the local client (local storage, etc.).
- The command line vector is just a Controller so I was easily able to use the same authorization scheme.
- My command 'controllers' and command vector methods are decorated with attributes. This allows a startup scanner to identify and register the functionality with the dispatcher.
- Because of WebApi's self hosting capability, I was able to use the same architecture in my service as well (authentication was a little tricky, IIRC). So I can use the same client to connect to my web server or my windows service. Both servers are using a common user store.

All in all, I am a big believer in having a CLI. Once you have it in place, you will continually find use cases for it. Whether it be utility apps, testing, config...
# re: Making a Case For The Command Line
Dan Sutton
7/2/2013 12:45 PM
I find myself agreeing with RobG's approach of building libraries: it seems to me that writing stuff which runs at the command line and then patching it together is a method of deliberately keeping one's coding style in some way tied to obsolete platforms. I have no doubt that it works... and one hopes that the coffee machine works too, because I can envision needing to go and make coffee while waiting for the resultant programs to run... OK, OK - I'm being facetious... but in reality, what happens if you write command-line oriented stuff is that ultimately, you end up forgetting what any of the commands actually do, and if you write enough of this stuff, it becomes unmaintainable at a certain size.
# re: Making a Case For The Command Line
John Q
7/2/2013 2:23 PM
Great idea, this has worked for me. We just did a CRUD app using C# LINQ with initial prototyping in web pages, then as the database grew in complexity we moved the same code around into utility libraries and a console app evolved. One other benefit of this approach: We always use the utility app to do the bulk changes into the db, and can see the log file (who did what when). Every time we needed to do an operation, we quickly added a new command-line switch to handle it. The code was then readily available if we ever needed to do that again. Sure, lots of switches, but if we forget what "-updatesystemstats:true" means, we still know how to read the code and figure it out.
# re: Making a Case For The Command Line
7/2/2013 2:55 PM
If you haven't already thought of it, you might want to include a way to upload a sequence of commands from a file. The risk is that as soon as you have that, you may find you also want conditional execution and all the other features of a full scripting language. Implementing those features can become an entire project in and of itself.
# re: Making a Case For The Command Line
7/29/2013 10:45 AM
Another alternative that I like for cases like this is to write small PowerShell cmdlets.
You get options / flags / command line handling / doco almost for free (once you learn the PS way).
Then from a PowerShell command prompt, the user types Import-Module yourCmdLet.dll and then:
Update-Customer -customerId 123 -firstname Jesse -lastname Taber

[Cmdlet(VerbsData.Update, "Customer")]
public class UpdateCustomerCmd : PSCmdlet
{
    [Parameter] public string CustomerId { get; set; }
    [Parameter] public string FirstName { get; set; }

    protected override void ProcessRecord() { /* do the update */ }
}
