
Have You Considered Load-bearing Shell History?

  • 21 July, 2022
  • 1,419 words
  • seven-minute read time

I have poor command-line hygiene. When I fever-dream a useful pipeline into existence, I very seldom commit it to a shell configuration somewhere. Rather, I let fzf dredge it up sometime later when I need it. My functions are not documented, use hideously short names and variables because I tend to code golf, and are specific to the use case at the time rather than general.

I’ve since decided that this is the optimal approach.

Take That Statement Back, You Deviant

No! I actually believe this. I have reasons.

Shell history is a profoundly under-appreciated mechanism in our profession. Consider: if you retain all history and express all of your operational tasks in the form of terminal commands, the entire history of your work (apart from coding artifacts) sits within a few seconds’ reach. How do you create x509 certs with openssl? What’s the easiest way to check for Spectre across a fleet of hosts? If you can recall a few substrings from the commands that answered the question, a history search brings that lost memory back to your fingertips, ready to be re-used immediately.
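For that to work, the shell has to actually retain everything. As a sketch of what “retain all history” can mean in practice (assuming bash; zsh has equivalent options), a .bashrc fragment might look like:

```shell
# Sketch of a .bashrc fragment for unbounded, always-flushed history.
# Assumes bash; these are all bash built-in variables and options.
HISTSIZE=-1                   # keep every command in memory
HISTFILESIZE=-1               # never truncate ~/.bash_history
HISTTIMEFORMAT='%F %T '       # timestamp entries for later archaeology
shopt -s histappend           # append on exit instead of overwriting
PROMPT_COMMAND='history -a'   # flush each command to the file as it runs
```

With `history -a` in PROMPT_COMMAND, every command lands in the file as soon as it runs, so nothing is lost to a crashed terminal.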

Imagine how another profession might value this kind of instant, infinitely-retained, zero-effort-to-record recall that translates into an immediately-usable action. A lawyer submitting a particular type of legal document? A teacher scoring a set of tests? You can probably think of more, but my point is that shell history, applied to white-collar software engineering work, holds tremendous potential.

Here’s an actual example taken from my own work. Your use case may be different, but one situation that I run into sometimes is unsealing a Vault instance for a host that I’ve brought down for something like a kernel update. The feedback loop amounts to finding which hosts report as sealed, then feeding the unseal key to each of them.
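A sketch of that loop (hostnames are made up, the seal-status call is mocked, and the unseal command is only echoed, so the shape is visible without a live Vault cluster):

```shell
# Hypothetical sketch -- none of these hostnames come from the post.
seal_status() {
  # Stand-in for: curl -s "https://$1:8200/v1/sys/seal-status"
  case "$1" in
    vault-2) echo '{"sealed": true}'  ;;
    *)       echo '{"sealed": false}' ;;
  esac
}

for h in vault-1 vault-2 vault-3; do
  if seal_status "$h" | grep -q '"sealed": true'; then
    echo "would run: VAULT_ADDR=https://$h:8200 vault operator unseal \$UNSEAL_KEY"
  fi
done
```

Swap the mock for the real curl call and drop the dry-run echo, and this is roughly the shape of the real thing.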

Done as a live one-liner, that command is brusque and sort of obtuse. It’s not hard to see how you could clean it up and place it into a neatly-codified shell function or script: a variable for my Vault unseal key, a function to extract the list of sealed hosts, and so on.
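Codified, that might come out something like this sketch (the function and variable names are mine, not the author’s, and the port and key variable are assumptions):

```shell
# Hypothetical "proper" version, the kind you'd commit to a dotfiles repo.
VAULT_UNSEAL_KEY="${VAULT_UNSEAL_KEY:-changeme}"

# Print the name of every host (passed as arguments) that reports sealed.
sealed_vault_hosts() {
  for h in "$@"; do
    curl -s "https://$h:8200/v1/sys/seal-status" \
      | grep -q '"sealed": *true' && echo "$h"
  done
}

# Unseal each sealed host in turn.
unseal_all() {
  for h in $(sealed_vault_hosts "$@"); do
    VAULT_ADDR="https://$h:8200" vault operator unseal "$VAULT_UNSEAL_KEY"
  done
}
```

With that in the dotfiles, the whole dance collapses to `unseal_all vault-1 vault-2 vault-3`, which is exactly the codification overhead the rest of this post argues usually isn’t worth paying.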

But I do this kind of short-scripting a lot. And the product of “the number of little pipelines I write” times “the time it takes to turn each one into a good shell script, put it in the right file, and commit it to my dotfiles repo” would be really non-trivial. So I just skip it!

I used to judge myself for this. But over the course of many years, the number of times I’ve kicked myself for having a pipeline only in my shell history rather than recorded in a script somewhere has been very small to non-existent.

I did have one large regret: not taking my history with me across workstations. But did you know there are tools specifically designed to carry your shell history with you? That’s no longer a big problem!

So what happens when you wrap up large and sometimes-complicated actions within your shell history, do it often, and let your .shell_history grow into a beautiful, terrifying garden?

You can get so much done, and the overhead to both a) record those shortcuts and b) use them is essentially frictionless. With the power of something like fzf at your fingertips, the entire history of your professional career in command-line work becomes available at a moment’s notice. You aren’t constrained to what you felt was worth overcoming any barrier to recording it for posterity. Everything is there, and you didn’t have to think about documenting any of it.

As an example: I haven’t worked directly with low-level S3 metrics in a long time. But I do recall needing to aggregate total traffic through an S3 bucket in the past. I can find the command to answer that question for you in about two seconds by typing Ctrl-r aws s3 statistics <RET>:

today="$(date +%s)"
for bucket in $(aws s3 ls | awk '{ print $NF; }'); do
  echo -n "$bucket "
  aws cloudwatch get-metric-statistics \
    --namespace AWS/S3 \
    --start-time "$(echo "$today - 86400" | bc)" --end-time "$today" \
    --period 86400 \
    --statistics Average \
    --region us-east-1 \
    --metric-name BucketSizeBytes \
    --dimensions Name=BucketName,Value="$bucket" Name=StorageType,Value=StandardStorage \
      | jq '.Datapoints[].Average' \
      | gnumfmt --to=si
done

This probably can’t be used 100% as written for a similar task a few years later, but smash that Ctrl-x Ctrl-e, give it a quick edit, and move on. Not too hard, and you can see why committing this to something very formal like a shell function just isn’t worth the overhead. It was specific then, it needs to be adapted now, and using it in this context is a very good match for “take it from my history and tweak it”.1



  1. I’ll acknowledge that examples like this break down a bit in a team situation. Although I can punch up our S3 traffic in one second, how do you disperse that knowledge and capability to others? Big repositories of docs and scripts can solve this, but my purpose in this post is to drive home that low friction and a low barrier to entry are what make the history shine. Maybe there’s a good way to make that and team sharing work in tandem.

  2. There’s another post buried here about “command-line first work”, but suffice it to emphasize that I think there are great benefits to be had by consolidating as much work as you can into command-line form. Easily-recalled actions in the form of shell history are one benefit alongside many others like interoperability, repeatability, and more.