Published: 2019-01-10 03:13 |
Category: Projects | Tags: arbtt, arbtt-stats, breakdown, curiosity, documentation, how to, productivity, time tracking
Tom Woodward has a semi-regular weekly roundup blog post with links to interesting things he finds on the Internet. A couple of weeks back, he posted a link to something called arbtt (pronounced "ar-bit"? that's what I say, anyway), which is actually an acronym for "automatic, rule-based time tracker." In short, it runs in the background, samples all the open programs on your computer once per minute, and writes that information to a log.
It's super, super geeky. Like, seriously. I've used todo-txt for almost two years now, and I recently started tracking how long I work on a given task so I can see where my time actually goes. The catch is that I have to remember to turn the tracker on. arbtt just runs in the background. The data it collects is standardized, so you can write different queries and pull information out in very, very granular ways.
It was a real pain to get set up. After two days of fussing on and off, I have it running well on my computer. The documentation for Mac OS is really lacking, so here's what I did for my machine.
(This is fairly technical, so jump to the bottom if you want to see what it does once it's running.)
Installing arbtt on a Mac wasn't too bad. The install instructions on the project page worked fine, particularly because I already had Homebrew set up. I'm not going to rehash those steps here; go read them there.
Getting the thing to run was a different story. arbtt installs itself at the User level on Mac OS in a hidden .arbtt directory. This holds the configuration file and the actual capture.log file.
The actual executables are in the .cabal directory (also under /Users/yourname) because they're managed by the package manager. The documentation says to go into .cabal/bin and run arbtt-capture in order to start capturing data with the daemon.
Well, that didn't work.
The files in .cabal/bin are symlinks to the actual executables, and from what I can gather, Mac OS doesn't like that. At all. So, to run the programs, you have to call the absolute path to the actual binaries, which live in .cabal/store/ghc-8.4.4/arbtt-0.10.1-*/bin. I don't know enough about package managers to say why they're buried that deep. I ended up creating aliases in my terminal so I can invoke them with one word.
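An alternative to aliases is to put the real bin directory on your PATH from ~/.bash_profile. Here's a rough sketch; the GHC version and the hash after the package name vary per machine, so it globs for them rather than hard-coding a path:

```shell
# Sketch: find the real arbtt binaries under ~/.cabal/store.
# The ghc-* version and the arbtt-* hash suffix differ per install,
# so glob and take the first match rather than hard-coding.
ARBTT_BIN=$(ls -d "$HOME"/.cabal/store/ghc-*/arbtt-*/bin 2>/dev/null | head -n 1)
if [ -n "$ARBTT_BIN" ]; then
  export PATH="$ARBTT_BIN:$PATH"
fi
```

After sourcing the profile, arbtt-capture and arbtt-stats should resolve without typing the full buried path.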
Because arbtt records everything indiscriminately, you can start collecting before you know what you want to ask of the data, which is pretty cool. The query rules use a Haskell-like syntax, which I don't know, so mine are a little hacky right now. In my playing around, there are two main building blocks:
- Target the current window with $program.
- Target the current window title with $title.
You can combine conditions with standard boolean operators to get specific information. Each rule pushes its matches to a tag, written in Category:identifier syntax. A rule I'm using to watch my Twitter use is:
```
current window $program == "Vivaldi" && current window $title =~ [m!TweetDeck!, m!Twitter!] ==> tag Social:twitter
```
So, it checks both the program (I use a browser called Vivaldi) and the window title before pushing (==>) to the Social:twitter tag. Mine are all really rudimentary right now, but you can sort and filter by dates, titles, even directory locations if you're working in the file system. Since the underlying data never changes, you can write new rules at any time and re-parse everything with arbtt-stats (next).
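To give a flavor, here's a sketch of what a few rules in ~/.arbtt/categorize.cfg might look like. The program names and tags besides the Twitter one are made up for illustration; check them against your own capture data before trusting them:

```
-- Sketch of ~/.arbtt/categorize.cfg; rules end with a comma.
-- Tag samples where the machine sat idle (the 60 is seconds; tune it):
$idle > 60 ==> tag inactive,

-- The Twitter rule from above:
current window $program == "Vivaldi" && current window $title =~ [m!TweetDeck!, m!Twitter!] ==> tag Social:twitter,

-- Hypothetical work buckets, keyed on whatever app is focused:
current window $program == "TextEdit" ==> tag Work:writing,
current window $program == "Terminal" ==> tag Work:terminal,
```

Since every rule runs against every sample, a sample can pick up several tags at once, which is what makes the later filtering useful.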
The main capture daemon just runs in the background and collects data. The workhorse is arbtt-stats, which parses all of the raw data. Run arbtt-stats on the command line and you get a full report of all matched rules, or you can query a specific tag with the -c flag. So, executing arbtt-stats -c Social:twitter would return only the time recorded while Twitter was the focused window.
This all happens in the terminal, which isn't super helpful, especially once you have a lot of results.
Filtering down by tag with -c is helpful, but it would also be nice to turn this into something graphical. That's where arbtt-graph comes in. It's a Python-based parser that generates a website from arbtt's stats, with a page for each day as well as a four-month aggregate.
The biggest problem I had with arbtt-graph was that the Python scripts aren't happy with relative file locations. I had to edit them with absolute paths so they could read and write all of the necessary files. It's a fun little helper on my computer, and if I were insane, I might investigate putting it online somewhere with a cron job, but that's for another day.